lerot.evaluation

class lerot.evaluation.AsRbpEval(alpha=10, beta=0.8)[source]

Bases: lerot.evaluation.AbstractEval.AbstractEval

Compute the AS_RBP metric as described in [1].

[1] Zhou, K. et al. 2012. Evaluating aggregated search pages. In Proceedings of SIGIR 2012.

get_value(ranking, labels, orientations, cutoff=-1)[source]
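The beta argument matches the usual rank-biased precision (RBP) persistence parameter (what alpha controls is not stated here). As a rough, hypothetical sketch of the RBP backbone only, not of lerot's AS_RBP implementation, which folds in the vertical-orientation weighting defined in [1]:

    def rbp_like(ranked_labels, beta=0.8):
        # Plain rank-biased precision: (1 - beta) * sum over ranks of beta**rank * rel.
        # AS_RBP in [1] additionally weights each contribution by vertical orientation.
        return (1 - beta) * sum(beta ** rank * rel
                                for rank, rel in enumerate(ranked_labels))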
class lerot.evaluation.DcgEval[source]

Bases: lerot.evaluation.AbstractEval.AbstractEval

Compute DCG (with gain = 2**rel-1 and log2 discount).

evaluate_ranking(ranking, query, cutoff=-1)[source]

Compute DCG for the provided ranking. The ranking is expected to contain document ids in rank order.

get_dcg(ranked_labels, cutoff=-1)[source]

Get the DCG value of a ranked list of labels. Does not check whether the number of ranked labels is smaller than the cutoff.
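A minimal stand-alone sketch of this computation, assuming a negative cutoff means "no cutoff" and a 1 / log2(rank + 2) discount for 0-based ranks (lerot's handling of the first rank may differ in detail):

    import math

    def dcg(ranked_labels, cutoff=-1):
        # Gain 2**rel - 1, discount 1 / log2(rank + 2) with rank counted from 0.
        if cutoff >= 0:
            ranked_labels = ranked_labels[:cutoff]
        return sum((2 ** rel - 1) / math.log(rank + 2, 2)
                   for rank, rel in enumerate(ranked_labels))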

get_value(ranking, labels, orientations, cutoff=-1)[source]

Compute the value of the metric.
  • ranking contains the list of documents to evaluate.
  • labels are the relevance labels for all the documents, even those that are not in the ranking; labels[doc.get_id()] is the relevance of doc.
  • orientations contains orientation values for the verticals; orientations[doc.get_type()] is the orientation value for the doc (from 0 to 1).
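In other words, relevance and orientation are resolved through the document objects rather than read positionally from the ranking. A small self-contained illustration (the Doc stub and the concrete values are invented for this example):

    class Doc:  # minimal stand-in for a lerot document object
        def __init__(self, docid, vertical):
            self._id, self._type = docid, vertical
        def get_id(self):
            return self._id
        def get_type(self):
            return self._type

    ranking = [Doc(0, "web"), Doc(2, "image"), Doc(1, "web")]
    labels = {0: 2, 1: 0, 2: 1}                # labels[doc.get_id()]
    orientations = {"web": 1.0, "image": 0.4}  # orientations[doc.get_type()]

    # What a DCG-style metric ultimately scores: the labels in rank order.
    ranked_labels = [labels[doc.get_id()] for doc in ranking]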
class lerot.evaluation.NdcgEval[source]

Bases: lerot.evaluation.DcgEval.DcgEval

Compute NDCG (with gain = 2**rel-1 and log2 discount).

evaluate_ranking(ranking, query, cutoff=-1)[source]

Compute NDCG for the provided ranking. The ranking is expected to contain document ids in rank order.

get_value(ranking, labels, orientations, cutoff=-1)[source]
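NDCG is DCG normalized by the DCG of the ideal (label-sorted) ranking, so its value lies between 0 and 1. A minimal sketch, reusing the dcg helper sketched under DcgEval.get_dcg above:

    def ndcg(ranked_labels, cutoff=-1):
        # Normalize by the DCG of the best possible ordering of the same labels.
        ideal = dcg(sorted(ranked_labels, reverse=True), cutoff)
        return dcg(ranked_labels, cutoff) / ideal if ideal > 0 else 0.0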
class lerot.evaluation.LetorNdcgEval[source]

Bases: lerot.evaluation.NdcgEval.NdcgEval

Compute NDCG as implemented in the Letor toolkit.

get_dcg(labels, cutoff=-1)[source]
class lerot.evaluation.VSEval[source]

Bases: lerot.evaluation.AbstractEval.AbstractEval

Simple vertical selection (VS) metric, a.k.a. prec_v.

get_value(ranking, labels, orientations, cutoff=-1)[source]
class lerot.evaluation.VDEval[source]

Bases: lerot.evaluation.AbstractEval.AbstractEval

Simple vertical diversity (VD) metric, a.k.a. rec_v.

get_value(ranking, labels, orientations, cutoff=-1)[source]
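Read as the standard set-based definitions (an interpretation, not necessarily lerot's exact formulation), prec_v and rec_v are precision and recall over the verticals shown on the page versus the verticals judged relevant:

    def prec_rec_v(shown_verticals, relevant_verticals):
        # Set-based vertical precision (prec_v) and vertical recall (rec_v).
        shown, relevant = set(shown_verticals), set(relevant_verticals)
        hit = len(shown & relevant)
        prec_v = hit / len(shown) if shown else 0.0
        rec_v = hit / len(relevant) if relevant else 0.0
        return prec_v, rec_v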
class lerot.evaluation.ISEval[source]

Bases: lerot.evaluation.AbstractEval.AbstractEval

Simple item selection (IS) metric, a.k.a. mean-prec.

get_value(ranking, labels, orientations, cutoff=-1)[source]
class lerot.evaluation.RPEval[source]

Bases: lerot.evaluation.AbstractEval.AbstractEval

Simple result presentation (RP) metric, a.k.a. corr.

get_value(ranking, labels, orientations, cutoff=-1, ideal_ranking=None)[source]
class lerot.evaluation.LivingLabsEval[source]
get_performance()[source]
get_win()[source]
update_score(wins)[source]
class lerot.evaluation.PAKEval[source]

Bases: lerot.evaluation.AbstractEval.AbstractEval

Precision at k evaluation: the fraction of relevant documents among the top k positions of the ranking.

evaluate_ranking(ranking, query, cutoff=-1)[source]
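Precision at k has the usual definition: the number of relevant documents among the top k positions, divided by k. A minimal sketch, assuming graded labels where any label greater than zero counts as relevant:

    def precision_at_k(ranked_labels, k):
        # Fraction of the top-k documents that are relevant (label > 0).
        top = ranked_labels[:k]
        return sum(1 for rel in top if rel > 0) / k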