lerot.experiment

class lerot.experiment.GenericExperiment(args_str=None)
run()
run_experiment(aux_log_fh)
class lerot.experiment.LearningExperiment(training_queries, test_queries, feature_count, log_fh, args)

Bases: lerot.experiment.AbstractLearningExperiment.AbstractLearningExperiment

Represents an experiment in which a retrieval system learns from implicit user feedback. The experiment is initialized as specified by the provided arguments or configuration file.

run()

A single run of the experiment.
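
Concretely, such a run loops over sampled queries, presents a ranking, observes implicit feedback, and updates the ranker. The sketch below is a hypothetical stand-in to show the shape of that loop; the query sampling, click simulation, and update rule are toy placeholders, not lerot's actual implementations:

```python
import random

def run_learning_sketch(num_queries, seed=42):
    """Toy sketch of one learning run: sample a query, observe simulated
    clicks, and nudge a scalar 'ranker weight' toward the feedback."""
    rng = random.Random(seed)
    weight = 0.0            # stand-in for the ranker's parameter vector
    offline_evals = []
    for _ in range(num_queries):
        # Stand-in for sampling a training query and producing a ranking.
        relevance = [rng.random() for _ in range(10)]
        # Simulated implicit feedback: click each sufficiently relevant doc.
        clicks = [r > 0.5 for r in relevance]
        # Toy update: move the weight toward the observed click rate.
        weight += 0.1 * (sum(clicks) / len(clicks) - weight)
        offline_evals.append(weight)
    return offline_evals
```

The returned list plays the role of the per-query evaluation trace a real run would write to its log file handle.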

class lerot.experiment.MetaExperiment
apply(conf)
finish_analytics()
run_celery()
run_conf()
run_local()
store(conf, r)
update_analytics()
update_analytics_file(log_file)
class lerot.experiment.PrudentLearningExperiment(training_queries, test_queries, feature_count, log_fh, args)

Bases: lerot.experiment.AbstractLearningExperiment.AbstractLearningExperiment

Represents an experiment in which a retrieval system learns from implicit user feedback. The experiment is initialized as specified by the provided arguments or configuration file.

run()

Run the experiment num_runs times.

class lerot.experiment.HistoricalComparisonExperiment(queries, feature_count, log_fh, args)

Represents an experiment in which rankers are compared using interleaved comparisons with live and historic click data.

run()

Run the experiment for num_queries queries.
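
Lerot supports several interleaving methods; as one concrete illustration of how a live comparison step works, here is a minimal team-draft interleaving sketch. This is a standard method shown purely as an example, not necessarily the configuration this class uses, and it assumes both rankings cover the same document set:

```python
import random

def team_draft_interleave(ranking_a, ranking_b, rng=None):
    """Team-draft interleaving over two rankings of the same document set.
    Returns the interleaved list and, per position, which ranker (0 or 1)
    contributed that document."""
    rng = rng or random.Random(0)
    rankings = (ranking_a, ranking_b)
    interleaved, teams, seen = [], [], set()
    count = [0, 0]
    total = len(set(ranking_a) | set(ranking_b))
    while len(interleaved) < total:
        # The ranker with fewer contributions picks next; ties break randomly.
        if count[0] != count[1]:
            team = 0 if count[0] < count[1] else 1
        else:
            team = rng.randrange(2)
        # Highest-ranked document of that ranker not yet placed.
        doc = next(d for d in rankings[team] if d not in seen)
        interleaved.append(doc)
        teams.append(team)
        seen.add(doc)
        count[team] += 1
    return interleaved, teams

def credit(clicked_positions, teams):
    """Credit each click to the contributing ranker; a positive outcome
    favours ranker 0, a negative one favours ranker 1."""
    wins = [0, 0]
    for pos in clicked_positions:
        wins[teams[pos]] += 1
    return wins[0] - wins[1]
```

With historical click data, the same credit assignment is applied to logged clicks instead of live ones, which is the distinction this class explores.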

class lerot.experiment.SingleQueryComparisonExperiment(query_dir, feature_count, log_fh, args)

Represents an experiment in which rankers are compared using interleaved comparisons on a single query.

run()

Run the experiment for num_queries queries.

class lerot.experiment.SyntheticComparisonExperiment(log_fh, args)

Represents an experiment in which synthetic rankers are compared to investigate theoretical properties and guarantees.

run()

Run the experiment for num_queries queries.

class lerot.experiment.VASyntheticComparisonExperiment(log_fh, args)

Represents an experiment in which synthetic rankers are compared to investigate theoretical properties and guarantees.

static block_counts(l)
static block_position1(l, result_length)
static block_sizes(l)
static generate_ranking_pair(result_length, num_relevant, pos_method='beyondten', vert_rel='non-relevant', block_size=3, verticals=None, fixed=False, dominates=<function <lambda>>)

Generate a pair of synthetic rankings, following Appendix A of https://bitbucket.org/varepsilon/tois2013-interleaving.
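
For intuition only, here is a much-simplified pair construction. It ignores the vertical blocks, position methods, and dominance predicate of the real signature above; the function name and document labels are hypothetical:

```python
def toy_ranking_pair(result_length, num_relevant):
    """Toy illustration of a synthetic ranking pair with a known winner:
    ranking_a places all relevant documents at the top, ranking_b pushes
    them to the bottom, so ranking_a dominates ranking_b by construction."""
    relevant = ["R%d" % i for i in range(num_relevant)]
    nonrelevant = ["N%d" % i for i in range(result_length - num_relevant)]
    ranking_a = relevant + nonrelevant
    ranking_b = nonrelevant + relevant
    return ranking_a, ranking_b
```

Because the true preference between the two rankings is fixed by construction, an interleaving method can be checked against it, which is the point of the synthetic comparison experiments.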

static get_online_metrics(clicks, ranking)
init_rankers(query)

Initialize the rankers for a query.

Since a ranker may be stateful, it must be re-initialized every time its documents are accessed.
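
A hypothetical stateful ranker illustrates why per-query re-initialization is necessary; the class and method names below are illustrative stand-ins, not lerot's ranker interface:

```python
class StatefulRankerSketch:
    """Hypothetical stand-in for a stateful ranker: iterating over its
    documents consumes an internal cursor, so the state must be reset
    before every new query."""

    def init_ranking(self, documents):
        # Reset the internal state for a new query.
        self._remaining = list(documents)

    def next_document(self):
        # Consumes state: each call removes the document it returns.
        return self._remaining.pop(0)
```

Calling next_document() without a fresh init_ranking() would continue from the previous query's leftover state; initializing the rankers on every access guards against exactly that.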

run()