ir_datasets: Cranfield
A small corpus of 1,400 scientific abstracts.
queries
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("cranfield")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, text>
You can find more details about the Python API here.
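Each query record is a namedtuple with the two fields listed above. A minimal, self-contained sketch of that record shape using the standard library (an illustrative stand-in, not the class ir_datasets actually generates, and the field values here are placeholders):

```python
from collections import namedtuple

# Illustrative stand-in for the query record shape shown above;
# the real class is generated inside ir_datasets.
CranfieldQuery = namedtuple("CranfieldQuery", ["query_id", "text"])

query = CranfieldQuery(query_id="1", text="example query text")
print(query.query_id)  # fields are accessible by name
print(query.text)
```

Because the records are namedtuples, they also unpack positionally, e.g. `query_id, text = query`.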
ir_datasets export cranfield queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:cranfield')
index_ref = pt.IndexRef.of('./indices/cranfield') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
docs
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("cranfield")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, author, bib>
You can find more details about the Python API here.
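Each doc record carries the five fields listed above. A hedged sketch of building an in-memory doc_id-to-text lookup from such records, as one might before display or re-ranking (using stand-in namedtuples with placeholder values rather than a live dataset):

```python
from collections import namedtuple

# Stand-in for the doc record shape; real records come from dataset.docs_iter().
CranfieldDoc = namedtuple("CranfieldDoc", ["doc_id", "title", "text", "author", "bib"])

docs = [
    CranfieldDoc("1", "example title", "example body", "example author", "example bib"),
    CranfieldDoc("2", "another title", "another body", "another author", "another bib"),
]

# Build a doc_id -> full-text lookup from the streamed records.
lookup = {doc.doc_id: doc.text for doc in docs}
print(lookup["1"])
```

With the real corpus this dict fits comfortably in memory, since the collection is only 1,400 abstracts.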
ir_datasets export cranfield docs
[doc_id] [title] [text] [author] [bib]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:cranfield')
# Index cranfield
indexer = pt.IterDictIndexer('./indices/cranfield')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'text', 'author', 'bib'])
You can find more details about PyTerrier indexing here.
qrels
Relevance levels
Rel. | Definition | Count | %
---|---|---|---
-1 | References of no interest. | 225 | 12.2% |
1 | References of minimum interest, for example, those that have been included from an historical viewpoint. | 128 | 7.0% |
2 | References which were useful, either as general background to the work or as suggesting methods of tackling certain aspects of the work. | 387 | 21.1% |
3 | References of a high degree of relevance, the lack of which either would have made the research impracticable or would have resulted in a considerable amount of extra work. | 734 | 40.0% |
4 | References which are a complete answer to the question. | 363 | 19.8% |
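Note the unusual -1 grade. Binary measures such as MAP need relevant/non-relevant judgments, so graded qrels are typically collapsed at some cutoff; a minimal sketch of that mapping on made-up records (the cutoff of grade >= 1 is an assumption for illustration, not a rule stated on this page):

```python
# Toy qrels as (query_id, doc_id, relevance) tuples; the grade values follow
# the scheme in the table above, but the records themselves are invented.
qrels = [("1", "a", -1), ("1", "b", 1), ("1", "c", 3), ("1", "d", 4)]

REL_CUTOFF = 1  # assumption: grades >= 1 count as relevant for binary measures

# Collapse graded judgments to binary relevance per (query, doc) pair.
binary = {(q, d): int(rel >= REL_CUTOFF) for q, d, rel in qrels}
print(binary)
```

Under this mapping the -1 grade becomes non-relevant while grades 1 through 4 all count as relevant; check your evaluation tool's conventions before relying on a particular cutoff.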
Examples:
import ir_datasets
dataset = ir_datasets.load("cranfield")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export cranfield qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:cranfield')
index_ref = pt.IndexRef.of('./indices/cranfield') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
[pipeline],
dataset.get_topics(),
dataset.get_qrels(),
[MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
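The nDCG@20 measure in the experiment above discounts graded gains by rank position. A self-contained sketch of the usual log2-discounted formulation on toy data (a common definition; the exact implementation behind pt.Experiment may differ in details such as gain mapping):

```python
import math

def dcg(gains, k):
    """Discounted cumulative gain over the first k ranked gains."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg(ranked_gains, k):
    """DCG normalised by the ideal (descending-gain) ordering."""
    ideal_dcg = dcg(sorted(ranked_gains, reverse=True), k)
    return dcg(ranked_gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy ranking: graded gains of retrieved documents, in retrieved order.
print(ndcg([3, 0, 2, 1], k=4))
```

A perfectly ordered ranking scores 1.0; pushing a high-gain document down the ranking lowers the score because of the log2 discount.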
Metadata:
{
  "docs": {
    "count": 1400,
    "fields": {
      "doc_id": { "max_len": 4, "common_prefix": "" }
    }
  },
  "queries": { "count": 225 },
  "qrels": {
    "count": 1837,
    "fields": {
      "relevance": {
        "counts_by_value": { "2": 387, "3": 734, "4": 363, "-1": 225, "1": 128 }
      }
    }
  }
}
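This metadata is machine-readable JSON, so sanity checks are easy to script; for example, the per-grade relevance counts should add up to the total qrel count:

```python
import json

# The dataset metadata exactly as published above.
metadata = json.loads(
    '{ "docs": { "count": 1400, "fields": { "doc_id": '
    '{ "max_len": 4, "common_prefix": "" } } }, '
    '"queries": { "count": 225 }, "qrels": { "count": 1837, '
    '"fields": { "relevance": { "counts_by_value": '
    '{ "2": 387, "3": 734, "4": 363, "-1": 225, "1": 128 } } } } }'
)

counts = metadata["qrels"]["fields"]["relevance"]["counts_by_value"]
# The per-grade counts sum to the total number of judgments.
print(sum(counts.values()), metadata["qrels"]["count"])
```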