ir_datasets: AQUAINT
To use this dataset, you need a copy of the source corpus, provided by the Linguistic Data Consortium. The specific resource needed is LDC2002T31.
Many organizations already have a subscription to the LDC, so access to the collection can be as easy as confirming the data usage agreement and downloading the corpus. Check with your library for access details.
The source file is: aquaint_comp_LDC2002T31.tgz.
ir_datasets expects this file to be copied/linked in ~/.ir_datasets/aquaint/.
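For example, here is a minimal sketch (in Python) that links a local copy of the file into place; the source path below is a placeholder for wherever your copy lives:
import os
from pathlib import Path

source = Path("/path/to/aquaint_comp_LDC2002T31.tgz")  # placeholder: your local copy
target_dir = Path.home() / ".ir_datasets" / "aquaint"
target_dir.mkdir(parents=True, exist_ok=True)
os.symlink(source, target_dir / source.name)  # copying (e.g., shutil.copy) also works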
A document collection of about 1 million English newswire documents. Sources are the Xinhua News Service (People's Republic of China), the New York Times News Service, and the Associated Press Worldstream News Service.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("aquaint")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, marked_up_doc>
You can find more details about the Python API here.
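If you need random access to individual documents rather than a full scan, ir_datasets also exposes a lookup store; a minimal sketch (the doc_id below is a placeholder, use one observed from docs_iter()):
import ir_datasets
dataset = ir_datasets.load("aquaint")
docs_store = dataset.docs_store()
doc = docs_store.get("XIE19960101.0001")  # placeholder doc_id
doc.text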
ir_datasets export aquaint docs
[doc_id] [text] [marked_up_doc]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:aquaint')
# Index aquaint
indexer = pt.IterDictIndexer('./indices/aquaint')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'marked_up_doc'])
You can find more details about PyTerrier indexing here.
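Once indexing completes, one way to sanity-check the result is to load the index and print its collection statistics; a short sketch, assuming the index was built as above:
index = pt.IndexFactory.of(index_ref)
print(index.getCollectionStatistics().toString())  # document/term/posting counts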
Bibtex:
@misc{Graff2002Aquaint,
  title={The AQUAINT Corpus of English News Text},
  author={David Graff},
  year={2002},
  url={https://catalog.ldc.upenn.edu/LDC2002T31},
  publisher={Linguistic Data Consortium}
}
aquaint/trec-robust-2005
The TREC Robust 2005 dataset. Contains a subset of 50 "hard" queries from trec-robust04.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("aquaint/trec-robust-2005")
for query in dataset.queries_iter():
    query # namedtuple<query_id, title, description, narrative>
You can find more details about the Python API here.
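Since each query is a namedtuple, its fields are directly accessible; for instance, a small sketch that maps each topic to its short title form (descriptions and narratives are available the same way):
import ir_datasets
dataset = ir_datasets.load("aquaint/trec-robust-2005")
titles = {query.query_id: query.title for query in dataset.queries_iter()}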
ir_datasets export aquaint/trec-robust-2005 queries
[query_id] [title] [description] [narrative]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:aquaint/trec-robust-2005')
index_ref = pt.IndexRef.of('./indices/aquaint') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('title'))
You can find more details about PyTerrier retrieval here.
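If you want to keep the run for later evaluation (e.g., with trec_eval), PyTerrier can write results in TREC format; a sketch continuing from the pipeline above (the output filename is arbitrary):
res = pipeline(dataset.get_topics('title'))
pt.io.write_results(res, 'aquaint-robust05-bm25.res')  # TREC-format run file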
Inherits docs from aquaint
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("aquaint/trec-robust-2005")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, marked_up_doc>
You can find more details about the Python API here.
ir_datasets export aquaint/trec-robust-2005 docs
[doc_id] [text] [marked_up_doc]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:aquaint/trec-robust-2005')
# Index aquaint
indexer = pt.IterDictIndexer('./indices/aquaint')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'marked_up_doc'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition | Count | %
--- | --- | --- | ---
0 | not relevant | 31K | 82.6%
1 | relevant | 3.8K | 10.0%
2 | highly relevant | 2.8K | 7.4%
Examples:
import ir_datasets
dataset = ir_datasets.load("aquaint/trec-robust-2005")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
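As a quick cross-check against the relevance table above, you can tally the labels yourself; a minimal sketch:
import ir_datasets
from collections import Counter
dataset = ir_datasets.load("aquaint/trec-robust-2005")
counts = Counter(qrel.relevance for qrel in dataset.qrels_iter())
counts  # expected: {0: 31237, 1: 3771, 2: 2790}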
ir_datasets export aquaint/trec-robust-2005 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:aquaint/trec-robust-2005')
index_ref = pt.IndexRef.of('./indices/aquaint') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('title'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
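For a per-topic breakdown rather than aggregate scores, pt.Experiment also accepts a perquery flag; a sketch extending the experiment above:
pt.Experiment(
    [pipeline],
    dataset.get_topics('title'),
    dataset.get_qrels(),
    [MAP, nDCG@20],
    perquery=True  # one row per (query, measure) instead of means
)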
Bibtex:
@inproceedings{Voorhees2005Robust,
  title={Overview of the TREC 2005 Robust Retrieval Track},
  author={Ellen M. Voorhees},
  booktitle={TREC},
  year={2005}
}
@misc{Graff2002Aquaint,
  title={The AQUAINT Corpus of English News Text},
  author={David Graff},
  year={2002},
  url={https://catalog.ldc.upenn.edu/LDC2002T31},
  publisher={Linguistic Data Consortium}
}