Github: datasets/aquaint.py

ir_datasets: AQUAINT

Index
  1. aquaint
  2. aquaint/trec-robust-2005

Data Access Information

To use this dataset, you need a copy of the source corpus, provided by the Linguistic Data Consortium. The specific resource needed is LDC2002T31.

Many organizations already have a subscription to the LDC, so access to the collection can be as easy as confirming the data usage agreement and downloading the corpus. Check with your library for access details.

The source file is: aquaint_comp_LDC2002T31.tgz.

ir_datasets expects this file to be copied/linked in ~/.ir_datasets/aquaint/.
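If you are unsure how to put the file in place, the snippet below is a minimal sketch of linking it there. The download location (~/Downloads) is only an assumption; point src at wherever you saved the LDC archive.

from pathlib import Path

src = Path.home() / "Downloads" / "aquaint_comp_LDC2002T31.tgz"  # assumed download location
dst_dir = Path.home() / ".ir_datasets" / "aquaint"
dst_dir.mkdir(parents=True, exist_ok=True)

dst = dst_dir / src.name
if not dst.exists():
    dst.symlink_to(src)  # a symlink avoids duplicating the archive; copying the file also works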


"aquaint"

A collection of about 1 million English newswire documents. Sources are the Xinhua News Service (People's Republic of China), the New York Times News Service, and the Associated Press Worldstream News Service.

docs
1.0M docs

Language: en

Document type:
TrecDoc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. marked_up_doc: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("aquaint")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, marked_up_doc>

You can find more details about the Python API here.
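For random access to individual documents by ID (rather than a full scan with docs_iter), ir_datasets also provides a docs store. A minimal sketch; the document ID below is only an illustrative placeholder:

import ir_datasets
dataset = ir_datasets.load("aquaint")
docs_store = dataset.docs_store()
doc = docs_store.get("XIE19960101.0001")  # placeholder doc_id; substitute a real AQUAINT ID
doc.text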

CLI
ir_datasets export aquaint docs
[doc_id]    [text]    [marked_up_doc]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:aquaint')
# Index aquaint
indexer = pt.IterDictIndexer('./indices/aquaint')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'marked_up_doc'])

You can find more details about PyTerrier indexing here.
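Because marked_up_doc repeats the document content with its original SGML markup, you may prefer to index only the extracted text field. A variant of the example above under that assumption:

import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:aquaint')
# Index only the extracted text; marked_up_doc duplicates it with markup
indexer = pt.IterDictIndexer('./indices/aquaint')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])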

Citation

ir_datasets.bib:

\cite{Graff2002Aquaint}

Bibtex:

@misc{Graff2002Aquaint,
  title={The AQUAINT Corpus of English News Text},
  author={David Graff},
  year={2002},
  url={https://catalog.ldc.upenn.edu/LDC2002T31},
  publisher={Linguistic Data Consortium}
}

"aquaint/trec-robust-2005"

The TREC Robust 2005 dataset. Contains a subset of 50 "hard" topics from the TREC Robust 2004 track (trec-robust04), run over the AQUAINT collection.

queries
50 queries

Language: en

Query type:
TrecQuery: (namedtuple)
  1. query_id: str
  2. title: str
  3. description: str
  4. narrative: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("aquaint/trec-robust-2005")
for query in dataset.queries_iter():
    query # namedtuple<query_id, title, description, narrative>

You can find more details about the Python API here.
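If you need the topics keyed by ID rather than as a stream, a plain-Python sketch:

import ir_datasets
dataset = ir_datasets.load("aquaint/trec-robust-2005")
# Map each query_id to its title text for quick lookup
titles = {query.query_id: query.title for query in dataset.queries_iter()}
len(titles)  # 50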

CLI
ir_datasets export aquaint/trec-robust-2005 queries
[query_id]    [title]    [description]    [narrative]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:aquaint/trec-robust-2005')
index_ref = pt.IndexRef.of('./indices/aquaint') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('title'))

You can find more details about PyTerrier retrieval here.
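The topics also carry description and narrative fields (see the TrecQuery type above). Assuming PyTerrier accepts the field name as the topics variant, retrieving with the description text instead of the title would look like:

import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:aquaint/trec-robust-2005')
index_ref = pt.IndexRef.of('./indices/aquaint')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
pipeline(dataset.get_topics('description'))  # 'description' as the variant name is an assumption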

docs
1.0M docs

Inherits docs from aquaint

Language: en

Document type:
TrecDoc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. marked_up_doc: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("aquaint/trec-robust-2005")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, marked_up_doc>

You can find more details about the Python API here.

CLI
ir_datasets export aquaint/trec-robust-2005 docs
[doc_id]    [text]    [marked_up_doc]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:aquaint/trec-robust-2005')
# Index aquaint
indexer = pt.IterDictIndexer('./indices/aquaint')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'marked_up_doc'])

You can find more details about PyTerrier indexing here.

qrels
38K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition        Count  %
0     not relevant      31K    82.6%
1     relevant          3.8K   10.0%
2     highly relevant   2.8K   7.4%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("aquaint/trec-robust-2005")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.
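To reproduce the relevance-level counts in the table above, or to filter judgments by level, a short sketch over the same iterator:

import ir_datasets
from collections import Counter
dataset = ir_datasets.load("aquaint/trec-robust-2005")
# Tally the number of judgments at each relevance level
counts = Counter(qrel.relevance for qrel in dataset.qrels_iter())
counts  # expected to be roughly {0: 31K, 1: 3.8K, 2: 2.8K}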

CLI
ir_datasets export aquaint/trec-robust-2005 qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:aquaint/trec-robust-2005')
index_ref = pt.IndexRef.of('./indices/aquaint') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('title'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

Citation

ir_datasets.bib:

\cite{Voorhees2005Robust,Graff2002Aquaint}

Bibtex:

@inproceedings{Voorhees2005Robust,
  title={Overview of the TREC 2005 Robust Retrieval Track},
  author={Ellen M. Voorhees},
  booktitle={TREC},
  year={2005}
}

@misc{Graff2002Aquaint,
  title={The AQUAINT Corpus of English News Text},
  author={David Graff},
  year={2002},
  url={https://catalog.ldc.upenn.edu/LDC2002T31},
  publisher={Linguistic Data Consortium}
}