ir_datasets: TREC CAR
An ad-hoc passage retrieval collection, constructed from Wikipedia and used as the basis of the TREC Complex Answer Retrieval (CAR) task.

car/v1.5
Version 1.5 of the TREC CAR dataset. This version is used for year 1 (2017) of the TREC CAR shared task.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
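For random access by paragraph ID rather than a full iteration, ir_datasets also provides a document store. A minimal sketch; the paragraph ID below is a placeholder, not a real identifier from the collection:
import ir_datasets
dataset = ir_datasets.load("car/v1.5")
docs_store = dataset.docs_store()  # builds/uses a local lookup structure on first use
doc = docs_store.get("PLACEHOLDER_PARAGRAPH_ID")  # returns the same namedtuple<doc_id, text>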
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Bibtex:
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }

car/v1.5/test200
Unofficial test set consisting of manually-selected articles. Sometimes used as a validation set.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/test200")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, title, headings>
You can find more details about the Python API here.
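Beyond the pre-flattened text field, the title and headings fields can be combined manually, e.g. to weight the page title separately from the section heading path. A minimal sketch; it does not assume whether headings is a single string or a sequence:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/test200")
for query in dataset.queries_iter():
    headings = query.headings if isinstance(query.headings, str) else " ".join(query.headings)
    flat = f"{query.title} {headings}"  # roughly equivalent to query.text
    break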
ir_datasets export car/v1.5/test200 queries
[query_id] [text] [title] [headings]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/test200')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from car/v1.5
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/test200")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export car/v1.5/test200 docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/test200')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | Paragraph appears under heading |
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/test200")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
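Since every judgment in this set is derived from article structure, a quick sanity check is to count the judged paragraphs per query. A minimal sketch:
import ir_datasets
from collections import defaultdict
dataset = ir_datasets.load("car/v1.5/test200")
judged = defaultdict(int)
for qrel in dataset.qrels_iter():
    if qrel.relevance > 0:
        judged[qrel.query_id] += 1
print(len(judged), "queries;", sum(judged.values()), "relevant paragraphs")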
ir_datasets export car/v1.5/test200 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/test200')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
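To inspect results query by query rather than as aggregates, pt.Experiment accepts a perquery flag. A sketch, assuming a recent PyTerrier release:
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/test200')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# perquery=True returns one row per (query, measure) instead of the mean over queries
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20],
    perquery=True
)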
Bibtex:
@inproceedings{Nanni2017BenchmarkCar, title={Benchmark for complex answer retrieval}, author={Nanni, Federico and Mitra, Bhaskar and Magnusson, Matt and Dietz, Laura}, booktitle={ICTIR}, year={2017} }
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }

car/v1.5/train/fold0
Fold 0 of the official large training set for TREC CAR 2017. Relevance is assumed from the hierarchical structure of pages (i.e., paragraphs under a heading are assumed relevant).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold0")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, title, headings>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold0 queries
[query_id] [text] [title] [headings]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold0')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from car/v1.5
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold0")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold0 docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold0')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | Paragraph appears under heading |
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold0")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
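Because relevance is derived automatically from page structure, this fold is often mined for training pairs for learned rankers. A minimal sketch; the pairing scheme and the 1,000-pair cap are illustrative, not part of the task definition:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold0")
docs_store = dataset.docs_store()
queries = {q.query_id: q.text for q in dataset.queries_iter()}
pairs = []  # (query text, relevant paragraph text) positives
for qrel in dataset.qrels_iter():
    if qrel.relevance > 0 and qrel.query_id in queries:
        doc = docs_store.get(qrel.doc_id)
        pairs.append((queries[qrel.query_id], doc.text))
    if len(pairs) >= 1000:  # keep the sketch small; the full fold is much larger
        break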
ir_datasets export car/v1.5/train/fold0 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold0')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} }
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }

car/v1.5/train/fold1
Fold 1 of the official large training set for TREC CAR 2017. Relevance is assumed from the hierarchical structure of pages (i.e., paragraphs under a heading are assumed relevant).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold1")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, title, headings>
You can find more details about the Python API here.
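The five folds are frequently combined, e.g. training on four and holding one out for validation. A minimal sketch of such a (non-official) split:
import itertools
import ir_datasets
train_folds = [ir_datasets.load(f"car/v1.5/train/fold{i}") for i in range(1, 5)]
train_queries = itertools.chain.from_iterable(d.queries_iter() for d in train_folds)
train_qrels = itertools.chain.from_iterable(d.qrels_iter() for d in train_folds)
valid = ir_datasets.load("car/v1.5/train/fold0")  # held-out fold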
ir_datasets export car/v1.5/train/fold1 queries
[query_id] [text] [title] [headings]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold1')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from car/v1.5
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold1")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold1 docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold1')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | Paragraph appears under heading |
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold1")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold1 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold1')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} }
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }

car/v1.5/train/fold2
Fold 2 of the official large training set for TREC CAR 2017. Relevance is assumed from the hierarchical structure of pages (i.e., paragraphs under a heading are assumed relevant).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold2")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, title, headings>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold2 queries
[query_id] [text] [title] [headings]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold2')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from car/v1.5
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold2")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold2 docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold2')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | Paragraph appears under heading |
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold2")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold2 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold2')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} }
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }

car/v1.5/train/fold3
Fold 3 of the official large training set for TREC CAR 2017. Relevance is assumed from the hierarchical structure of pages (i.e., paragraphs under a heading are assumed relevant).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold3")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, title, headings>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold3 queries
[query_id] [text] [title] [headings]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold3')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from car/v1.5
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold3")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold3 docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold3')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | Paragraph appears under heading |
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold3")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold3 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold3')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} }
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }

car/v1.5/train/fold4
Fold 4 of the official large training set for TREC CAR 2017. Relevance is assumed from the hierarchical structure of pages (i.e., paragraphs under a heading are assumed relevant).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold4")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, title, headings>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold4 queries
[query_id] [text] [title] [headings]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold4')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from car/v1.5
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold4")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold4 docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold4')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | Paragraph appears under heading |
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/train/fold4")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export car/v1.5/train/fold4 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/train/fold4')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} }
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }

car/v1.5/trec-y1
Official test set of TREC CAR 2017 (year 1).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/trec-y1")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, title, headings>
You can find more details about the Python API here.
ir_datasets export car/v1.5/trec-y1 queries
[query_id] [text] [title] [headings]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/trec-y1')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
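For the official test topics it is common to persist the run in TREC format, e.g. for trec_eval or a submission. A sketch assuming PyTerrier's pt.io.write_results helper is available; the output path is hypothetical:
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/trec-y1')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
run = pipeline(dataset.get_topics('text'))
pt.io.write_results(run, './runs/bm25_trec-y1.res')  # hypothetical path; written in TREC run format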
Language: en
Note: Uses docs from car/v1.5
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/trec-y1")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export car/v1.5/trec-y1 docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/trec-y1')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Bibtex:
@inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} }
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }

car/v1.5/trec-y1/auto
Official test set of TREC CAR 2017 (year 1), using automatic relevance judgments assumed from the hierarchical structure of pages (i.e., paragraphs under a heading are assumed relevant).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/trec-y1/auto")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, title, headings>
You can find more details about the Python API here.
ir_datasets export car/v1.5/trec-y1/auto queries
[query_id] [text] [title] [headings]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/trec-y1/auto')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from car/v1.5
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/trec-y1/auto")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export car/v1.5/trec-y1/auto docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/trec-y1/auto')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | Paragraph appears under heading |
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/trec-y1/auto")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
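An existing TREC-format run file can also be scored directly against these automatic judgments with the ir_measures package. A sketch; the run path is hypothetical:
import ir_datasets
import ir_measures
from ir_measures import AP, nDCG
dataset = ir_datasets.load("car/v1.5/trec-y1/auto")
run = ir_measures.read_trec_run('./runs/bm25_trec-y1.res')  # hypothetical path
print(ir_measures.calc_aggregate([AP, nDCG@20], dataset.qrels_iter(), run))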
ir_datasets export car/v1.5/trec-y1/auto qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/trec-y1/auto')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} }
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }

car/v1.5/trec-y1/manual
Official test set of TREC CAR 2017 (year 1), using manually graded relevance judgments.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/trec-y1/manual")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, title, headings>
You can find more details about the Python API here.
ir_datasets export car/v1.5/trec-y1/manual queries
[query_id] [text] [title] [headings]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/trec-y1/manual')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from car/v1.5
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/trec-y1/manual")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export car/v1.5/trec-y1/manual docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/trec-y1/manual')
# Index car/v1.5
indexer = pt.IterDictIndexer('./indices/car_v1.5')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
-2 | Trash |
-1 | NO, non-relevant |
0 | Non-relevant, but roughly on TOPIC |
1 | CAN be mentioned |
2 | SHOULD be mentioned |
3 | MUST be mentioned |
Examples:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/trec-y1/manual")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
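Binary measures need a cutoff over these graded labels; one common (but not official) convention is to count "CAN/SHOULD/MUST be mentioned" (relevance >= 1) as relevant. A minimal sketch:
import ir_datasets
dataset = ir_datasets.load("car/v1.5/trec-y1/manual")
binary = {}  # query_id -> {doc_id: 0/1}
for qrel in dataset.qrels_iter():
    binary.setdefault(qrel.query_id, {})[qrel.doc_id] = int(qrel.relevance >= 1)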
ir_datasets export car/v1.5/trec-y1/manual qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:car/v1.5/trec-y1/manual')
index_ref = pt.IndexRef.of('./indices/car_v1.5') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} }
@article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} }