ir_datasets: MSMARCO (document, version 2)

Version 2 of the MS MARCO document ranking dataset. The corpus contains 12M documents (roughly 3x as many as version 1).

docs
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
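Iterating scans the whole corpus; for random access by doc_id, ir_datasets also provides a document store. A minimal sketch (the doc_id shown is illustrative only):

import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2")
docstore = dataset.docs_store()
# Fast lookup of a single document by its ID
doc = docstore.get("msmarco_doc_00_0")  # hypothetical example ID
print(doc.title)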
ir_datasets export msmarco-document-v2 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
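One practical caveat: v2 doc_ids are longer than PyTerrier's default 20-character docno metadata width, so they may be truncated at indexing time. A hedged sketch that widens the field (assuming 32 characters is sufficient):

import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2')
# Widen the docno meta field so long v2 doc_ids are stored untruncated
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2', meta={'docno': 32})
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])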
Bibtex:
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

msmarco-document-v2/dev1

Official dev1 set with 4,552 queries.

queries
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, text>
You can find more details about the Python API here.
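Since each query is a namedtuple, the iterator loads cleanly into a pandas DataFrame for inspection; a minimal sketch:

import pandas as pd
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
# pandas picks up the namedtuple fields (query_id, text) as column names
queries = pd.DataFrame(list(dataset.queries_iter()))
print(len(queries))  # 4,552 rows for dev1
print(queries.head())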
ir_datasets export msmarco-document-v2/dev1 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev1')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
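Raw MS MARCO query text can contain punctuation that Terrier's query-language parser rejects; one common workaround (a sketch, not the only option) is to tokenise the queries before retrieval:

import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev1')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2')  # assumes you have already built an index
# pt.rewrite.tokenise() strips characters the query parser would choke on
pipeline = pt.rewrite.tokenise() >> pt.BatchRetrieve(index_ref, wmodel='BM25')
pipeline(dataset.get_topics())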
docs
Language: en
Note: Uses docs from msmarco-document-v2
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev1 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev1')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
qrels
Relevance levels:
| Rel. | Definition |
|------|------------|
| 1 | Labeled by crowd worker as relevant |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev1 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
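The qrels can also be passed straight to ir_measures (the evaluation library ir_datasets integrates with) to score an existing run; a sketch assuming run.txt is a TREC-format run file you already have:

import ir_datasets
import ir_measures
from ir_measures import RR, nDCG
dataset = ir_datasets.load("msmarco-document-v2/dev1")
run = ir_measures.read_trec_run('run.txt')  # hypothetical run file
print(ir_measures.calc_aggregate([RR@10, nDCG@10], dataset.qrels_iter(), run))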
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev1')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
scoreddocs
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc  # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
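Each scoreddoc is one entry of the provided first-stage ranking, which makes this iterator a natural starting point for reranking; a minimal sketch that groups the candidates into a run-style dict keyed by query:

import ir_datasets
from collections import defaultdict
dataset = ir_datasets.load("msmarco-document-v2/dev1")
run = defaultdict(dict)
# Group candidate documents by query for downstream reranking
for sd in dataset.scoreddocs_iter():
    run[sd.query_id][sd.doc_id] = sd.score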
ir_datasets export msmarco-document-v2/dev1 scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier
Bibtex:
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

msmarco-document-v2/dev2

Official dev2 set with 5,000 queries.

queries
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev2")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev2 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev2')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
docs
Language: en
Note: Uses docs from msmarco-document-v2
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev2")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev2 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev2')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
qrels
Relevance levels:
| Rel. | Definition |
|------|------------|
| 1 | Labeled by crowd worker as relevant |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev2")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev2 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev2')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
scoreddocs
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev2")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc  # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev2 scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier
Bibtex:
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

msmarco-document-v2/train

Official train set with 322,196 queries.

queries
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/train")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/train queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/train')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
docs
Language: en
Note: Uses docs from msmarco-document-v2
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/train")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/train docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/train')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
qrels
Relevance levels:
| Rel. | Definition |
|------|------------|
| 1 | Labeled by crowd worker as relevant |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/train")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
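For training a ranker, these qrels are typically joined with query text and document text to form positive pairs; a minimal sketch using the docs_store for document lookup (truncated to a few pairs for brevity):

import itertools
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/train")
queries = {q.query_id: q.text for q in dataset.queries_iter()}
docstore = dataset.docs_store()
# Build (query text, positive document body) training pairs from the qrels
for qrel in itertools.islice(dataset.qrels_iter(), 5):
    doc = docstore.get(qrel.doc_id)
    pair = (queries[qrel.query_id], doc.body)
    print(pair[0])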
ir_datasets export msmarco-document-v2/train qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/train')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
scoreddocs
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/train")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc  # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/train scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier
Bibtex:
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

msmarco-document-v2/trec-dl-2019

Queries from the TREC Deep Learning (DL) 2019 shared task, which were sampled from msmarco-document/eval. A subset of these queries was judged by NIST assessors (a filtered list is available in msmarco-document-v2/trec-dl-2019/judged).

queries
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
docs
Language: en
Note: Uses docs from msmarco-document-v2
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
qrels
Relevance levels:
| Rel. | Definition |
|------|------------|
| 0 | Irrelevant: Document does not provide any useful information about the query. |
| 1 | Relevant: Document provides some information relevant to the query, which may be minimal. |
| 2 | Highly relevant: The content of this document provides substantial information on the query. |
| 3 | Perfectly relevant: Document is dedicated to the query; it is worthy of being a top result in a search engine. |
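When computing binary measures (e.g., MAP or MRR) over these graded judgments, the TREC DL track convention is to count relevance >= 2 as relevant; a minimal sketch that binarizes the qrels accordingly:

import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
binary_qrels = {}
# DL-track convention: levels 2 and 3 count as relevant for binary measures
for qrel in dataset.qrels_iter():
    binary_qrels.setdefault(qrel.query_id, {})[qrel.doc_id] = int(qrel.relevance >= 2)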
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Craswell2019TrecDl,
  title={Overview of the TREC 2019 deep learning track},
  author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees},
  booktitle={TREC 2019},
  year={2019}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

msmarco-document-v2/trec-dl-2019/judged

Subset of msmarco-document-v2/trec-dl-2019, only including queries with qrels.

queries
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019/judged")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019/judged queries
[query_id] [text]
...
You can find more details about the CLI here.
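To see how much the judged filter trims, the full and judged query sets can be compared by count; a minimal sketch (assuming the queries_count() helper):

import ir_datasets
full = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
judged = ir_datasets.load("msmarco-document-v2/trec-dl-2019/judged")
# The judged subset keeps only queries that have at least one qrel
print(full.queries_count(), judged.queries_count())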
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019/judged')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
docs
Language: en
Note: Uses docs from msmarco-document-v2
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019/judged")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019/judged docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019/judged')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
qrels
Relevance levels:
| Rel. | Definition |
|------|------------|
| 0 | Irrelevant: Document does not provide any useful information about the query. |
| 1 | Relevant: Document provides some information relevant to the query, which may be minimal. |
| 2 | Highly relevant: The content of this document provides substantial information on the query. |
| 3 | Perfectly relevant: Document is dedicated to the query; it is worthy of being a top result in a search engine. |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019/judged")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019/judged qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019/judged')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Craswell2019TrecDl,
  title={Overview of the TREC 2019 deep learning track},
  author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees},
  booktitle={TREC 2019},
  year={2019}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

msmarco-document-v2/trec-dl-2020

Queries from the TREC Deep Learning (DL) 2020 shared task, which were sampled from msmarco-document/eval. A subset of these queries was judged by NIST assessors (a filtered list is available in msmarco-document-v2/trec-dl-2020/judged).

queries
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
docs
Language: en
Note: Uses docs from msmarco-document-v2
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
qrels
Relevance levels:
| Rel. | Definition |
|------|------------|
| 0 | Irrelevant: Document does not provide any useful information about the query. |
| 1 | Relevant: Document provides some information relevant to the query, which may be minimal. |
| 2 | Highly relevant: The content of this document provides substantial information on the query. |
| 3 | Perfectly relevant: Document is dedicated to the query; it is worthy of being a top result in a search engine. |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
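With graded judgments, the DL-track reporting convention is nDCG@10 plus binary measures evaluated at relevance >= 2; a hedged variant of the experiment above using ir_measures-style parameterized measures:

import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    # Binary measures count relevance >= 2 as relevant, per the DL track
    [nDCG@10, RR(rel=2), AP(rel=2)]
)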
Bibtex:
@inproceedings{Craswell2020TrecDl,
  title={Overview of the TREC 2020 deep learning track},
  author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos},
  booktitle={TREC},
  year={2020}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

msmarco-document-v2/trec-dl-2020/judged

Subset of msmarco-document-v2/trec-dl-2020, only including queries with qrels.

queries
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020/judged")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020/judged queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020/judged')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
docs
Language: en
Note: Uses docs from msmarco-document-v2
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020/judged")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020/judged docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020/judged')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
qrels
Relevance levels:
| Rel. | Definition |
|------|------------|
| 0 | Irrelevant: Document does not provide any useful information about the query. |
| 1 | Relevant: Document provides some information relevant to the query, which may be minimal. |
| 2 | Highly relevant: The content of this document provides substantial information on the query. |
| 3 | Perfectly relevant: Document is dedicated to the query; it is worthy of being a top result in a search engine. |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020/judged")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020/judged qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020/judged')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Craswell2020TrecDl,
  title={Overview of the TREC 2020 deep learning track},
  author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos},
  booktitle={TREC},
  year={2020}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}