ir_datasets: MSMARCO (document, version 2)
Version 2 of the MS MARCO document ranking dataset. The corpus contains 12M documents (roughly 3x as many as version 1).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
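Beyond iteration, the corpus supports fast random access by doc_id via a docs store. A minimal sketch (the doc_id shown is hypothetical; real ids share the msmarco_doc_ prefix):
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2")
docs_store = dataset.docs_store()  # builds/uses a local lookup structure on first use
doc = docs_store.get("msmarco_doc_00_0")  # hypothetical doc_id
print(doc.title)
print(dataset.docs_count())  # 11,959,635 documents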
ir_datasets export msmarco-document-v2 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Bibtex:
@inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} }
Metadata:
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } } }
msmarco-document-v2/dev1
Official dev1 set with 4,552 queries.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev1 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev1')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
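If you want to keep the BM25 results for later evaluation or submission, a minimal sketch of writing them as a TREC-format run file with PyTerrier's I/O helpers (the output file name is just an example):
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev1')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
res = pipeline(dataset.get_topics())
pt.io.write_results(res, 'bm25.dev1.res', format='trec')  # standard TREC run format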
Inherits docs from msmarco-document-v2
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev1 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev1')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
1 | Document contains a passage labeled as relevant in msmarco-passage | 4.7K | 100.0% |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev1 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev1')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev1 scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier
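Although no official PyTerrier example is provided, the pre-computed scoreddocs can be loaded into a PyTerrier-style results DataFrame, e.g. as candidates for a re-ranking stage. A minimal sketch, assuming pandas and PyTerrier's qid/docno column conventions:
import pandas as pd
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev1")
run = pd.DataFrame(list(dataset.scoreddocs_iter()))  # columns: query_id, doc_id, score
run = run.rename(columns={'query_id': 'qid', 'doc_id': 'docno'})
# run can now be passed to PyTerrier components that expect a results frame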
Bibtex:
@inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} }
Metadata:
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } }, "queries": { "count": 4552 }, "qrels": { "count": 4702, "fields": { "relevance": { "counts_by_value": { "1": 4702 } } } }, "scoreddocs": { "count": 455200 } }
msmarco-document-v2/dev2
Official dev2 set with 5,000 queries.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev2")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev2 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev2')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Inherits docs from msmarco-document-v2
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev2")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev2 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev2')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
1 | Document contains a passage labeled as relevant in msmarco-passage | 5.2K | 100.0% |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev2")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev2 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/dev2')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/dev2")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/dev2 scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier
Bibtex:
@inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} }
Metadata:
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } }, "queries": { "count": 5000 }, "qrels": { "count": 5178, "fields": { "relevance": { "counts_by_value": { "1": 5178 } } } }, "scoreddocs": { "count": 500000 } }
msmarco-document-v2/train
Official train set with 322,196 queries.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/train queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/train')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
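With 322,196 training queries, a full retrieval run is expensive; it can help to smoke-test the pipeline on a small slice of the topics first. A minimal sketch, assuming get_topics() returns a pandas DataFrame (as PyTerrier does):
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/train')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
topics = dataset.get_topics()
res = pipeline(topics.head(100))  # try the first 100 queries before committing to all 322,196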
Inherits docs from msmarco-document-v2
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/train docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/train')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
1 | Document contains a passage labeled as relevant in msmarco-passage | 332K | 100.0% |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/train qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/train')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/train")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/train scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier
Bibtex:
@inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} }
Metadata:
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } }, "queries": { "count": 322196 }, "qrels": { "count": 331956, "fields": { "relevance": { "counts_by_value": { "1": 331956 } } } }, "scoreddocs": { "count": 32218809 } }
msmarco-document-v2/trec-dl-2019
Queries from the TREC Deep Learning (DL) 2019 shared task, which were sampled from msmarco-document/eval. A subset of these queries was judged by NIST assessors (filtered list available in msmarco-document-v2/trec-dl-2019/judged).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Inherits docs from msmarco-document-v2
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Irrelevant: Document does not provide any useful information about the query | 8.2K | 59.0% |
1 | Relevant: Document provides some information relevant to the query, which may be minimal. | 4.0K | 28.4% |
2 | Highly relevant: The content of this document provides substantial information on the query. | 1.0K | 7.2% |
3 | Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine. | 745 | 5.3% |
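The distribution above can be reproduced directly from the qrels. A minimal sketch using the Python API:
import collections
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
counts = collections.Counter(qrel.relevance for qrel in dataset.qrels_iter())
print(counts)  # expected: {0: 8229, 1: 3957, 2: 1009, 3: 745}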
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Craswell2019TrecDl, title={Overview of the TREC 2019 deep learning track}, author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees}, booktitle={TREC 2019}, year={2019} }
@inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} }
Metadata:
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } }, "queries": { "count": 200 }, "qrels": { "count": 13940, "fields": { "relevance": { "counts_by_value": { "0": 8229, "1": 3957, "2": 1009, "3": 745 } } } } }
msmarco-document-v2/trec-dl-2019/judged
Subset of msmarco-document-v2/trec-dl-2019, only including queries with qrels.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019/judged")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019/judged queries
[query_id] [text]
...
You can find more details about the CLI here.
No example available for PyTerrier
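Although no PyTerrier example is provided, the relationship between this subset and the full query set can be checked with a short sketch: the 43 judged queries should be exactly those that appear in the trec-dl-2019 qrels.
import ir_datasets
full = ir_datasets.load("msmarco-document-v2/trec-dl-2019")
judged = ir_datasets.load("msmarco-document-v2/trec-dl-2019/judged")
judged_ids = {q.query_id for q in judged.queries_iter()}
qrel_ids = {qrel.query_id for qrel in full.qrels_iter()}
assert judged_ids == qrel_ids
print(len(judged_ids))  # 43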
Inherits docs from msmarco-document-v2
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019/judged")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019/judged docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2019/judged')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Inherits qrels from msmarco-document-v2/trec-dl-2019
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Irrelevant: Document does not provide any useful information about the query | 8.2K | 59.0% |
1 | Relevant: Document provides some information relevant to the query, which may be minimal. | 4.0K | 28.4% |
2 | Highly relevant: The content of this document provides substantial information on the query. | 1.0K | 7.2% |
3 | Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine. | 745 | 5.3% |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2019/judged")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2019/judged qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
Bibtex:
@inproceedings{Craswell2019TrecDl, title={Overview of the TREC 2019 deep learning track}, author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees}, booktitle={TREC 2019}, year={2019} }
@inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} }
Metadata:
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } }, "queries": { "count": 43 }, "qrels": { "count": 13940, "fields": { "relevance": { "counts_by_value": { "0": 8229, "1": 3957, "2": 1009, "3": 745 } } } } }
msmarco-document-v2/trec-dl-2020
Queries from the TREC Deep Learning (DL) 2020 shared task, which were sampled from msmarco-document/eval. A subset of these queries was judged by NIST assessors (filtered list available in msmarco-document-v2/trec-dl-2020/judged).
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Inherits docs from msmarco-document-v2
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Irrelevant: Document does not provide any useful information about the query | 6.4K | 80.2% |
1 | Relevant: Document provides some information relevant to the query, which may be minimal. | 1.1K | 13.3% |
2 | Highly relevant: The content of this document provides substantial information on the query. | 279 | 3.5% |
3 | Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine. | 233 | 2.9% |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Craswell2020TrecDl, title={Overview of the TREC 2020 deep learning track}, author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos}, booktitle={TREC}, year={2020} }
@inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} }
Metadata:
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } }, "queries": { "count": 200 }, "qrels": { "count": 7942, "fields": { "relevance": { "counts_by_value": { "0": 6371, "3": 233, "1": 1059, "2": 279 } } } } }
msmarco-document-v2/trec-dl-2020/judged
Subset of msmarco-document-v2/trec-dl-2020, only including queries with qrels.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020/judged")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020/judged queries
[query_id] [text]
...
You can find more details about the CLI here.
No example available for PyTerrier
Inherits docs from msmarco-document-v2
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020/judged")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020/judged docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2020/judged')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Inherits qrels from msmarco-document-v2/trec-dl-2020
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Irrelevant: Document does not provide any useful information about the query | 6.4K | 80.2% |
1 | Relevant: Document provides some information relevant to the query, which may be minimal. | 1.1K | 13.3% |
2 | Highly relevant: The content of this document provides substantial information on the query. | 279 | 3.5% |
3 | Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine. | 233 | 2.9% |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2020/judged")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2020/judged qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
Bibtex:
@inproceedings{Craswell2020TrecDl, title={Overview of the TREC 2020 deep learning track}, author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos}, booktitle={TREC}, year={2020} }
@inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} }
Metadata:
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } }, "queries": { "count": 45 }, "qrels": { "count": 7942, "fields": { "relevance": { "counts_by_value": { "0": 6371, "3": 233, "1": 1059, "2": 279 } } } } }
msmarco-document-v2/trec-dl-2021
Official topics for the TREC Deep Learning (DL) 2021 shared task.
Note that at this time, qrels are only available to those with TREC active participant login credentials.
Official evaluation measures: AP@100, nDCG@10, P@10, RR(rel=2)
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2021")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2021 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2021')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Inherits docs from msmarco-document-v2
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2021")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2021 docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2021')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Irrelevant: Document does not provide any useful information about the query | 4.9K | 37.2% |
1 | Relevant: Document provides some information relevant to the query, which may be minimal. | 4.2K | 32.0% |
2 | Highly relevant: The content of this document provides substantial information on the query. | 2.8K | 21.2% |
3 | Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine. | 1.3K | 9.6% |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2021")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2021 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2021')
index_ref = pt.IndexRef.of('./indices/msmarco-document-v2') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [AP@100, nDCG@10, P@10, RR(rel=2)]
)
You can find more details about PyTerrier experiments here.
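The official measures can also be computed outside PyTerrier with the ir_measures package, for example against an existing TREC-format run file. A minimal sketch (the run file name is hypothetical, and loading these qrels requires the TREC credentials noted above):
import ir_datasets
import ir_measures
from ir_measures import AP, nDCG, P, RR
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2021")
run = ir_measures.read_trec_run('my_run.trec')  # hypothetical run file
print(ir_measures.calc_aggregate([AP@100, nDCG@10, P@10, RR(rel=2)], dataset.qrels_iter(), run))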
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2021")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2021 scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } }, "queries": { "count": 477 }, "qrels": { "count": 13058, "fields": { "relevance": { "counts_by_value": { "2": 2769, "0": 4855, "3": 1256, "1": 4178 } } } }, "scoreddocs": { "count": 47700 } }
msmarco-document-v2/trec-dl-2021/judged
Subset of msmarco-document-v2/trec-dl-2021, filtered down to the 57 queries with qrels.
Note that at this time, the qrels are only available to those with TREC active participant login credentials.
Official evaluation measures: AP@100, nDCG@10, P@10, RR(rel=2)
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2021/judged")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2021/judged queries
[query_id] [text]
...
You can find more details about the CLI here.
No example available for PyTerrier
Inherits docs from msmarco-document-v2
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2021/judged")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, title, headings, body>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2021/judged docs
[doc_id] [url] [title] [headings] [body]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-document-v2/trec-dl-2021/judged')
# Index msmarco-document-v2
indexer = pt.IterDictIndexer('./indices/msmarco-document-v2')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['url', 'title', 'headings', 'body'])
You can find more details about PyTerrier indexing here.
Inherits qrels from msmarco-document-v2/trec-dl-2021
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Irrelevant: Document does not provide any useful information about the query | 4.9K | 37.2% |
1 | Relevant: Document provides some information relevant to the query, which may be minimal. | 4.2K | 32.0% |
2 | Highly relevant: The content of this document provides substantial information on the query. | 2.8K | 21.2% |
3 | Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine. | 1.3K | 9.6% |
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2021/judged")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2021/judged qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-document-v2/trec-dl-2021/judged")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-document-v2/trec-dl-2021/judged scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier
{ "docs": { "count": 11959635, "fields": { "doc_id": { "max_len": 25, "common_prefix": "msmarco_doc_" } } }, "queries": { "count": 57 }, "qrels": { "count": 13058, "fields": { "relevance": { "counts_by_value": { "2": 2769, "0": 4855, "3": 1256, "1": 4178 } } } }, "scoreddocs": { "count": 5700 } }