ir_datasets: MSMARCO (QnA)
The MS MARCO Question Answering dataset. This is the source collection of msmarco-passage and msmarco-document.
Query IDs in this collection align with those found in msmarco-passage and msmarco-document. The collection does not provide doc_ids, so these are assigned in the format [msmarco_passage_id]-[url_seq], where [msmarco_passage_id] is the document from msmarco-passage that has matching contents and [url_seq] is assigned sequentially for each URL encountered. In other words, all documents with the same prefix have the same text; they only differ in the originating document.
Doc msmarco_passage_id fields are assigned by matching passage contents in msmarco-passage, and this field is provided for every document. Doc msmarco_document_id fields are assigned by matching the URL to the one found in msmarco-document. Due to how msmarco-document was constructed, there is not necessarily a match (the value will be None if no match is found).
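Because a doc_id is just these two parts joined by a hyphen (and MS MARCO passage IDs themselves contain no hyphens), it can be split back apart. A minimal sketch under that assumption:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna")
for doc in dataset.docs_iter():
    # doc_id has the form [msmarco_passage_id]-[url_seq]
    passage_id, url_seq = doc.doc_id.rsplit("-", 1)
    assert passage_id == doc.msmarco_passage_id
    if doc.msmarco_document_id is None:
        pass # this document's URL has no match in msmarco-document
    break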
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, msmarco_passage_id, msmarco_document_id>
You can find more details about the Python API here.
ir_datasets export msmarco-qna docs
[doc_id] [text] [url] [msmarco_passage_id] [msmarco_document_id]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna')
# Index msmarco-qna
indexer = pt.IterDictIndexer('./indices/msmarco-qna')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'msmarco_passage_id'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.msmarco-qna')
for doc in dataset.iter_documents():
    print(doc) # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Bibtex:
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
Metadata:
{ "docs": { "count": 9048606, "fields": { "doc_id": { "max_len": 10, "common_prefix": "" } } } }
Official dev set.
The scoreddocs provide the roughly 10 passages presented to the user for annotation, where the score indicates the order in which they were presented.
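For example, the presented passages can be grouped per query and sorted by score to approximate the order shown to the annotator. A sketch; whether higher scores mean earlier presentation is an assumption here (sorted descending, as in a standard run), so verify the direction against the data:
from collections import defaultdict
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
presented = defaultdict(list)
for scoreddoc in dataset.scoreddocs_iter():
    presented[scoreddoc.query_id].append((scoreddoc.score, scoreddoc.doc_id))
# sort each query's passages by score (descending) to recover the order
order = {qid: [did for _, did in sorted(pairs, reverse=True)] for qid, pairs in presented.items()}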
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, type, answers>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/dev queries
[query_id] [text] [type] [answers]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/dev')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.msmarco-qna.dev.queries') # AdhocTopics
for topic in topics.iter():
    print(topic) # An AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from msmarco-qna
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, msmarco_passage_id, msmarco_document_id>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/dev docs
[doc_id] [text] [url] [msmarco_passage_id] [msmarco_document_id]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/dev')
# Index msmarco-qna
indexer = pt.IterDictIndexer('./indices/msmarco-qna')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'msmarco_passage_id'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.msmarco-qna.dev')
for doc in dataset.iter_documents():
    print(doc) # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Not marked by annotator as a contribution to their answer | 950K | 94.1% |
1 | Marked by annotator as a contribution to their answer | 59K | 5.9% |
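Only the relevance-1 judgments mark passages that actually contributed to the answer, so a common first step is to collect those per query. A short sketch:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
contributing = {}
for qrel in dataset.qrels_iter():
    if qrel.relevance > 0:
        # passages the annotator marked as contributing to their answer
        contributing.setdefault(qrel.query_id, set()).add(qrel.doc_id)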
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/dev qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/dev')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.msmarco-qna.dev.qrels') # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels) # the assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/dev scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/dev')
dataset.get_results()
You can find more details about PyTerrier dataset API here.
import datamaestro # requires that experimaestro-ir be installed
run = datamaestro.prepare_dataset('irds.msmarco-qna.dev.scoreddocs') # AdhocRun
# A run is a generic object that is specialized into final classes,
# e.g. TrecAdhocRun
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocRun.
Bibtex:
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
Metadata:
{ "docs": { "count": 9048606, "fields": { "doc_id": { "max_len": 10, "common_prefix": "" } } }, "queries": { "count": 101093 }, "qrels": { "count": 1008985, "fields": { "relevance": { "counts_by_value": { "0": 949712, "1": 59273 } } } }, "scoreddocs": { "count": 1008985 } }
Official eval set.
The scoreddocs provide the roughly 10 passages presented to the user for annotation, where the score indicates the order in which they were presented.
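Since the eval set provides no qrels, its scoreddocs mainly serve as candidate lists, e.g. for re-ranking or for export in TREC run format. A sketch (the output file name and run tag are hypothetical, and ranks simply follow iteration order per query):
from collections import defaultdict
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/eval")
next_rank = defaultdict(int)
with open("msmarco-qna-eval.run", "wt") as f:
    for sd in dataset.scoreddocs_iter():
        next_rank[sd.query_id] += 1
        f.write(f"{sd.query_id} Q0 {sd.doc_id} {next_rank[sd.query_id]} {sd.score} presented\n")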
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/eval")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, type>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/eval queries
[query_id] [text] [type]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/eval')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.msmarco-qna.eval.queries') # AdhocTopics
for topic in topics.iter():
    print(topic) # An AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from msmarco-qna
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/eval")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, msmarco_passage_id, msmarco_document_id>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/eval docs
[doc_id] [text] [url] [msmarco_passage_id] [msmarco_document_id]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/eval')
# Index msmarco-qna
indexer = pt.IterDictIndexer('./indices/msmarco-qna')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'msmarco_passage_id'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.msmarco-qna.eval')
for doc in dataset.iter_documents():
    print(doc) # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/eval")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/eval scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/eval')
dataset.get_results()
You can find more details about PyTerrier dataset API here.
import datamaestro # requires that experimaestro-ir be installed
run = datamaestro.prepare_dataset('irds.msmarco-qna.eval.scoreddocs') # AdhocRun
# A run is a generic object that is specialized into final classes,
# e.g. TrecAdhocRun
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocRun.
Bibtex:
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
Metadata:
{ "docs": { "count": 9048606, "fields": { "doc_id": { "max_len": 10, "common_prefix": "" } } }, "queries": { "count": 101092 }, "scoreddocs": { "count": 1008943 } }
Official train set.
The scoreddocs provide the roughly 10 passages presented to the user for annotation, where the score indicates the order in which they were presented.
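The presented passages can also be fetched individually: docs_store() gives random access to documents by doc_id. A brief sketch:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
docs = dataset.docs_store()
for scoreddoc in dataset.scoreddocs_iter():
    passage = docs.get(scoreddoc.doc_id) # the passage shown to the annotator
    break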
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, type, answers>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/train queries
[query_id] [text] [type] [answers]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/train')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.msmarco-qna.train.queries') # AdhocTopics
for topic in topics.iter():
    print(topic) # An AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from msmarco-qna
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, msmarco_passage_id, msmarco_document_id>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/train docs
[doc_id] [text] [url] [msmarco_passage_id] [msmarco_document_id]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/train')
# Index msmarco-qna
indexer = pt.IterDictIndexer('./indices/msmarco-qna')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'msmarco_passage_id'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.msmarco-qna.train')
for doc in dataset.iter_documents():
    print(doc) # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Not marked by annotator as a contribution to their answer | 7.5M | 93.4% |
1 | Marked by annotator as a contribution to their answer | 533K | 6.6% |
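Since every presented passage is judged (the qrels and scoreddocs counts match), the train qrels split directly into positives (marked as contributing) and negatives (presented but not marked), e.g. to build training pairs for a ranker. A sketch:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
positives, negatives = [], []
for qrel in dataset.qrels_iter():
    pair = (qrel.query_id, qrel.doc_id)
    (positives if qrel.relevance > 0 else negatives).append(pair)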
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/train qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/train')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.msmarco-qna.train.qrels') # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels) # the assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Examples:
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export msmarco-qna/train scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/train')
dataset.get_results()
You can find more details about PyTerrier dataset API here.
import datamaestro # requires that experimaestro-ir be installed
run = datamaestro.prepare_dataset('irds.msmarco-qna.train.scoreddocs') # AdhocRun
# A run is a generic object that is specialized into final classes,
# e.g. TrecAdhocRun
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocRun.
Bibtex:
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
Metadata:
{ "docs": { "count": 9048606, "fields": { "doc_id": { "max_len": 10, "common_prefix": "" } } }, "queries": { "count": 808731 }, "qrels": { "count": 8069749, "fields": { "relevance": { "counts_by_value": { "1": 532761, "0": 7536988 } } } }, "scoreddocs": { "count": 8069749 } }