Github: datasets/msmarco_qna.py

ir_datasets: MSMARCO (QnA)

Index
  1. msmarco-qna
  2. msmarco-qna/dev
  3. msmarco-qna/eval
  4. msmarco-qna/train

"msmarco-qna"

The MS MARCO Question Answering dataset. This is the source collection of msmarco-passage and msmarco-document.

It is prohibited to use information from this dataset for submissions to the MS MARCO passage and document leaderboards or the TREC DL shared task.

Query IDs in this collection align with those found in msmarco-passage and msmarco-document. The collection does not provide doc_ids, so they are assigned in the format [msmarco_passage_id]-[url_seq], where [msmarco_passage_id] is the document from msmarco-passage with matching contents and [url_seq] is assigned sequentially for each URL encountered. In other words, all documents with the same prefix have the same text; they differ only in the URL from which they originated.

Doc msmarco_passage_id fields are assigned by matching passage contents in msmarco-passage, and this field is provided for every document. Doc msmarco_document_id fields are assigned by matching the URL to the one found in msmarco-document. Due to how msmarco-document was constructed, there is not necessarily a match (the value will be None if no match is found).
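As a sketch of how these doc_ids decompose (the helper name and the sample ID below are hypothetical), splitting on the last hyphen recovers the two components:

```python
def parse_doc_id(doc_id: str):
    """Split a msmarco-qna doc_id of the form [msmarco_passage_id]-[url_seq].

    [url_seq] is a sequentially assigned integer with no hyphens, so
    splitting on the last hyphen cleanly separates the two parts.
    """
    passage_id, _, url_seq = doc_id.rpartition("-")
    return passage_id, int(url_seq)

print(parse_doc_id("7067032-0"))  # ('7067032', 0)
```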

docs
9.0M docs

Language: en

Document type:
MsMarcoQnADoc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. url: str
  4. msmarco_passage_id: str
  5. msmarco_document_id: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, msmarco_passage_id, msmarco_document_id>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna docs
[doc_id]    [text]    [url]    [msmarco_passage_id]    [msmarco_document_id]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna')
# Index msmarco-qna
indexer = pt.IterDictIndexer('./indices/msmarco-qna')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'msmarco_passage_id'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.msmarco-qna')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

Citation

ir_datasets.bib:

\cite{Bajaj2016Msmarco}

Bibtex:

@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

"msmarco-qna/dev"

Official dev set.

The scoreddocs provide the roughly 10 passages presented to the user for annotation, where the score indicates the order in which they were presented.
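To recover the presentation order per query, one can group the scoreddocs by query and sort each group by score. The sketch below uses hypothetical tuples in the shape of the GenericScoredDoc records and assumes lower scores mean earlier presentation; flip the sort if the scores run the other way:

```python
from collections import defaultdict

# Hypothetical (query_id, doc_id, score) tuples standing in for
# GenericScoredDoc records; the score encodes the presentation order.
scoreddocs = [
    ("q1", "d3", 3.0),
    ("q1", "d1", 1.0),
    ("q2", "d9", 1.0),
    ("q1", "d2", 2.0),
]

by_query = defaultdict(list)
for query_id, doc_id, score in scoreddocs:
    by_query[query_id].append((score, doc_id))

# Sort each query's passages by score to recover the order shown to the user.
presentation_order = {q: [d for _, d in sorted(docs)] for q, docs in by_query.items()}
print(presentation_order["q1"])  # ['d1', 'd2', 'd3']
```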

queries
101K queries

Language: en

Query type:
MsMarcoQnAQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. type: str
  4. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, type, answers>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/dev queries
[query_id]    [text]    [type]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/dev')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.msmarco-qna.dev.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
9.0M docs

Inherits docs from msmarco-qna

Language: en

Document type:
MsMarcoQnADoc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. url: str
  4. msmarco_passage_id: str
  5. msmarco_document_id: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, msmarco_passage_id, msmarco_document_id>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/dev docs
[doc_id]    [text]    [url]    [msmarco_passage_id]    [msmarco_document_id]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/dev')
# Index msmarco-qna
indexer = pt.IterDictIndexer('./indices/msmarco-qna')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'msmarco_passage_id'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.msmarco-qna.dev')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
1.0M qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition                                                   Count   %
 0    Not marked by annotator as a contribution to their answer    950K    94.1%
 1    Marked by annotator as a contribution to their answer         59K     5.9%
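A common way to consume these qrels is to build a per-query lookup of the passages marked as contributing to the answer (relevance level 1). The sketch below uses hypothetical TrecQrel-shaped tuples:

```python
from collections import defaultdict

# Hypothetical (query_id, doc_id, relevance, iteration) tuples standing
# in for TrecQrel records.
qrels = [
    ("q1", "d1", 1, "0"),
    ("q1", "d2", 0, "0"),
    ("q2", "d3", 1, "0"),
]

# Keep only passages the annotator marked as contributing to their answer.
relevant = defaultdict(set)
for query_id, doc_id, relevance, _ in qrels:
    if relevance >= 1:
        relevant[query_id].add(doc_id)

print(sorted(relevant["q1"]))  # ['d1']
```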

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/dev qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/dev')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.msmarco-qna.dev.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

scoreddocs
1.0M scoreddocs
Scored Document type:
GenericScoredDoc: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. score: float

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/dev scoreddocs --format tsv
[query_id]    [doc_id]    [score]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/dev')
dataset.get_results()

You can find more details about PyTerrier dataset API here.

XPM-IR
import datamaestro  # requires that experimaestro-ir be installed

run = datamaestro.prepare_dataset('irds.msmarco-qna.dev.scoreddocs')  # AdhocRun
# A run is a generic object and is specialized into final classes,
# e.g. TrecAdhocRun

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocRun.

Citation

ir_datasets.bib:

\cite{Bajaj2016Msmarco}

Bibtex:

@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

"msmarco-qna/eval"

Official eval set.

The scoreddocs provide the roughly 10 passages presented to the user for annotation, where the score indicates the order in which they were presented.

queries
101K queries

Language: en

Query type:
MsMarcoQnAEvalQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. type: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/eval")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, type>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/eval queries
[query_id]    [text]    [type]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/eval')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.msmarco-qna.eval.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
9.0M docs

Inherits docs from msmarco-qna

Language: en

Document type:
MsMarcoQnADoc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. url: str
  4. msmarco_passage_id: str
  5. msmarco_document_id: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/eval")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, msmarco_passage_id, msmarco_document_id>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/eval docs
[doc_id]    [text]    [url]    [msmarco_passage_id]    [msmarco_document_id]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/eval')
# Index msmarco-qna
indexer = pt.IterDictIndexer('./indices/msmarco-qna')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'msmarco_passage_id'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.msmarco-qna.eval')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

scoreddocs
1.0M scoreddocs
Scored Document type:
GenericScoredDoc: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. score: float

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/eval")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/eval scoreddocs --format tsv
[query_id]    [doc_id]    [score]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/eval')
dataset.get_results()

You can find more details about PyTerrier dataset API here.

XPM-IR
import datamaestro  # requires that experimaestro-ir be installed

run = datamaestro.prepare_dataset('irds.msmarco-qna.eval.scoreddocs')  # AdhocRun
# A run is a generic object and is specialized into final classes,
# e.g. TrecAdhocRun

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocRun.

Citation

ir_datasets.bib:

\cite{Bajaj2016Msmarco}

Bibtex:

@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}

"msmarco-qna/train"

Official train set.

The scoreddocs provide the roughly 10 passages presented to the user for annotation, where the score indicates the order in which they were presented.

queries
809K queries

Language: en

Query type:
MsMarcoQnAQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. type: str
  4. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, type, answers>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/train queries
[query_id]    [text]    [type]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/train')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.msmarco-qna.train.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
9.0M docs

Inherits docs from msmarco-qna

Language: en

Document type:
MsMarcoQnADoc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. url: str
  4. msmarco_passage_id: str
  5. msmarco_document_id: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, msmarco_passage_id, msmarco_document_id>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/train docs
[doc_id]    [text]    [url]    [msmarco_passage_id]    [msmarco_document_id]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/train')
# Index msmarco-qna
indexer = pt.IterDictIndexer('./indices/msmarco-qna')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'msmarco_passage_id'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.msmarco-qna.train')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
8.1M qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition                                                   Count   %
 0    Not marked by annotator as a contribution to their answer    7.5M    93.4%
 1    Marked by annotator as a contribution to their answer        533K     6.6%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/train qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/train')
index_ref = pt.IndexRef.of('./indices/msmarco-qna') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.msmarco-qna.train.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

scoreddocs
8.1M scoreddocs
Scored Document type:
GenericScoredDoc: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. score: float

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>

You can find more details about the Python API here.

CLI
ir_datasets export msmarco-qna/train scoreddocs --format tsv
[query_id]    [doc_id]    [score]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:msmarco-qna/train')
dataset.get_results()

You can find more details about PyTerrier dataset API here.

XPM-IR
import datamaestro  # requires that experimaestro-ir be installed

run = datamaestro.prepare_dataset('irds.msmarco-qna.train.scoreddocs')  # AdhocRun
# A run is a generic object and is specialized into final classes,
# e.g. TrecAdhocRun

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocRun.

Citation

ir_datasets.bib:

\cite{Bajaj2016Msmarco}

Bibtex:

@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}