Github: datasets/msmarco_qna.py

ir_datasets: MSMARCO (QnA)

Index
  1. msmarco-qna
  2. msmarco-qna/dev
  3. msmarco-qna/eval
  4. msmarco-qna/train

"msmarco-qna"

The MS MARCO Question Answering dataset. This is the source collection of msmarco-passage and msmarco-document.

It is prohibited to use information from this dataset for submissions to the MS MARCO passage and document leaderboards or the TREC DL shared task.

Query IDs in this collection align with those found in msmarco-passage and msmarco-document. The collection does not provide doc_ids, so they are assigned in the format [msmarco_passage_id]-[url_seq], where [msmarco_passage_id] is the document from msmarco-passage with matching contents and [url_seq] is assigned sequentially for each URL encountered. In other words, all documents with the same prefix have the same text; they differ only in the document they originated from.

Doc msmarco_passage_id fields are assigned by matching passage contents in msmarco-passage, and this field is provided for every document. Doc msmarco_document_id fields are assigned by matching the URL to the one found in msmarco-document. Due to how msmarco-document was constructed, there is not necessarily a match (the value will be None if there is no match).
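
As a minimal sketch of the doc_id format described above (assuming the [msmarco_passage_id] portion never contains a "-" itself, since msmarco-passage IDs are numeric strings), a doc_id can be split back into its two parts:

# Sketch: unpack a msmarco-qna doc_id of the form [msmarco_passage_id]-[url_seq].
# Assumes the passage ID contains no '-', so a single rsplit suffices.
def split_doc_id(doc_id: str):
    msmarco_passage_id, url_seq = doc_id.rsplit("-", 1)
    return msmarco_passage_id, int(url_seq)

# Illustrative doc_id only; not taken from the dataset.
print(split_doc_id("7067032-0"))  # -> ('7067032', 0)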

Provides: docs
9.0M docs

Language: en

Document type:
MsMarcoQnADoc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. url: str
  4. msmarco_passage_id: str
  5. msmarco_document_id: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, msmarco_passage_id, msmarco_document_id>

You can find more details about the Python API here.
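
Beyond iteration, ir_datasets also supports random-access lookup via docs_store(). The sketch below assumes that API and uses an illustrative doc_id (not taken from the dataset) to show how the msmarco_passage_id and msmarco_document_id fields described above can be inspected:

import ir_datasets

dataset = ir_datasets.load("msmarco-qna")
docs_store = dataset.docs_store()

# Random-access lookup of a single document; the doc_id is illustrative only.
doc = docs_store.get("7067032-0")
print(doc.url)
print(doc.text[:80])

# msmarco_passage_id is always populated; msmarco_document_id is None
# when the URL has no match in msmarco-document.
if doc.msmarco_document_id is None:
    print("no matching msmarco-document for this URL")
else:
    print("msmarco-document match:", doc.msmarco_document_id)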


"msmarco-qna/dev"

Official dev set.

The scoreddocs provide the roughly 10 passages presented to the user for annotation, where the score indicates the order in which they were presented (see the sketch after the examples below).

Provides: queries, docs, qrels, scoreddocs
101K queries

Language: en

Query type:
MsMarcoQnAQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. type: str
  4. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/dev")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, type, answers>

You can find more details about the Python API here.
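
To make the scoreddocs ordering concrete, here is a sketch that groups the scored passages by query and sorts them by score. The field names are assumed to be the generic (query_id, doc_id, score) triple, and treating ascending score as presentation order is also an assumption:

import ir_datasets
from collections import defaultdict

dataset = ir_datasets.load("msmarco-qna/dev")

# Group the ~10 scored passages per query; the score encodes presentation order.
by_query = defaultdict(list)
for sdoc in dataset.scoreddocs_iter():
    by_query[sdoc.query_id].append((sdoc.score, sdoc.doc_id))

query_id, passages = next(iter(by_query.items()))
for score, doc_id in sorted(passages):
    print(query_id, score, doc_id)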


"msmarco-qna/eval"

Official eval set.

The scoreddocs provide the roughly 10 passages presented to the user for annotation, where the score indicates the order in which they were presented.

Provides: queries, docs, scoreddocs
101K queries

Language: en

Query type:
MsMarcoQnAEvalQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. type: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/eval")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, type>

You can find more details about the Python API here.
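
Since eval queries carry no answers, the type field is the main extra signal. A small sketch tallying it is shown below; the label set (e.g. DESCRIPTION, NUMERIC, ENTITY, LOCATION, PERSON in MS MARCO QnA) is an assumption, as this page only documents the field as a str:

import ir_datasets
from collections import Counter

dataset = ir_datasets.load("msmarco-qna/eval")

# Tally the `type` field across eval queries.
type_counts = Counter(query.type for query in dataset.queries_iter())
print(type_counts.most_common())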


"msmarco-qna/train"

Official train set.

The scoreddocs provide the roughly 10 passages presented to the user for annotation, where the score indicates the order in which they were presented.

Provides: queries, docs, qrels, scoreddocs
809K queries

Language: en

Query type:
MsMarcoQnAQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. type: str
  4. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("msmarco-qna/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, type, answers>

You can find more details about the Python API here.
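
Because train queries include an answers tuple, they can be turned into (question, answer) pairs, e.g. for training a QA model. A minimal sketch follows; it simply skips queries with an empty answers tuple, and whether unanswerable queries are instead marked with a placeholder answer string is not verified here:

import ir_datasets

dataset = ir_datasets.load("msmarco-qna/train")

# Collect a handful of (question, answer) pairs from the train queries.
pairs = []
for query in dataset.queries_iter():
    for answer in query.answers:
        pairs.append((query.text, answer))
    if len(pairs) >= 5:
        break

for question, answer in pairs:
    print(question, "->", answer)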