ir_datasets: Natural Questions
Google Natural Questions is a Q&A dataset containing long, short, and Yes/No answers from Wikipedia. ir_datasets frames this around an ad-hoc ranking setting by building a collection of all long answer candidate passages. However, short and Yes/No annotations are also available in the qrels, as are the passages presented to the annotators (via scoreddocs).
Importantly, the document collection does not consist of all Wikipedia passages, but is instead a union of the candidate passages presented to the annotators (akin to MS MARCO). dpr-w100/natural-questions/train and dpr-w100/natural-questions/dev contain a filtered set of the questions in this dataset over a full Wikipedia passage collection, which is a more realistic retrieval setting.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("natural-questions")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, html, start_byte, end_byte, start_token, end_token, document_title, document_url, parent_doc_id>
You can find more details about the Python API here.
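Candidate passages can be nested (for instance, a list item inside a larger candidate), which appears to be what parent_doc_id records. A minimal sketch of following that link with the docs_store() random-access API (assuming parent_doc_id is empty or None for top-level candidates):
import ir_datasets
dataset = ir_datasets.load("natural-questions")
store = dataset.docs_store()  # random access to docs by doc_id
doc = next(iter(dataset.docs_iter()))  # grab an arbitrary passage
if doc.parent_doc_id:
    # look up the enclosing long answer candidate
    parent = store.get(doc.parent_doc_id)
    print(parent.document_title, parent.text[:80])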
ir_datasets export natural-questions docs
[doc_id] [text] [html] [start_byte] [end_byte] [start_token] [end_token] [document_title] [document_url] [parent_doc_id]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:natural-questions')
# Index natural-questions
indexer = pt.IterDictIndexer('./indices/natural-questions')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'html', 'document_title', 'document_url', 'parent_doc_id'])
You can find more details about PyTerrier indexing here.
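Indexing the html and metadata fields above makes for a large index. If you only need ranking over the passage text, a lighter variant of the same call is possible (a sketch using the same IterDictIndexer API with fewer fields; the index path is illustrative):
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:natural-questions')
# index only the passage text to keep the index small
indexer = pt.IterDictIndexer('./indices/natural-questions-text')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])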
Official dev set.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("natural-questions/dev")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export natural-questions/dev queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:natural-questions/dev')
index_ref = pt.IndexRef.of('./indices/natural-questions') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from natural-questions
Examples:
import ir_datasets
dataset = ir_datasets.load("natural-questions/dev")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, html, start_byte, end_byte, start_token, end_token, document_title, document_url, parent_doc_id>
You can find more details about the Python API here.
ir_datasets export natural-questions/dev docs
[doc_id] [text] [html] [start_byte] [end_byte] [start_token] [end_token] [document_title] [document_url] [parent_doc_id]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:natural-questions/dev')
# Index natural-questions
indexer = pt.IterDictIndexer('./indices/natural-questions')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'html', 'document_title', 'document_url', 'parent_doc_id'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | passage marked by annotator as a "long" answer to the question |
Examples:
import ir_datasets
dataset = ir_datasets.load("natural-questions/dev")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, short_answers, yes_no_answer>
You can find more details about the Python API here.
ir_datasets export natural-questions/dev qrels --format tsv
[query_id] [doc_id] [relevance] [short_answers] [yes_no_answer]
...
You can find more details about the CLI here.
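Beyond the binary relevance value, each qrel carries the annotator's short answers and Yes/No judgment. A minimal sketch of collecting them per question (assuming short_answers is a list of answer strings and yes_no_answer is a flag such as "YES", "NO", or "NONE"):
import ir_datasets
dataset = ir_datasets.load("natural-questions/dev")
answers = {}
for qrel in dataset.qrels_iter():
    if qrel.short_answers:  # not every long answer has a short answer
        answers.setdefault(qrel.query_id, []).extend(qrel.short_answers)
    if qrel.yes_no_answer != 'NONE':  # assumed sentinel for "no Yes/No judgment"
        answers.setdefault(qrel.query_id, []).append(qrel.yes_no_answer)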
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:natural-questions/dev')
index_ref = pt.IndexRef.of('./indices/natural-questions') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
[pipeline],
dataset.get_topics(),
dataset.get_qrels(),
[MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Examples:
import ir_datasets
dataset = ir_datasets.load("natural-questions/dev")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export natural-questions/dev scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier
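Although there is no official PyTerrier example for scoreddocs, they can be turned into a run-style DataFrame for reranking the annotator-presented candidates. A sketch (the column renames follow PyTerrier's qid/docno/score convention; attaching query text and a reranker is left out):
import pandas as pd
import ir_datasets
dataset = ir_datasets.load("natural-questions/dev")
# one row per (query, candidate passage) pair presented to the annotators
run = pd.DataFrame(list(dataset.scoreddocs_iter()))
run = run.rename(columns={'query_id': 'qid', 'doc_id': 'docno'})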
Official train set.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("natural-questions/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export natural-questions/train queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:natural-questions/train')
index_ref = pt.IndexRef.of('./indices/natural-questions') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from natural-questions
Examples:
import ir_datasets
dataset = ir_datasets.load("natural-questions/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, html, start_byte, end_byte, start_token, end_token, document_title, document_url, parent_doc_id>
You can find more details about the Python API here.
ir_datasets export natural-questions/train docs
[doc_id] [text] [html] [start_byte] [end_byte] [start_token] [end_token] [document_title] [document_url] [parent_doc_id]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:natural-questions/train')
# Index natural-questions
indexer = pt.IterDictIndexer('./indices/natural-questions')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'html', 'document_title', 'document_url', 'parent_doc_id'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | passage marked by annotator as a "long" answer to the question |
Examples:
import ir_datasets
dataset = ir_datasets.load("natural-questions/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, short_answers, yes_no_answer>
You can find more details about the Python API here.
ir_datasets export natural-questions/train qrels --format tsv
[query_id] [doc_id] [relevance] [short_answers] [yes_no_answer]
...
You can find more details about the CLI here.
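The train qrels pair each question with its marked long answer passage, which is convenient for building positive examples to train a retriever. A minimal sketch (assuming docs_store() lookups as above; materializing all pairs in memory is for illustration only):
import ir_datasets
dataset = ir_datasets.load("natural-questions/train")
queries = {q.query_id: q.text for q in dataset.queries_iter()}
store = dataset.docs_store()
# (question text, long answer passage text) positive pairs
pairs = [(queries[qrel.query_id], store.get(qrel.doc_id).text)
         for qrel in dataset.qrels_iter() if qrel.relevance >= 1]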
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:natural-questions/train')
index_ref = pt.IndexRef.of('./indices/natural-questions') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
[pipeline],
dataset.get_topics(),
dataset.get_qrels(),
[MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Examples:
import ir_datasets
dataset = ir_datasets.load("natural-questions/train")
for scoreddoc in dataset.scoreddocs_iter():
    scoreddoc # namedtuple<query_id, doc_id, score>
You can find more details about the Python API here.
ir_datasets export natural-questions/train scoreddocs --format tsv
[query_id] [doc_id] [score]
...
You can find more details about the CLI here.
No example available for PyTerrier