GitHub: datasets/dpr_w100.py

ir_datasets: DPR Wiki100

Index
  1. dpr-w100
  2. dpr-w100/natural-questions/dev
  3. dpr-w100/natural-questions/train
  4. dpr-w100/trivia-qa/dev
  5. dpr-w100/trivia-qa/train

"dpr-w100"

A Wikipedia dump from 20 December 2018, split into passages of 100 words. Used in the DPR paper (and subsequent work) for retrieval experiments over Q&A collections.

docs
21M docs

Language: en

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.
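
Iteration scans the whole corpus; for random access by doc_id, ir_datasets also provides a docs store. A minimal sketch (the id "1" is illustrative; passage ids in this dump are numeric strings):

import ir_datasets
dataset = ir_datasets.load("dpr-w100")
docs_store = dataset.docs_store()  # builds a lookup structure on first use
doc = docs_store.get("1")          # "1" is an illustrative doc_id
print(doc.title, doc.text[:80])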

CLI
ir_datasets export dpr-w100 docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.dpr-w100')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

Citation

ir_datasets.bib:

\cite{Karpukhin2020Dpr}

Bibtex:

@misc{Karpukhin2020Dpr,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

"dpr-w100/natural-questions/dev"

Dev subset of the Natural Questions Q&A collection. It differs from the natural-questions/dev dataset in that it uses the full Wikipedia dump and applies the additional filtering described in the DPR paper.

queries
6.5K queries

Language: en

Query type:
DprW100Query: (namedtuple)
  1. query_id: str
  2. text: str
  3. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/dev")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, answers>

You can find more details about the Python API here.
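
Each query carries the accepted answer strings, which is essentially the signal behind relevance level 1 in the qrels below. A minimal sketch of an answer-containment check (the contains_answer helper is illustrative; DPR's own matching normalizes text more carefully):

import ir_datasets

def contains_answer(passage_text, answers):
    # naive case-insensitive substring match (illustrative only)
    text = passage_text.lower()
    return any(answer.lower() in text for answer in answers)

dataset = ir_datasets.load("dpr-w100/natural-questions/dev")
query = next(iter(dataset.queries_iter()))
doc = next(iter(dataset.docs_iter()))
print(contains_answer(doc.text, query.answers))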

CLI
ir_datasets export dpr-w100/natural-questions/dev queries
[query_id]    [text]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/dev')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.dpr-w100.natural-questions.dev.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
21M docs

Inherits docs from dpr-w100

Language: en

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/dev")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/dev docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/dev')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.dpr-w100.natural-questions.dev')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
980K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition                                                       Count  %
-1    negative samples                                                 326K   33.2%
0     "hard" negative samples                                          603K   61.5%
1     contains the answer text and retrieved in the top BM25 results   45K    4.6%
2     marked by human annotator as containing the answer               6.5K   0.7%
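
Because levels -1 and 0 are negatives, retrieval evaluation over this dataset typically treats only levels 1 and 2 as relevant. A minimal sketch collecting the positive passage ids per query:

import ir_datasets
from collections import defaultdict

dataset = ir_datasets.load("dpr-w100/natural-questions/dev")
positives = defaultdict(set)
for qrel in dataset.qrels_iter():
    if qrel.relevance >= 1:  # levels 1 and 2 count as relevant
        positives[qrel.query_id].add(qrel.doc_id)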

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/dev")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/dev qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/dev')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.dpr-w100.natural-questions.dev.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Kwiatkowski2019Nq,Karpukhin2020Dpr}

Bibtex:

@article{Kwiatkowski2019Nq,
  title={Natural Questions: a Benchmark for Question Answering Research},
  author={Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  year={2019},
  journal={TACL}
}

@misc{Karpukhin2020Dpr,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

"dpr-w100/natural-questions/train"

Training subset of the Natural Questions Q&A collection. It differs from the natural-questions/train dataset in that it uses the full Wikipedia dump and applies the additional filtering described in the DPR paper.

queries
59K queries

Language: en

Query type:
DprW100Query: (namedtuple)
  1. query_id: str
  2. text: str
  3. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, answers>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/train queries
[query_id]    [text]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/train')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.dpr-w100.natural-questions.train.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
21M docs

Inherits docs from dpr-w100

Language: en

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/train docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/train')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.dpr-w100.natural-questions.train')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
8.9M qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition                                                       Count  %
-1    negative samples                                                 2.9M   33.2%
0     "hard" negative samples                                          5.4M   61.5%
1     contains the answer text and retrieved in the top BM25 results   406K   4.6%
2     marked by human annotator as containing the answer               59K    0.7%
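
Since the qrels preserve DPR's positive / hard-negative structure, they can be regrouped into DPR-style training examples. A minimal sketch (grouping only; the sampling and batching used in actual DPR training are left out):

import ir_datasets
from collections import defaultdict

dataset = ir_datasets.load("dpr-w100/natural-questions/train")
examples = defaultdict(lambda: {"pos": [], "hard_neg": []})
for qrel in dataset.qrels_iter():
    if qrel.relevance >= 1:      # levels 1 and 2 are positives
        examples[qrel.query_id]["pos"].append(qrel.doc_id)
    elif qrel.relevance == 0:    # "hard" negative samples
        examples[qrel.query_id]["hard_neg"].append(qrel.doc_id)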

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/train qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/train')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.dpr-w100.natural-questions.train.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Kwiatkowski2019Nq,Karpukhin2020Dpr}

Bibtex:

@article{Kwiatkowski2019Nq,
  title={Natural Questions: a Benchmark for Question Answering Research},
  author={Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  year={2019},
  journal={TACL}
}

@misc{Karpukhin2020Dpr,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

"dpr-w100/trivia-qa/dev"

Dev subset of the Trivia QA dataset. Unlike the official Trivia QA collection, it uses the DPR Wikipedia dump as the source collection. Refer to the DPR paper for more details.

queries
8.8K queries

Language: en

Query type:
DprW100Query: (namedtuple)
  1. query_id: str
  2. text: str
  3. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/dev")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, answers>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/dev queries
[query_id]    [text]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/dev')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.dpr-w100.trivia-qa.dev.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
21M docs

Inherits docs from dpr-w100

Language: en

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/dev")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/dev docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/dev')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.dpr-w100.trivia-qa.dev')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
884K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition                                                       Count  %
-1    negative samples                                                 0      0.0%
0     "hard" negative samples                                          801K   90.6%
1     contains the answer text and retrieved in the top BM25 results   83K    9.4%
2     marked by human annotator as containing the answer               0      0.0%
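
Only levels 0 and 1 appear in the Trivia QA qrels, as the table above shows. A quick sketch that reproduces the distribution by counting relevance levels:

import ir_datasets
from collections import Counter

dataset = ir_datasets.load("dpr-w100/trivia-qa/dev")
counts = Counter(qrel.relevance for qrel in dataset.qrels_iter())
total = sum(counts.values())
for relevance, count in sorted(counts.items()):
    print(f"{relevance}: {count} ({count / total:.1%})")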

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/dev")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/dev qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/dev')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.dpr-w100.trivia-qa.dev.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Joshi2017TriviaQA,Karpukhin2020Dpr}

Bibtex:

@inproceedings{Joshi2017TriviaQA,
  title={TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension},
  author={Mandar Joshi and Eunsol Choi and Daniel S. Weld and Luke Zettlemoyer},
  booktitle={ACL},
  year={2017}
}

@misc{Karpukhin2020Dpr,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

"dpr-w100/trivia-qa/train"

Training subset of the Trivia QA dataset. Unlike the official Trivia QA collection, it uses the DPR Wikipedia dump as the source collection. Refer to the DPR paper for more details.

queries
79K queries

Language: en

Query type:
DprW100Query: (namedtuple)
  1. query_id: str
  2. text: str
  3. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, answers>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/train queries
[query_id]    [text]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/train')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.dpr-w100.trivia-qa.train.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
21M docs

Inherits docs from dpr-w100

Language: en

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/train docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/train')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.dpr-w100.trivia-qa.train')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
7.9M qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition                                                       Count  %
-1    negative samples                                                 0      0.0%
0     "hard" negative samples                                          7.1M   90.6%
1     contains the answer text and retrieved in the top BM25 results   741K   9.4%
2     marked by human annotator as containing the answer               0      0.0%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/train qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/train')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.dpr-w100.trivia-qa.train.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Joshi2017TriviaQA,Karpukhin2020Dpr}

Bibtex:

@inproceedings{Joshi2017TriviaQA,
  title={TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension},
  author={Mandar Joshi and Eunsol Choi and Daniel S. Weld and Luke Zettlemoyer},
  booktitle={ACL},
  year={2017}
}

@misc{Karpukhin2020Dpr,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}