GitHub: datasets/dpr_w100.py

ir_datasets: DPR Wiki100

Index
  1. dpr-w100
  2. dpr-w100/natural-questions/dev
  3. dpr-w100/natural-questions/train
  4. dpr-w100/trivia-qa/dev
  5. dpr-w100/trivia-qa/train

"dpr-w100"

An English Wikipedia dump from 20 December 2018, split into passages of 100 words each. Used in the DPR paper (and subsequent work) for retrieval experiments over Q&A collections.
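The 100-word passaging scheme can be illustrated with a short sketch. This is a simplified illustration only, not the exact DPR preprocessing (which handles tokenization and article boundaries in its own way); `split_into_passages` is a hypothetical helper, not part of ir_datasets:

```python
# Sketch: split an article's text into non-overlapping 100-word passages,
# approximating (not reproducing) the DPR preprocessing.
def split_into_passages(text, words_per_passage=100):
    words = text.split()
    return [
        " ".join(words[i:i + words_per_passage])
        for i in range(0, len(words), words_per_passage)
    ]

article = " ".join(f"w{i}" for i in range(250))  # a 250-word dummy article
passages = split_into_passages(article)
print(len(passages))              # 3 passages (100 + 100 + 50 words)
print(len(passages[-1].split())) # 50
```

The final passage of an article may be shorter than 100 words, as shown by the trailing 50-word chunk above.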

docs

Language: en

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100 docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

Citation
bibtex:
@misc{karpukhin2020dense,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

"dpr-w100/natural-questions/dev"

Dev subset from the Natural Questions Q&A collection. This differs from the natural-questions/dev dataset in that it uses the full Wikipedia dump and applies additional filtering (described in the DPR paper).

queries

Language: en

Query type:
DprW100Query: (namedtuple)
  1. query_id: str
  2. text: str
  3. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/dev")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, answers>

You can find more details about the Python API here.
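Because each query carries its answer strings, the query type can drive answer-containment checks like the string match behind relevance level 1 below. A minimal sketch using plain case-insensitive substring matching (the actual DPR matching normalizes and tokenizes text first; `contains_answer` is a hypothetical helper, not part of ir_datasets):

```python
from typing import Tuple

# Sketch: does a passage contain any of a query's answer strings?
# Plain case-insensitive substring matching, for illustration only.
def contains_answer(passage: str, answers: Tuple[str, ...]) -> bool:
    passage_lower = passage.lower()
    return any(ans.lower() in passage_lower for ans in answers)

answers = ("Danqi Chen", "Wen-tau Yih")
print(contains_answer("...coauthored by Danqi Chen...", answers))  # True
print(contains_answer("an unrelated passage", answers))            # False
```

Substring matching of this kind over-counts (e.g. partial-word hits) and under-counts (e.g. answer paraphrases), which is one reason the qrels also include human judgments at level 2.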

CLI
ir_datasets export dpr-w100/natural-questions/dev queries
[query_id]    [text]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/dev')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

docs

Language: en

Note: Uses docs from dpr-w100

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/dev")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/dev docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/dev')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition
 -1   negative samples
  0   "hard" negative samples
  1   contains the answer text and was retrieved in the top BM25 results
  2   marked by a human annotator as containing the answer
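Since the qrels are graded, downstream code typically partitions them by level, e.g. keeping only positives (levels 1 and 2) for training or evaluation. A minimal sketch over hand-made qrels records mirroring the TrecQrel type above:

```python
from collections import namedtuple

# Mirrors the TrecQrel namedtuple documented above.
TrecQrel = namedtuple("TrecQrel", ["query_id", "doc_id", "relevance", "iteration"])

qrels = [
    TrecQrel("q1", "d1", 2, "0"),   # human-marked positive
    TrecQrel("q1", "d2", 1, "0"),   # BM25 answer-string match
    TrecQrel("q1", "d3", 0, "0"),   # "hard" negative sample
    TrecQrel("q1", "d4", -1, "0"),  # negative sample
]

# Keep only positive judgments (relevance >= 1).
positives = [q for q in qrels if q.relevance >= 1]
print([q.doc_id for q in positives])  # ['d1', 'd2']
```

The same filter applies when iterating `dataset.qrels_iter()`; note that the negative levels (-1 and 0) are useful as training negatives but should not be treated as relevant during evaluation.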

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/dev")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/dev qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/dev')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

Citation
bibtex:
@article{Kwiatkowski2019NQ,
  title = {Natural Questions: a Benchmark for Question Answering Research},
  author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  year = {2019},
  journal = {TACL}
}
@misc{karpukhin2020dense,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

"dpr-w100/natural-questions/train"

Training subset from the Natural Questions Q&A collection. This differs from the natural-questions/train dataset in that it uses the full Wikipedia dump and applies additional filtering (described in the DPR paper).

queries

Language: en

Query type:
DprW100Query: (namedtuple)
  1. query_id: str
  2. text: str
  3. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, answers>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/train queries
[query_id]    [text]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/train')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

docs

Language: en

Note: Uses docs from dpr-w100

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/train docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/train')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition
 -1   negative samples
  0   "hard" negative samples
  1   contains the answer text and was retrieved in the top BM25 results
  2   marked by a human annotator as containing the answer

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/natural-questions/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/natural-questions/train qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/natural-questions/train')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

Citation
bibtex:
@article{Kwiatkowski2019NQ,
  title = {Natural Questions: a Benchmark for Question Answering Research},
  author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  year = {2019},
  journal = {TACL}
}
@misc{karpukhin2020dense,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

"dpr-w100/trivia-qa/dev"

Dev subset from the Trivia QA dataset. Unlike the official Trivia QA collection, this version uses the DPR Wikipedia dump as the source collection. Refer to the DPR paper for more details.

queries

Language: en

Query type:
DprW100Query: (namedtuple)
  1. query_id: str
  2. text: str
  3. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/dev")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, answers>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/dev queries
[query_id]    [text]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/dev')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

docs

Language: en

Note: Uses docs from dpr-w100

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/dev")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/dev docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/dev')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition
 -1   negative samples
  0   "hard" negative samples
  1   contains the answer text and was retrieved in the top BM25 results
  2   marked by a human annotator as containing the answer

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/dev")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/dev qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/dev')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

Citation
bibtex:
@inproceedings{Joshi2017TriviaQAAL,
  title={TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension},
  author={Mandar Joshi and Eunsol Choi and Daniel S. Weld and Luke Zettlemoyer},
  booktitle={ACL},
  year={2017}
}
@misc{karpukhin2020dense,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

"dpr-w100/trivia-qa/train"

Training subset from the Trivia QA dataset. Unlike the official Trivia QA collection, this version uses the DPR Wikipedia dump as the source collection. Refer to the DPR paper for more details.

queries

Language: en

Query type:
DprW100Query: (namedtuple)
  1. query_id: str
  2. text: str
  3. answers: Tuple[str, ...]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, answers>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/train queries
[query_id]    [text]    [answers]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/train')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

docs

Language: en

Note: Uses docs from dpr-w100

Document type:
DprW100Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. title: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, title>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/train docs
[doc_id]    [text]    [title]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/train')
# Index dpr-w100
indexer = pt.IterDictIndexer('./indices/dpr-w100')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'title'])

You can find more details about PyTerrier indexing here.

qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition
 -1   negative samples
  0   "hard" negative samples
  1   contains the answer text and was retrieved in the top BM25 results
  2   marked by a human annotator as containing the answer

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("dpr-w100/trivia-qa/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export dpr-w100/trivia-qa/train qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:dpr-w100/trivia-qa/train')
index_ref = pt.IndexRef.of('./indices/dpr-w100') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

Citation
bibtex:
@inproceedings{Joshi2017TriviaQAAL,
  title={TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension},
  author={Mandar Joshi and Eunsol Choi and Daniel S. Weld and Luke Zettlemoyer},
  booktitle={ACL},
  year={2017}
}
@misc{karpukhin2020dense,
  title={Dense Passage Retrieval for Open-Domain Question Answering},
  author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
  year={2020},
  eprint={2004.04906},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}