ir_datasets: Medline
Medical articles from Medline. This collection was used by TREC Genomics 2004-05 (the 2004 version of the dataset) and by TREC Precision Medicine 2017-18 (the 2017 version).
3M Medline articles including titles and abstracts, used for the TREC 2004-05 Genomics track.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
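Documents can also be looked up by doc_id (a PubMed identifier) without scanning the whole corpus, using the docs_store API. A minimal sketch, which grabs an example doc_id from the iterator rather than assuming a specific PMID:
import ir_datasets
dataset = ir_datasets.load("medline/2004")
docs_store = dataset.docs_store() # random access by doc_id
some_id = next(iter(dataset.docs_iter())).doc_id # take an example doc_id from the corpus
doc = docs_store.get(some_id) # returns the same namedtuple<doc_id, title, abstract>
print(doc.title)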
ir_datasets export medline/2004 docs
[doc_id] [title] [abstract]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004')
# Index medline/2004
indexer = pt.IterDictIndexer('./indices/medline_2004')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
The TREC Genomics Track 2004 benchmark. Contains 50 queries with article-level relevance judgments.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for query in dataset.queries_iter():
    query # namedtuple<query_id, title, need, context>
You can find more details about the Python API here.
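The 2004 topics carry three text fields (title, need, context). One way to form a single free-text query for retrieval is to concatenate these fields; the following is a sketch of one possible formulation, not the official track protocol:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for query in dataset.queries_iter():
    # join the three fields into one query string (one possible formulation)
    text = " ".join([query.title, query.need, query.context])
    print(query.query_id, text)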
ir_datasets export medline/2004/trec-genomics-2004 queries
[query_id] [title] [need] [context]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('title'))
You can find more details about PyTerrier retrieval here.
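The same pipeline can also be run over a different topic field, for example the longer information-need statements. A sketch, assuming 'need' is accepted as a topics variant in the same way as 'title':
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# retrieve using the information-need statements instead of the short titles
pipeline(dataset.get_topics('need'))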
Language: en
Note: Uses docs from medline/2004
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
ir_datasets export medline/2004/trec-genomics-2004 docs
[doc_id] [title] [abstract]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
# Index medline/2004
indexer = pt.IterDictIndexer('./indices/medline_2004')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
0 | not relevant |
1 | possibly relevant |
2 | definitely relevant |
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
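To see how the graded judgments described above are distributed, the qrels can be tallied per relevance level. A small sketch using only the iterator shown above:
import ir_datasets
from collections import Counter
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
# maps each relevance level (0/1/2) to the number of judgments at that level
counts = Counter(qrel.relevance for qrel in dataset.qrels_iter())
print(counts)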
ir_datasets export medline/2004/trec-genomics-2004 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('title'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
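pt.Experiment also accepts optional arguments that are often useful with this benchmark, such as naming the runs and requesting per-query results rather than averages. A sketch, assuming the same index as above:
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
pt.Experiment(
    [pipeline],
    dataset.get_topics('title'),
    dataset.get_qrels(),
    [MAP, nDCG@20],
    names=['BM25'], # label the run in the results table
    perquery=True   # one row per (run, query, measure) instead of averaged scores
)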
The TREC Genomics Track 2005 benchmark. Contains 50 queries with article-level relevance judgments.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export medline/2004/trec-genomics-2005 queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2005')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from medline/2004
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
ir_datasets export medline/2004/trec-genomics-2005 docs
[doc_id] [title] [abstract]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2005')
# Index medline/2004
indexer = pt.IterDictIndexer('./indices/medline_2004')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
0 | not relevant |
1 | possibly relevant |
2 | definitely relevant |
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export medline/2004/trec-genomics-2005 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2005')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
26M Medline and AACR/ASCO Proceedings articles, including titles and abstracts. This collection is used for the TREC Precision Medicine track 2017-18.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
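Because medline/2017 contains 26M documents, it can help to work with a small slice of the corpus while developing; docs_iter supports slicing, as sketched below:
import ir_datasets
dataset = ir_datasets.load("medline/2017")
# iterate over only the first 10 documents of the corpus
for doc in dataset.docs_iter()[:10]:
    print(doc.doc_id, doc.title)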
ir_datasets export medline/2017 docs
[doc_id] [title] [abstract]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017')
# Index medline/2017
indexer = pt.IterDictIndexer('./indices/medline_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
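For a corpus of this size, indexing time can be reduced by running the indexer with multiple threads; IterDictIndexer accepts a threads argument. A sketch, where the thread count is only illustrative:
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017')
# Index medline/2017 with multiple indexing threads (count chosen arbitrarily here)
indexer = pt.IterDictIndexer('./indices/medline_2017', threads=4)
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])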
The TREC Precision Medicine (PM) Track 2017 benchmark. Contains 30 queries with disease, gene, and target demographic information.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic, other>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2017 queries
[query_id] [disease] [gene] [demographic] [other]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/medline_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))
You can find more details about PyTerrier retrieval here.
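get_topics('disease') retrieves on the disease field alone. An alternative is to build a qid/query topics frame that concatenates the disease and gene fields from the underlying ir_datasets queries; the sketch below is one possible formulation, not an official track baseline (punctuation is stripped so the Terrier query parser does not reject strings such as gene mutations in parentheses):
import re
import pandas as pd
import ir_datasets
import pyterrier as pt
pt.init()
index_ref = pt.IndexRef.of('./indices/medline_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# build qid/query topics by concatenating the disease and gene fields
queries = ir_datasets.load("medline/2017/trec-pm-2017").queries_iter()
topics = pd.DataFrame([
    {"qid": q.query_id, "query": re.sub(r"[^A-Za-z0-9 ]", " ", f"{q.disease} {q.gene}")}
    for q in queries
])
pipeline(topics)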
Language: en
Note: Uses docs from medline/2017
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2017 docs
[doc_id] [title] [abstract]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2017')
# Index medline/2017
indexer = pt.IterDictIndexer('./indices/medline_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
0 | not relevant |
1 | possibly relevant |
2 | definitely relevant |
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2017 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/medline_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
The TREC Precision Medicine (PM) Track 2018 benchmark. Contains 50 queries with disease, gene, and target demographic information.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2018")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic>
You can find more details about the Python API here.
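Note that, unlike the 2017 topics, the 2018 topics have no other field. Code meant to handle both years can guard for this with getattr, as in this small sketch:
import ir_datasets
for name in ["medline/2017/trec-pm-2017", "medline/2017/trec-pm-2018"]:
    dataset = ir_datasets.load(name)
    for query in dataset.queries_iter():
        other = getattr(query, "other", "") # the 'other' field only exists in the 2017 topics
        print(name, query.query_id, query.disease, query.gene, query.demographic, other)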
ir_datasets export medline/2017/trec-pm-2018 queries
[query_id] [disease] [gene] [demographic]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/medline_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from medline/2017
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2018")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2018 docs
[doc_id] [title] [abstract]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2018')
# Index medline/2017
indexer = pt.IterDictIndexer('./indices/medline_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
0 | not relevant |
1 | possibly relevant |
2 | definitely relevant |
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2018")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2018 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/medline_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.