GitHub: datasets/clinicaltrials.py

ir_datasets: Clinical Trials

Index
  1. clinicaltrials
  2. clinicaltrials/2017
  3. clinicaltrials/2017/trec-pm-2017
  4. clinicaltrials/2017/trec-pm-2018
  5. clinicaltrials/2019
  6. clinicaltrials/2019/trec-pm-2019
  7. clinicaltrials/2021
  8. clinicaltrials/2021/trec-ct-2021
  9. clinicaltrials/2021/trec-ct-2022

"clinicaltrials"

Clinical trial information from ClinicalTrials.gov. Used for the Clinical Trials subtasks in TREC Precision Medicine.


"clinicaltrials/2017"

A snapshot of ClinicalTrials.gov from April 2017 for use with the clinicaltrials/2017/trec-pm-2017 and clinicaltrials/2017/trec-pm-2018 Clinical Trials subtasks.

docs
241K docs

Language: en

Document type:
ClinicalTrialsDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. condition: str
  4. summary: str
  5. detailed_description: str
  6. eligibility: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>

You can find more details about the Python API here.
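
Individual trials can also be looked up by ID rather than iterated. The sketch below uses the ir_datasets docs_store API; the NCT ID shown is a hypothetical placeholder, not necessarily present in this snapshot.

import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017")
docs_store = dataset.docs_store()
doc = docs_store.get("NCT00000102")  # hypothetical ID; use any doc_id seen in docs_iter()
print(doc.title)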

CLI
ir_datasets export clinicaltrials/2017 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...

You can find more details about the CLI here.
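
The export command also accepts a --format flag; assuming jsonl output behaves here as for other ir_datasets corpora, it produces one JSON object per document, which can be easier to post-process than tab-separated output.

ir_datasets export clinicaltrials/2017 docs --format jsonl
{"doc_id": ..., "title": ..., "condition": ..., "summary": ..., "detailed_description": ..., "eligibility": ...}
...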

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017')
# Index clinicaltrials/2017
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2017')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.


"clinicaltrials/2017/trec-pm-2017"

The TREC 2017 Precision Medicine clinical trials subtask.

queries
30 queries

Language: en

Query type:
TrecPm2017Query: (namedtuple)
  1. query_id: str
  2. disease: str
  3. gene: str
  4. demographic: str
  5. other: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic, other>

You can find more details about the Python API here.
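
The track provides structured topics rather than free text. One simple way to feed them to a bag-of-words retriever (a convention assumed here, not prescribed by the track) is to concatenate the fields:

import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
for query in dataset.queries_iter():
    # flatten the structured topic into a single query string
    text = " ".join([query.disease, query.gene, query.demographic, query.other])
    print(query.query_id, text)
    break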

CLI
ir_datasets export clinicaltrials/2017/trec-pm-2017 queries
[query_id]    [disease]    [gene]    [demographic]    [other]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2017.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
241K docs

Inherits docs from clinicaltrials/2017

Language: en

Document type:
ClinicalTrialsDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. condition: str
  4. summary: str
  5. detailed_description: str
  6. eligibility: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2017/trec-pm-2017 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2017')
# Index clinicaltrials/2017
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2017')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
13K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition           Count  %
0     not relevant         12K    91.0%
1     possibly relevant    735    5.6%
2     definitely relevant  436    3.3%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.
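
Tallying the relevance labels is a quick sanity check; the counts should roughly match the relevance-level table above (a minimal sketch using the standard library):

from collections import Counter
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
counts = Counter(qrel.relevance for qrel in dataset.qrels_iter())
print(counts)  # expect approximately {0: ~12K, 1: 735, 2: 436}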

CLI
ir_datasets export clinicaltrials/2017/trec-pm-2017 qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2017.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # relevance assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Roberts2017TrecPm}

Bibtex:

@inproceedings{Roberts2017TrecPm,
  title={Overview of the TREC 2017 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant},
  booktitle={TREC},
  year={2017}
}

"clinicaltrials/2017/trec-pm-2018"

The TREC 2018 Precision Medicine clinical trials subtask.

queries
50 queries

Language: en

Query type:
TrecPmQuery: (namedtuple)
  1. query_id: str
  2. disease: str
  3. gene: str
  4. demographic: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2018")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2017/trec-pm-2018 queries
[query_id]    [disease]    [gene]    [demographic]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2018.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
241K docs

Inherits docs from clinicaltrials/2017

Language: en

Document type:
ClinicalTrialsDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. condition: str
  4. summary: str
  5. detailed_description: str
  6. eligibility: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2018")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2017/trec-pm-2018 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2018')
# Index clinicaltrials/2017
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2018')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
14K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition           Count  %
0     not relevant         12K    85.6%
1     possibly relevant    1.2K   8.3%
2     definitely relevant  873    6.2%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2018")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2017/trec-pm-2018 qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2018.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # relevance assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Roberts2018TrecPm}

Bibtex:

@inproceedings{Roberts2018TrecPm,
  title={Overview of the TREC 2018 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar},
  booktitle={TREC},
  year={2018}
}

"clinicaltrials/2019"

A snapshot of ClinicalTrials.gov from May 2019 for use with the clinicaltrials/2019/trec-pm-2019 Clinical Trials subtask.

docs
306K docs

Language: en

Document type:
ClinicalTrialsDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. condition: str
  4. summary: str
  5. detailed_description: str
  6. eligibility: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2019")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2019 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2019')
# Index clinicaltrials/2019
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2019')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2019')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.


"clinicaltrials/2019/trec-pm-2019"

The TREC 2019 Precision Medicine clinical trials subtask.

queries
40 queries

Language: en

Query type:
TrecPmQuery: (namedtuple)
  1. query_id: str
  2. disease: str
  3. gene: str
  4. demographic: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2019/trec-pm-2019")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2019/trec-pm-2019 queries
[query_id]    [disease]    [gene]    [demographic]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2019/trec-pm-2019')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2019') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2019.trec-pm-2019.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
306K docs

Inherits docs from clinicaltrials/2019

Language: en

Document type:
ClinicalTrialsDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. condition: str
  4. summary: str
  5. detailed_description: str
  6. eligibility: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2019/trec-pm-2019")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2019/trec-pm-2019 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2019/trec-pm-2019')
# Index clinicaltrials/2019
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2019')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2019.trec-pm-2019')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
13K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition           Count  %
0     not relevant         11K    83.2%
1     possibly relevant    1.7K   13.1%
2     definitely relevant  485    3.7%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2019/trec-pm-2019")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2019/trec-pm-2019 qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2019/trec-pm-2019')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2019') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.clinicaltrials.2019.trec-pm-2019.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # relevance assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Roberts2019TrecPm}

Bibtex:

@inproceedings{Roberts2019TrecPm,
  title={Overview of the TREC 2019 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant and Funda Meric-Bernstam},
  booktitle={TREC},
  year={2019}
}

"clinicaltrials/2021"

A snapshot of ClinicalTrials.gov from April 2021 for use with the TREC Clinical Trials 2021 Track.

docs
376K docs

Language: en

Document type:
ClinicalTrialsDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. condition: str
  4. summary: str
  5. detailed_description: str
  6. eligibility: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2021 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021')
# Index clinicaltrials/2021
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2021')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.


"clinicaltrials/2021/trec-ct-2021"

The TREC Clinical Trials 2021 track.

queries
75 queries

Language: en

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2021")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2021/trec-ct-2021 queries
[query_id]    [text]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2021')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2021.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
376K docs

Inherits docs from clinicaltrials/2021

Language: en

Document type:
ClinicalTrialsDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. condition: str
  4. summary: str
  5. detailed_description: str
  6. eligibility: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2021")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2021/trec-ct-2021 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2021')
# Index clinicaltrials/2021
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2021')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
36K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition     Count  %
0     Not Relevant   24K    67.7%
1     Excluded       6.0K   16.8%
2     Eligible       5.6K   15.5%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2021")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.
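
The graded labels distinguish trials that were judged but excluded (1) from eligible ones (2). If a binary view of relevance is needed, one common convention (an assumption here, not stated on this page) is to treat only label 2 as relevant:

import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2021")
# map (query_id, doc_id) to 1 only for Eligible trials, else 0
binary = {(qrel.query_id, qrel.doc_id): int(qrel.relevance >= 2)
          for qrel in dataset.qrels_iter()}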

CLI
ir_datasets export clinicaltrials/2021/trec-ct-2021 qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2021')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2021.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # relevance assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.


"clinicaltrials/2021/trec-ct-2022"

The TREC Clinical Trials 2022 track.

queries
50 queries

Language: en

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2022")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2021/trec-ct-2022 queries
[query_id]    [text]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2022')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2022.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
376K docs

Inherits docs from clinicaltrials/2021

Language: en

Document type:
ClinicalTrialsDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. condition: str
  4. summary: str
  5. detailed_description: str
  6. eligibility: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2022")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>

You can find more details about the Python API here.

CLI
ir_datasets export clinicaltrials/2021/trec-ct-2022 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2022')
# Index clinicaltrials/2021
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2022')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
