ir_datasets: Clinical Trials
Clinical trial information from ClinicalTrials.gov. Used for the Clinical Trials subtasks in TREC Precision Medicine.
A snapshot of ClinicalTrials.gov from April 2017 for use with the clinicaltrials/2017/trec-pm-2017 and clinicaltrials/2017/trec-pm-2018 Clinical Trials subtasks.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>
You can find more details about the Python API here.
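Each document is a flat namedtuple, so it is straightforward to assemble a single text representation of a trial (for example, to feed into an indexer or a neural encoder). A minimal sketch; joining the fields in this particular order is just one reasonable choice, not something the dataset prescribes:
import ir_datasets

dataset = ir_datasets.load("clinicaltrials/2017")

def trial_text(doc):
    # Join the text fields of a clinical trial document into one string.
    # Which fields to include (and their order) is an arbitrary choice here.
    parts = [doc.title, doc.condition, doc.summary,
             doc.detailed_description, doc.eligibility]
    return "\n".join(p for p in parts if p)

for doc in dataset.docs_iter():
    print(doc.doc_id)
    print(trial_text(doc)[:200])  # first 200 characters
    break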
ir_datasets export clinicaltrials/2017 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017')
# Index clinicaltrials/2017
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2017')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{
  "docs": {
    "count": 241006,
    "fields": {
      "doc_id": {
        "max_len": 11,
        "common_prefix": "NCT0"
      }
    }
  }
}
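The statistics above can be re-derived from the corpus itself. A minimal sketch that checks the document count and the shared doc_id prefix (the prefix check scans the whole collection, so it takes a little while and triggers the download on first use):
import os.path
import ir_datasets

dataset = ir_datasets.load("clinicaltrials/2017")
print(dataset.docs_count())  # 241006

prefix, max_len = None, 0
for doc in dataset.docs_iter():
    max_len = max(max_len, len(doc.doc_id))
    prefix = doc.doc_id if prefix is None else os.path.commonprefix([prefix, doc.doc_id])
print(max_len, prefix)  # 11 NCT0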
The TREC 2017 Precision Medicine clinical trials subtask.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic, other>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2017/trec-pm-2017 queries
[query_id]    [disease]    [gene]    [demographic]    [other]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))
You can find more details about PyTerrier retrieval here.
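get_topics('disease') uses only the disease field, but the topics also carry gene, demographic and other fields. One common (though by no means mandatory) approach is to concatenate several fields into a single keyword query. A sketch that builds such queries directly from ir_datasets and runs them through the same BM25 setup; the punctuation stripping is there because Terrier's query parser does not accept characters such as parentheses:
import re
import pandas as pd
import ir_datasets
import pyterrier as pt
if not pt.started():
    pt.init()

irds = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
topics = pd.DataFrame([
    {"qid": q.query_id,
     # concatenate disease and gene; strip punctuation that the Terrier query parser rejects
     "query": re.sub(r"[^A-Za-z0-9 ]", " ", f"{q.disease} {q.gene}")}
    for q in irds.queries_iter()
])

index_ref = pt.IndexRef.of('./indices/clinicaltrials_2017')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
pipeline(topics)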
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2017.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from clinicaltrials/2017
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2017/trec-pm-2017 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2017')
# Index clinicaltrials/2017
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2017')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
| Rel. | Definition | Count | % | 
|---|---|---|---|
| 0 | not relevant | 12K | 91.0% | 
| 1 | possibly relevant | 735 | 5.6% | 
| 2 | definitely relevant | 436 | 3.3% | 
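The counts in the table can be re-derived from the qrels with a few lines of Python; a small sketch using the qrels_iter() API shown in the examples below:
from collections import Counter
import ir_datasets

dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
counts = Counter(qrel.relevance for qrel in dataset.qrels_iter())
total = sum(counts.values())
for rel, n in sorted(counts.items()):
    print(rel, n, f"{100 * n / total:.1f}%")  # e.g. 0 11848 91.0%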
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2017")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2017/trec-pm-2017 qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2017.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Bibtex:
@inproceedings{Roberts2017TrecPm,
  title={Overview of the TREC 2017 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant},
  booktitle={TREC},
  year={2017}
}
{
  "docs": {
    "count": 241006,
    "fields": {
      "doc_id": {
        "max_len": 11,
        "common_prefix": "NCT0"
      }
    }
  },
  "queries": {
    "count": 30
  },
  "qrels": {
    "count": 13019,
    "fields": {
      "relevance": {
        "counts_by_value": {
          "0": 11848,
          "1": 735,
          "2": 436
        }
      }
    }
  }
}
The TREC 2018 Precision Medicine clinical trials subtask.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2018")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2017/trec-pm-2018 queries
[query_id]    [disease]    [gene]    [demographic]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))
You can find more details about PyTerrier retrieval here.
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2018.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from clinicaltrials/2017
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2018")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2017/trec-pm-2018 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2018')
# Index clinicaltrials/2017
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2018')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
| Rel. | Definition | Count | % | 
|---|---|---|---|
| 0 | not relevant | 12K | 85.6% | 
| 1 | possibly relevant | 1.2K | 8.3% | 
| 2 | definitely relevant | 873 | 6.2% | 
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2017/trec-pm-2018")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2017/trec-pm-2018 qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
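If you want scores per topic rather than aggregate means (useful for the per-topic analyses these tracks typically report), pt.Experiment accepts a perquery flag; a sketch building on the pipeline, dataset and measures defined just above:
per_query = pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20],
    perquery=True  # one row per (system, query, measure) instead of averages
)
print(per_query.head())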
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.clinicaltrials.2017.trec-pm-2018.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Bibtex:
@inproceedings{Roberts2018TrecPm,
  title={Overview of the TREC 2018 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar},
  booktitle={TREC},
  year={2018}
}
{
  "docs": {
    "count": 241006,
    "fields": {
      "doc_id": {
        "max_len": 11,
        "common_prefix": "NCT0"
      }
    }
  },
  "queries": {
    "count": 50
  },
  "qrels": {
    "count": 14188,
    "fields": {
      "relevance": {
        "counts_by_value": {
          "0": 12141,
          "2": 873,
          "1": 1174
        }
      }
    }
  }
}
A snapshot of ClinicalTrials.gov from May 2019 for use with the clinicaltrials/2019/trec-pm-2019 Clinical Trials subtask.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2019")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>
You can find more details about the Python API here.
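Beyond sequential iteration, ir_datasets also provides a docs_store() for random access by doc_id (the NCT number), which is handy when you only need to inspect a handful of trials. A sketch that looks the first document up again by its id:
import ir_datasets

dataset = ir_datasets.load("clinicaltrials/2019")
first = next(iter(dataset.docs_iter()))

store = dataset.docs_store()
doc = store.get(first.doc_id)  # fetch a single trial by its NCT id
print(doc.doc_id, doc.title)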
ir_datasets export clinicaltrials/2019 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2019')
# Index clinicaltrials/2019
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2019')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2019')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{
  "docs": {
    "count": 306238,
    "fields": {
      "doc_id": {
        "max_len": 11,
        "common_prefix": "NCT0"
      }
    }
  }
}
The TREC 2019 Precision Medicine clinical trials subtask.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2019/trec-pm-2019")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2019/trec-pm-2019 queries
[query_id]    [disease]    [gene]    [demographic]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2019/trec-pm-2019')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2019') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))
You can find more details about PyTerrier retrieval here.
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2019.trec-pm-2019.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from clinicaltrials/2019
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2019/trec-pm-2019")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2019/trec-pm-2019 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2019/trec-pm-2019')
# Index clinicaltrials/2019
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2019')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2019.trec-pm-2019')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
| Rel. | Definition | Count | % | 
|---|---|---|---|
| 0 | not relevant | 11K | 83.2% | 
| 1 | possibly relevant | 1.7K | 13.1% | 
| 2 | definitely relevant | 485 | 3.7% | 
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2019/trec-pm-2019")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2019/trec-pm-2019 qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2019/trec-pm-2019')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2019') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
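If you already have a run in standard TREC format (for example, a prior submission), it can be scored against these qrels without PyTerrier, using the ir_measures package that ir_datasets integrates with. A sketch; the run file path is a placeholder:
import ir_datasets
import ir_measures
from ir_measures import AP, nDCG

dataset = ir_datasets.load("clinicaltrials/2019/trec-pm-2019")
run = ir_measures.read_trec_run("my_run.trec")  # placeholder path to a TREC-format run file
print(ir_measures.calc_aggregate([AP, nDCG@20], dataset.qrels_iter(), run))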
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.clinicaltrials.2019.trec-pm-2019.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Bibtex:
@inproceedings{Roberts2019TrecPm,
  title={Overview of the TREC 2019 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant and Funda Meric-Bernstam},
  booktitle={TREC},
  year={2019}
}
{
  "docs": {
    "count": 306238,
    "fields": {
      "doc_id": {
        "max_len": 11,
        "common_prefix": "NCT0"
      }
    }
  },
  "queries": {
    "count": 40
  },
  "qrels": {
    "count": 12996,
    "fields": {
      "relevance": {
        "counts_by_value": {
          "0": 10811,
          "1": 1700,
          "2": 485
        }
      }
    }
  }
}
A snapshot of ClinicalTrials.gov from April 2021 for use with the TREC Clinical Trials 2021 Track.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2021 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021')
# Index clinicaltrials/2021
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2021')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{
  "docs": {
    "count": 375580,
    "fields": {
      "doc_id": {
        "max_len": 11,
        "common_prefix": "NCT0"
      }
    }
  }
}
The TREC Clinical Trials 2021 track.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2021")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2021/trec-ct-2021 queries
[query_id]    [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2021')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
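The 2021 topics are free-text patient case descriptions rather than structured fields, and they contain punctuation that Terrier's default query parser rejects. One option is to place a query-tokenisation step in front of the retriever; a sketch building on the dataset and index_ref above, assuming the pt.rewrite.tokenise() transformer available in recent PyTerrier versions (stripping the punctuation yourself works just as well):
# tokenise the raw topic text before it reaches the Terrier query parser
pipeline = pt.rewrite.tokenise() >> pt.BatchRetrieve(index_ref, wmodel='BM25')
pipeline(dataset.get_topics())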
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2021.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from clinicaltrials/2021
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2021")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2021/trec-ct-2021 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2021')
# Index clinicaltrials/2021
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2021')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
| Rel. | Definition | Count | % | 
|---|---|---|---|
| 0 | Not Relevant | 24K | 67.7% | 
| 1 | Excluded | 6.0K | 16.8% | 
| 2 | Eligible | 5.6K | 15.5% | 
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2021")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2021/trec-ct-2021 qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2021')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
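Since level 1 is labeled "Excluded", you may prefer binary measures that count only "Eligible" (level 2) trials as relevant while keeping nDCG graded. The ir_measures syntax used by pyterrier.measures supports this via a rel= threshold; a sketch extending the experiment above:
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    # nDCG stays graded; P and RR count only relevance >= 2 ("Eligible") as relevant
    [nDCG@10, P(rel=2)@10, RR(rel=2)]
)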
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2021.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
{
  "docs": {
    "count": 375580,
    "fields": {
      "doc_id": {
        "max_len": 11,
        "common_prefix": "NCT0"
      }
    }
  },
  "queries": {
    "count": 75
  },
  "qrels": {
    "count": 35832,
    "fields": {
      "relevance": {
        "counts_by_value": {
          "1": 6019,
          "0": 24243,
          "2": 5570
        }
      }
    }
  }
}
The TREC Clinical Trials 2022 track.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2022")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2021/trec-ct-2022 queries
[query_id]    [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2022')
index_ref = pt.IndexRef.of('./indices/clinicaltrials_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
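This snapshot provides queries but lists no qrels for the 2022 topics, so a typical workflow is to write the retrieval results out as a TREC-format run file (for submission, or for scoring once judgments are available). A sketch building on the pipeline above; the output file name is a placeholder:
res = pipeline(dataset.get_topics())
# write the results in the standard TREC run format
pt.io.write_results(res, 'trec-ct-2022-bm25.run', format='trec')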
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2022.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from clinicaltrials/2021
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("clinicaltrials/2021/trec-ct-2022")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, condition, summary, detailed_description, eligibility>
You can find more details about the Python API here.
ir_datasets export clinicaltrials/2021/trec-ct-2022 docs
[doc_id]    [title]    [condition]    [summary]    [detailed_description]    [eligibility]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:clinicaltrials/2021/trec-ct-2022')
# Index clinicaltrials/2021
indexer = pt.IterDictIndexer('./indices/clinicaltrials_2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'condition', 'summary', 'detailed_description', 'eligibility'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.clinicaltrials.2021.trec-ct-2022')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{
  "docs": {
    "count": 375580,
    "fields": {
      "doc_id": {
        "max_len": 11,
        "common_prefix": "NCT0"
      }
    }
  },
  "queries": {
    "count": 50
  }
}