ir_datasets: C4
A version of Google's C4 dataset, which consists of articles crawled from the web.
The "en-noclean" train subset of the corpus, consisting of ~1B documents written in English. Document IDs are assigned as proposed by the TREC Health Misinformation 2021 track.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("c4/en-noclean-tr")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, timestamp>
You can find more details about the Python API here.
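Beyond sequential iteration, individual documents can be fetched by ID. A minimal sketch, assuming docs_store() lookups are supported for this corpus; the document ID below is hypothetical and only illustrates the en.noclean.c4-train.* ID scheme:
import ir_datasets
dataset = ir_datasets.load("c4/en-noclean-tr")
# Random-access lookup by document ID (may build a lookup structure on first use)
docs_store = dataset.docs_store()
doc = docs_store.get("en.noclean.c4-train.00000.0")  # hypothetical ID, for illustration only
print(doc.url, doc.text[:200])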
ir_datasets export c4/en-noclean-tr docs
[doc_id] [text] [url] [timestamp]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:c4/en-noclean-tr')
# Index c4/en-noclean-tr
indexer = pt.IterDictIndexer('./indices/c4_en-noclean-tr', meta={"docno": 41})
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'timestamp'])
You can find more details about PyTerrier indexing here.
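Indexing all three fields of ~1B documents is expensive. A hedged sketch of a lighter setup that indexes only the text field and uses multiple indexing threads; the index path and thread count are illustrative choices, not part of the original example:
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:c4/en-noclean-tr')
# Text-only index; meta stores the 41-character document IDs for retrieval-time lookup
indexer = pt.IterDictIndexer('./indices/c4_en-noclean-tr_text', meta={"docno": 41}, threads=8)
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])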
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.c4.en-noclean-tr')
for doc in dataset.iter_documents():
    print(doc) # an AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{ "docs": { "count": 1063805381, "fields": { "doc_id": { "max_len": 41, "common_prefix": "en.noclean.c4-train.0" } } } }
c4/en-noclean-tr/trec-misinfo-2021
The TREC Health Misinformation 2021 track.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("c4/en-noclean-tr/trec-misinfo-2021")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, description, narrative, disclaimer, stance, evidence>
You can find more details about the Python API here.
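Since each topic carries several fields beyond the query text (description, narrative, disclaimer, stance, evidence), it can be convenient to load all topics into a dictionary keyed by query ID. A minimal sketch:
import ir_datasets
dataset = ir_datasets.load("c4/en-noclean-tr/trec-misinfo-2021")
# Map query_id -> full topic record, for easy access to the stance/evidence fields
topics = {q.query_id: q for q in dataset.queries_iter()}
some_id = next(iter(topics))
print(topics[some_id].text, '|', topics[some_id].stance)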
ir_datasets export c4/en-noclean-tr/trec-misinfo-2021 queries
[query_id] [text] [description] [narrative] [disclaimer] [stance] [evidence]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:c4/en-noclean-tr/trec-misinfo-2021')
index_ref = pt.IndexRef.of('./indices/c4_en-noclean-tr') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))
You can find more details about PyTerrier retrieval here.
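To retrieve against the longer description field rather than the short query text, and to save the results as a TREC-format run file, a hedged sketch (assuming 'description' is exposed as a topics variant and that the index from the previous section already exists; the output filename is illustrative):
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:c4/en-noclean-tr/trec-misinfo-2021')
index_ref = pt.IndexRef.of('./indices/c4_en-noclean-tr')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
run = pipeline(dataset.get_topics('description'))  # 'description' variant is an assumption here
pt.io.write_results(run, 'bm25-description.run')   # writes a TREC-format run file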
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.c4.en-noclean-tr.trec-misinfo-2021.queries') # AdhocTopics
for topic in topics.iter():
    print(topic) # An AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from c4/en-noclean-tr
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("c4/en-noclean-tr/trec-misinfo-2021")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, timestamp>
You can find more details about the Python API here.
ir_datasets export c4/en-noclean-tr/trec-misinfo-2021 docs
[doc_id] [text] [url] [timestamp]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:c4/en-noclean-tr/trec-misinfo-2021')
# Index c4/en-noclean-tr
indexer = pt.IterDictIndexer('./indices/c4_en-noclean-tr', meta={"docno": 41})
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'timestamp'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.c4.en-noclean-tr.trec-misinfo-2021')
for doc in dataset.iter_documents():
    print(doc) # an AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{ "docs": { "count": 1063805381, "fields": { "doc_id": { "max_len": 41, "common_prefix": "en.noclean.c4-train.0" } } }, "queries": { "count": 50 } }