Github: datasets/c4.py

ir_datasets: C4

Index
  1. c4
  2. c4/en-noclean-tr
  3. c4/en-noclean-tr/trec-misinfo-2021

"c4"

A version of Google's C4 dataset (Colossal Clean Crawled Corpus), which consists of articles crawled from the web.


"c4/en-noclean-tr"

The "en-noclean" train subset of the corpus, consisting of ~1B documents written in English. Document IDs are assigned as proposed by the TREC Health Misinformation 2021 track.

docs
1.1B docs

Language: en

Document type:
C4Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. url: str
  4. timestamp: str

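Since C4Doc is a plain namedtuple, fields can be read by attribute name, by position, or converted to a dict. A minimal sketch of the record shape (the instance values here are invented, not real corpus content):

```python
from collections import namedtuple

# Mirror of the C4Doc fields listed above; the sample values are made up.
C4Doc = namedtuple("C4Doc", ["doc_id", "text", "url", "timestamp"])

doc = C4Doc(
    doc_id="example-0",                       # hypothetical ID, not the real format
    text="Example article text.",
    url="https://example.com/article",
    timestamp="2019-04-01T00:00:00Z",
)

assert doc.text == doc[1]                     # name and position agree
row = doc._asdict()                           # ordered dict of all four fields
```

Records yielded by `docs_iter()` below behave the same way.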
Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("c4/en-noclean-tr")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, timestamp>

You can find more details about the Python API here.

CLI
ir_datasets export c4/en-noclean-tr docs
[doc_id]    [text]    [url]    [timestamp]
...

You can find more details about the CLI here.
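The export output above is tab-separated, one document per line, so it composes with standard Unix tools. A sketch using a fabricated sample line in place of real export output:

```shell
# Fabricated one-line stand-in for `ir_datasets export ... docs` output.
printf 'example-0\tExample article text.\thttps://example.com/article\t2019-04-01T00:00:00Z\n' \
  | cut -f1,3    # keep only the doc_id and url columns
```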

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:c4/en-noclean-tr')
# Index c4/en-noclean-tr
indexer = pt.IterDictIndexer('./indices/c4_en-noclean-tr')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'timestamp'])

You can find more details about PyTerrier indexing here.


"c4/en-noclean-tr/trec-misinfo-2021"

The TREC Health Misinformation 2021 track.

queries
50 queries

Language: en

Query type:
MisinfoQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. description: str
  4. narrative: str
  5. disclaimer: str
  6. stance: str
  7. evidence: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("c4/en-noclean-tr/trec-misinfo-2021")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, description, narrative, disclaimer, stance, evidence>

You can find more details about the Python API here.

CLI
ir_datasets export c4/en-noclean-tr/trec-misinfo-2021 queries
[query_id]    [text]    [description]    [narrative]    [disclaimer]    [stance]    [evidence]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:c4/en-noclean-tr/trec-misinfo-2021')
index_ref = pt.IndexRef.of('./indices/c4_en-noclean-tr') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.
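Conceptually, `get_topics('text')` selects one query field as the PyTerrier query column. A rough pure-Python sketch of that reshaping, using the query fields listed above (modeling topics as dicts rather than the actual DataFrame is a simplification of this sketch):

```python
from collections import namedtuple

MisinfoQuery = namedtuple(
    "MisinfoQuery",
    ["query_id", "text", "description", "narrative", "disclaimer", "stance", "evidence"],
)

def to_topics(queries, field="text"):
    # PyTerrier topics are rows with 'qid' and 'query' columns;
    # here we model each row as a plain dict.
    return [{"qid": q.query_id, "query": getattr(q, field)} for q in queries]

topics = to_topics([MisinfoQuery("1", "example topic", "d", "n", "dl", "s", "e")])
# topics == [{'qid': '1', 'query': 'example topic'}]
```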

docs
1.1B docs

Inherits docs from c4/en-noclean-tr

Language: en

Document type:
C4Doc: (namedtuple)
  1. doc_id: str
  2. text: str
  3. url: str
  4. timestamp: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("c4/en-noclean-tr/trec-misinfo-2021")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, url, timestamp>

You can find more details about the Python API here.

CLI
ir_datasets export c4/en-noclean-tr/trec-misinfo-2021 docs
[doc_id]    [text]    [url]    [timestamp]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:c4/en-noclean-tr/trec-misinfo-2021')
# Index c4/en-noclean-tr
indexer = pt.IterDictIndexer('./indices/c4_en-noclean-tr')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text', 'url', 'timestamp'])

You can find more details about PyTerrier indexing here.
