GitHub: datasets/trec_fair_2021.py

ir_datasets: TREC Fair Ranking

Index
  1. trec-fair-2021
  2. trec-fair-2021/eval
  3. trec-fair-2021/train

"trec-fair-2021"

Accessing TREC Fair Ranking 2021 through trec-fair-2021 is deprecated; use trec-fair/2021 instead.

The TREC Fair Ranking track evaluates systems according to how well they fairly rank documents.

docs
6.3M docs

Language: en

Document type:
FairTrecDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. text: str
  4. marked_up_text: str
  5. url: str
  6. quality_score: Optional[float]
  7. geographic_locations: Optional[List[str]]
  8. quality_score_disk: Optional[str]
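
The field list above can be mirrored with a plain `NamedTuple`; the following is a minimal sketch (the type name and field order follow the list above, but the class is defined locally for illustration and the sample values are invented, not taken from the corpus) showing how the `Optional` fields behave for a document without quality annotations:

```python
from typing import List, NamedTuple, Optional

class FairTrecDoc(NamedTuple):
    doc_id: str
    title: str
    text: str
    marked_up_text: str
    url: str
    quality_score: Optional[float]
    geographic_locations: Optional[List[str]]
    quality_score_disk: Optional[str]

# Invented sample record; real documents come from dataset.docs_iter().
doc = FairTrecDoc(
    doc_id="12",
    title="Anarchism",
    text="Anarchism is a political philosophy ...",
    marked_up_text="<p>Anarchism is ...</p>",
    url="https://en.wikipedia.org/wiki/Anarchism",
    quality_score=None,               # quality fields may be absent
    geographic_locations=["Europe"],
    quality_score_disk=None,
)

# Fields are accessible by name or by position, like any namedtuple.
print(doc.doc_id, doc.quality_score is None)
```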

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair-2021")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, text, marked_up_text, url, quality_score, geographic_locations, quality_score_disk>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair-2021 docs
[doc_id]    [title]    [text]    [marked_up_text]    [url]    [quality_score]    [geographic_locations]    [quality_score_disk]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair-2021')
# Index trec-fair-2021
indexer = pt.IterDictIndexer('./indices/trec-fair-2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'text', 'url'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.trec-fair-2021')
for doc in dataset.iter_documents():
    print(doc)  # an AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

"trec-fair-2021/eval"

Accessing TREC Fair Ranking 2021 through trec-fair-2021/eval is deprecated; use trec-fair/2021/eval instead.

Official TREC Fair Ranking 2021 evaluation set.

queries
49 queries

Language: en

Query type:
FairTrecEvalQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. keywords: List[str]
  4. scope: str
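
The query type can likewise be sketched as a local `NamedTuple`; in this illustration (the class is defined locally and the sample values are invented, not drawn from the actual topic set) the `keywords` field is joined into a simple bag-of-words query string:

```python
from typing import List, NamedTuple

class FairTrecEvalQuery(NamedTuple):
    query_id: str
    text: str
    keywords: List[str]
    scope: str

# Invented sample query; real queries come from dataset.queries_iter().
query = FairTrecEvalQuery(
    query_id="1",
    text="agriculture",
    keywords=["agriculture", "farming", "crops"],
    scope="Articles about agricultural topics",
)

# One simple way to use the keywords with a bag-of-words retriever:
query_str = " ".join(query.keywords)
print(query_str)
```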

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair-2021/eval")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, keywords, scope>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair-2021/eval queries
[query_id]    [text]    [keywords]    [scope]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair-2021/eval')
index_ref = pt.IndexRef.of('./indices/trec-fair-2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.trec-fair-2021.eval.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
6.3M docs

Inherits docs from trec-fair-2021

Language: en

Document type:
FairTrecDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. text: str
  4. marked_up_text: str
  5. url: str
  6. quality_score: Optional[float]
  7. geographic_locations: Optional[List[str]]
  8. quality_score_disk: Optional[str]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair-2021/eval")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, text, marked_up_text, url, quality_score, geographic_locations, quality_score_disk>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair-2021/eval docs
[doc_id]    [title]    [text]    [marked_up_text]    [url]    [quality_score]    [geographic_locations]    [quality_score_disk]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair-2021/eval')
# Index trec-fair-2021
indexer = pt.IterDictIndexer('./indices/trec-fair-2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'text', 'url'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.trec-fair-2021.eval')
for doc in dataset.iter_documents():
    print(doc)  # an AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
14K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str
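
Qrels in this tuple shape are often regrouped into a nested mapping for evaluation. A minimal sketch (the `TrecQrel` class is defined locally for illustration and the sample judgments are invented) of grouping them into `{query_id: {doc_id: relevance}}`, the shape that evaluation tools such as pytrec_eval typically expect:

```python
from collections import defaultdict
from typing import NamedTuple

class TrecQrel(NamedTuple):
    query_id: str
    doc_id: str
    relevance: int
    iteration: str

# Invented sample judgments; real ones come from dataset.qrels_iter().
qrels = [
    TrecQrel("1", "12", 1, "0"),
    TrecQrel("1", "34", 1, "0"),
    TrecQrel("2", "12", 1, "0"),
]

# Group into {query_id: {doc_id: relevance}}.
qrels_dict = defaultdict(dict)
for qrel in qrels:
    qrels_dict[qrel.query_id][qrel.doc_id] = qrel.relevance

print(dict(qrels_dict))
```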

Relevance levels

Rel.  Definition  Count  %
   1  relevant      14K  100.0%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair-2021/eval")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair-2021/eval qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:trec-fair-2021/eval')
index_ref = pt.IndexRef.of('./indices/trec-fair-2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.trec-fair-2021.eval.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

"trec-fair-2021/train"

Accessing TREC Fair Ranking 2021 through trec-fair-2021/train is deprecated; use trec-fair/2021/train instead.

Official TREC Fair Ranking 2021 train set.

queries
57 queries

Language: en

Query type:
FairTrecQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. keywords: List[str]
  4. scope: str
  5. homepage: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair-2021/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, keywords, scope, homepage>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair-2021/train queries
[query_id]    [text]    [keywords]    [scope]    [homepage]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair-2021/train')
index_ref = pt.IndexRef.of('./indices/trec-fair-2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.trec-fair-2021.train.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
6.3M docs

Inherits docs from trec-fair-2021

Language: en

Document type:
FairTrecDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. text: str
  4. marked_up_text: str
  5. url: str
  6. quality_score: Optional[float]
  7. geographic_locations: Optional[List[str]]
  8. quality_score_disk: Optional[str]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair-2021/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, text, marked_up_text, url, quality_score, geographic_locations, quality_score_disk>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair-2021/train docs
[doc_id]    [title]    [text]    [marked_up_text]    [url]    [quality_score]    [geographic_locations]    [quality_score_disk]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair-2021/train')
# Index trec-fair-2021
indexer = pt.IterDictIndexer('./indices/trec-fair-2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'text', 'url'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.trec-fair-2021.train')
for doc in dataset.iter_documents():
    print(doc)  # an AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
2.2M qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition  Count  %
   1  relevant     2.2M  100.0%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair-2021/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair-2021/train qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:trec-fair-2021/train')
index_ref = pt.IndexRef.of('./indices/trec-fair-2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.trec-fair-2021.train.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
