GitHub: datasets/trec_fair.py

ir_datasets: TREC Fair Ranking

Index
  1. trec-fair
  2. trec-fair/2021
  3. trec-fair/2021/eval
  4. trec-fair/2021/train
  5. trec-fair/2022
  6. trec-fair/2022/train

"trec-fair"

The TREC Fair Ranking track evaluates systems according to how fairly they rank documents.


"trec-fair/2021"

The TREC Fair Ranking track evaluates systems according to how fairly they rank documents.

docs
6.3M docs

Language: en

Document type:
FairTrecDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. text: str
  4. marked_up_text: str
  5. url: str
  6. quality_score: Optional[float]
  7. geographic_locations: Optional[List[str]]
  8. quality_score_disk: Optional[str]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2021")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, text, marked_up_text, url, quality_score, geographic_locations, quality_score_disk>

You can find more details about the Python API here.
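
In addition to sequential iteration, documents can be fetched by ID through a docs_store, which provides random access. A minimal sketch (the document ID below is a placeholder, not a real article ID):

import ir_datasets
dataset = ir_datasets.load("trec-fair/2021")
docstore = dataset.docs_store()
doc = docstore.get("some_doc_id")  # placeholder ID; returns a FairTrecDoc namedtuple
print(doc.title)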

CLI
ir_datasets export trec-fair/2021 docs
[doc_id]    [title]    [text]    [marked_up_text]    [url]    [quality_score]    [geographic_locations]    [quality_score_disk]
...

You can find more details about the CLI here.
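
The export command also accepts other output formats; for example, a JSONL dump can be easier to parse than the default TSV. A sketch using standard shell tools to peek at the first record:

ir_datasets export trec-fair/2021 docs --format jsonl | head -n 1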

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2021')
# Index trec-fair/2021
indexer = pt.IterDictIndexer('./indices/trec-fair_2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'text', 'url'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.trec-fair.2021')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.


"trec-fair/2021/eval"

Official TREC Fair Ranking 2021 evaluation set.

queries
49 queries

Language: en

Query type:
FairTrecEvalQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. keywords: List[str]
  4. scope: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2021/eval")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, keywords, scope>

You can find more details about the Python API here.
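
Because queries are namedtuples, they load directly into a pandas DataFrame for inspection. A minimal sketch, assuming pandas is installed:

import pandas as pd
import ir_datasets
dataset = ir_datasets.load("trec-fair/2021/eval")
queries = pd.DataFrame(dataset.queries_iter())  # columns: query_id, text, keywords, scope
print(queries.head())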

CLI
ir_datasets export trec-fair/2021/eval queries
[query_id]    [text]    [keywords]    [scope]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2021/eval')
index_ref = pt.IndexRef.of('./indices/trec-fair_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.trec-fair.2021.eval.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
6.3M docs

Inherits docs from trec-fair/2021

Language: en

Document type:
FairTrecDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. text: str
  4. marked_up_text: str
  5. url: str
  6. quality_score: Optional[float]
  7. geographic_locations: Optional[List[str]]
  8. quality_score_disk: Optional[str]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2021/eval")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, text, marked_up_text, url, quality_score, geographic_locations, quality_score_disk>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair/2021/eval docs
[doc_id]    [title]    [text]    [marked_up_text]    [url]    [quality_score]    [geographic_locations]    [quality_score_disk]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2021/eval')
# Index trec-fair/2021
indexer = pt.IterDictIndexer('./indices/trec-fair_2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'text', 'url'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.trec-fair.2021.eval')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
14K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition  Count  %
1     relevant    14K    100.0%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2021/eval")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.
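
For evaluation code that needs fast lookups, the qrels can also be materialized as a nested dictionary:

import ir_datasets
dataset = ir_datasets.load("trec-fair/2021/eval")
qrels = dataset.qrels_dict()  # {query_id: {doc_id: relevance}}
print(len(qrels))  # number of judged queries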

CLI
ir_datasets export trec-fair/2021/eval qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2021/eval')
index_ref = pt.IndexRef.of('./indices/trec-fair_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.trec-fair.2021.eval.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.


"trec-fair/2021/train"

Official TREC Fair Ranking 2021 train set.

queries
57 queries

Language: en

Query type:
FairTrecQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. keywords: List[str]
  4. scope: str
  5. homepage: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2021/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, keywords, scope, homepage>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair/2021/train queries
[query_id]    [text]    [keywords]    [scope]    [homepage]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2021/train')
index_ref = pt.IndexRef.of('./indices/trec-fair_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.
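
To keep the retrieval results for later evaluation or submission, they can be written out as a TREC run file. A sketch continuing from the pipeline above (the output filename is arbitrary):

res = pipeline(dataset.get_topics('text'))
pt.io.write_results(res, 'bm25.trec-fair-2021-train.res.gz')  # TREC run format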

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.trec-fair.2021.train.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
6.3M docs

Inherits docs from trec-fair/2021

Language: en

Document type:
FairTrecDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. text: str
  4. marked_up_text: str
  5. url: str
  6. quality_score: Optional[float]
  7. geographic_locations: Optional[List[str]]
  8. quality_score_disk: Optional[str]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2021/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, text, marked_up_text, url, quality_score, geographic_locations, quality_score_disk>

You can find more details about the Python API here.
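
docs_iter() supports slicing, which is handy for sampling a few documents without scanning the full 6.3M-document corpus. A minimal sketch:

import ir_datasets
dataset = ir_datasets.load("trec-fair/2021/train")
for doc in dataset.docs_iter()[:5]:  # efficient slice; avoids a full corpus scan
    print(doc.doc_id, doc.title)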

CLI
ir_datasets export trec-fair/2021/train docs
[doc_id]    [title]    [text]    [marked_up_text]    [url]    [quality_score]    [geographic_locations]    [quality_score_disk]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2021/train')
# Index trec-fair/2021
indexer = pt.IterDictIndexer('./indices/trec-fair_2021')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'text', 'url'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.trec-fair.2021.train')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
2.2M qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition  Count  %
1     relevant    2.2M   100.0%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2021/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair/2021/train qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2021/train')
index_ref = pt.IndexRef.of('./indices/trec-fair_2021') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.
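
pt.Experiment also accepts perquery=True to report one score per query rather than an aggregate, which can be useful for query-level analysis on a fairness-oriented task. A sketch reusing the pipeline above:

pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20],
    perquery=True  # one row per (system, query, measure)
)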

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.trec-fair.2021.train.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.


"trec-fair/2022"

The TREC Fair Ranking 2022 track focuses on fairly prioritising Wikimedia articles for editing, to provide fair exposure to articles from different groups.

docs
6.5M docs

Language: en

Document type:
FairTrec2022Doc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. text: str
  4. url: str
  5. pred_qual: Optional[float]
  6. qual_cat: Optional[str]
  7. page_countries: Optional[List[str]]
  8. page_subcont_regions: Optional[List[str]]
  9. source_countries: Optional[Dict[str,int]]
  10. source_subcont_regions: Optional[Dict[str,int]]
  11. gender: Optional[List[str]]
  12. occupations: Optional[List[str]]
  13. years: Optional[List[int]]
  14. num_sitelinks: Optional[int]
  15. relative_pageviews: Optional[float]
  16. first_letter: Optional[str]
  17. creation_date: Optional[str]
  18. first_letter_category: Optional[str]
  19. gender_category: Optional[str]
  20. creation_date_category: Optional[str]
  21. years_category: Optional[str]
  22. relative_pageviews_category: Optional[str]
  23. num_sitelinks_category: Optional[str]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2022")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, text, url, pred_qual, qual_cat, page_countries, page_subcont_regions, source_countries, source_subcont_regions, gender, occupations, years, num_sitelinks, relative_pageviews, first_letter, creation_date, first_letter_category, gender_category, creation_date_category, years_category, relative_pageviews_category, num_sitelinks_category>

You can find more details about the Python API here.
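
Most of the 2022 fairness attributes are optional and may be None, so consuming code should check them before use. A sketch that prints the first document carrying country metadata:

import ir_datasets
dataset = ir_datasets.load("trec-fair/2022")
for doc in dataset.docs_iter():
    if doc.page_countries:  # optional field; None or empty when unknown
        print(doc.doc_id, doc.title, doc.page_countries)
        break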

CLI
ir_datasets export trec-fair/2022 docs
[doc_id]    [title]    [text]    [url]    [pred_qual]    [qual_cat]    [page_countries]    [page_subcont_regions]    [source_countries]    [source_subcont_regions]    [gender]    [occupations]    [years]    [num_sitelinks]    [relative_pageviews]    [first_letter]    [creation_date]    [first_letter_category]    [gender_category]    [creation_date_category]    [years_category]    [relative_pageviews_category]    [num_sitelinks_category]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2022')
# Index trec-fair/2022
indexer = pt.IterDictIndexer('./indices/trec-fair_2022')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'text', 'url'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.trec-fair.2022')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.


"trec-fair/2022/train"

Official TREC Fair Ranking 2022 train set.

queries
50 queries

Language: en

Query type:
FairTrec2022TrainQuery: (namedtuple)
  1. query_id: str
  2. text: str
  3. url: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2022/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text, url>

You can find more details about the Python API here.
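
A common pattern is to build an in-memory mapping from query ID to text; with only 50 queries this is cheap:

import ir_datasets
dataset = ir_datasets.load("trec-fair/2022/train")
queries = {q.query_id: q.text for q in dataset.queries_iter()}
print(len(queries))  # 50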

CLI
ir_datasets export trec-fair/2022/train queries
[query_id]    [text]    [url]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2022/train')
index_ref = pt.IndexRef.of('./indices/trec-fair_2022') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('text'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.trec-fair.2022.train.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
6.5M docs

Inherits docs from trec-fair/2022

Language: en

Document type:
FairTrec2022Doc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. text: str
  4. url: str
  5. pred_qual: Optional[float]
  6. qual_cat: Optional[str]
  7. page_countries: Optional[List[str]]
  8. page_subcont_regions: Optional[List[str]]
  9. source_countries: Optional[Dict[str,int]]
  10. source_subcont_regions: Optional[Dict[str,int]]
  11. gender: Optional[List[str]]
  12. occupations: Optional[List[str]]
  13. years: Optional[List[int]]
  14. num_sitelinks: Optional[int]
  15. relative_pageviews: Optional[float]
  16. first_letter: Optional[str]
  17. creation_date: Optional[str]
  18. first_letter_category: Optional[str]
  19. gender_category: Optional[str]
  20. creation_date_category: Optional[str]
  21. years_category: Optional[str]
  22. relative_pageviews_category: Optional[str]
  23. num_sitelinks_category: Optional[str]

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2022/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, text, url, pred_qual, qual_cat, page_countries, page_subcont_regions, source_countries, source_subcont_regions, gender, occupations, years, num_sitelinks, relative_pageviews, first_letter, creation_date, first_letter_category, gender_category, creation_date_category, years_category, relative_pageviews_category, num_sitelinks_category>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair/2022/train docs
[doc_id]    [title]    [text]    [url]    [pred_qual]    [qual_cat]    [page_countries]    [page_subcont_regions]    [source_countries]    [source_subcont_regions]    [gender]    [occupations]    [years]    [num_sitelinks]    [relative_pageviews]    [first_letter]    [creation_date]    [first_letter_category]    [gender_category]    [creation_date_category]    [years_category]    [relative_pageviews_category]    [num_sitelinks_category]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2022/train')
# Index trec-fair/2022
indexer = pt.IterDictIndexer('./indices/trec-fair_2022')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'text', 'url'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.trec-fair.2022.train')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
2.1M qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition  Count  %
1     relevant    2.1M   100.0%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("trec-fair/2022/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export trec-fair/2022/train qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.
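
If your ir_datasets version supports it, the qrels can also be exported in standard TREC qrels format, which tools such as trec_eval expect:

ir_datasets export trec-fair/2022/train qrels --format trec > trec-fair-2022-train.qrels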

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:trec-fair/2022/train')
index_ref = pt.IndexRef.of('./indices/trec-fair_2022') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('text'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.trec-fair.2022.train.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # assessments for one topic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
