Github: datasets/clueweb12.py

ir_datasets: ClueWeb12

Index
  1. clueweb12
  2. clueweb12/b13
  3. clueweb12/b13/clef-ehealth
  4. clueweb12/b13/clef-ehealth/cs
  5. clueweb12/b13/clef-ehealth/de
  6. clueweb12/b13/clef-ehealth/fr
  7. clueweb12/b13/clef-ehealth/hu
  8. clueweb12/b13/clef-ehealth/pl
  9. clueweb12/b13/clef-ehealth/sv
  10. clueweb12/b13/ntcir-www-1
  11. clueweb12/b13/ntcir-www-2
  12. clueweb12/b13/ntcir-www-3
  13. clueweb12/b13/trec-misinfo-2019
  14. clueweb12/trec-web-2013
  15. clueweb12/trec-web-2014

"clueweb12"

ClueWeb 2012 web document collection. Contains 733M web pages.

The dataset is obtained for a fee from CMU and is shipped on hard drives. More information is available from the ClueWeb12 project website.

docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
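
The body field holds the raw response bytes and body_content_type indicates how to interpret them. Below is a minimal sketch of extracting plain text from HTML documents; it assumes the third-party beautifulsoup4 package is installed (it is not a dependency of ir_datasets) and that HTML records report a content type starting with text/html.

import ir_datasets
from bs4 import BeautifulSoup  # assumed third-party dependency

dataset = ir_datasets.load('clueweb12')
for doc in dataset.docs_iter():
    if doc.body_content_type.startswith('text/html'):
        # decode the raw bytes and strip the markup
        text = BeautifulSoup(doc.body, 'html.parser').get_text()
        print(doc.doc_id, text[:80])
        break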

"clueweb12/b13"

The official "B13" subset of the ClueWeb12 dataset, containing 52M web pages.

docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>

"clueweb12/b13/clef-ehealth"

The CLEF eHealth 2016-17 IR dataset. Contains consumer health queries and judgments that include trustworthiness and understandability scores in addition to the usual relevance assessments.

This dataset contains the combined 2016 and 2017 relevance judgments, since the same queries were used in both years. The assessment year can be distinguished using the iteration field (2016 is iteration 0, 2017 is iteration 1).
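
For instance, a minimal sketch of separating the two assessment rounds via the iteration field (assuming the values are the strings '0' and '1' as described above):

import ir_datasets

dataset = ir_datasets.load('clueweb12/b13/clef-ehealth')
# iteration '0' = CLEF 2016 assessments, '1' = CLEF 2017 assessments
qrels_2016 = [qrel for qrel in dataset.qrels_iter() if qrel.iteration == '0']
qrels_2017 = [qrel for qrel in dataset.qrels_iter() if qrel.iteration == '1']
print(len(qrels_2016), len(qrels_2017))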

queries

Language: en

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth')
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
EhealthQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. trustworthiness: int
  5. understandability: int
  6. iteration: str

Relevance levels

Rel.  Definition
 0    Not relevant
 1    Somewhat relevant
 2    Highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, trustworthiness, understandability, iteration>
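
Because each judgment also carries trustworthiness and understandability scores, the qrels can be filtered on several aspects at once. A sketch with arbitrary, illustrative thresholds (the aspect scales are not documented here):

import ir_datasets

dataset = ir_datasets.load('clueweb12/b13/clef-ehealth')
MIN_TRUST = 50       # arbitrary illustrative cutoff, not an official threshold
MIN_UNDERSTAND = 50  # arbitrary illustrative cutoff, not an official threshold
good = [qrel for qrel in dataset.qrels_iter()
        if qrel.relevance >= 1
        and qrel.trustworthiness >= MIN_TRUST
        and qrel.understandability >= MIN_UNDERSTAND]
print(len(good))
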
Citation
bibtex:
@inproceedings{Zuccon2016TheIT,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017CLEF,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}

"clueweb12/b13/clef-ehealth/cs"

The CLEF eHealth 2016-17 IR dataset, with queries professionally translated to Czech. See clueweb12/b13/clef-ehealth for more details.

queries

Language: cs

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/cs')
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/cs')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
EhealthQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. trustworthiness: int
  5. understandability: int
  6. iteration: str

Relevance levels

Rel.  Definition
 0    Not relevant
 1    Somewhat relevant
 2    Highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/cs')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, trustworthiness, understandability, iteration>
Citation
bibtex:
@inproceedings{Zuccon2016TheIT,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017CLEF,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}

"clueweb12/b13/clef-ehealth/de"

The CLEF eHealth 2016-17 IR dataset, with queries professionally translated to German. See clueweb12/b13/clef-ehealth for more details.

queries

Language: de

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/de')
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/de')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
EhealthQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. trustworthiness: int
  5. understandability: int
  6. iteration: str

Relevance levels

Rel.  Definition
 0    Not relevant
 1    Somewhat relevant
 2    Highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/de')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, trustworthiness, understandability, iteration>
Citation
bibtex:
@inproceedings{Zuccon2016TheIT,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017CLEF,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}

"clueweb12/b13/clef-ehealth/fr"

The CLEF eHealth 2016-17 IR dataset, with queries professionally translated to French. See clueweb12/b13/clef-ehealth for more details.

queries

Language: fr

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/fr')
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/fr')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
EhealthQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. trustworthiness: int
  5. understandability: int
  6. iteration: str

Relevance levels

Rel.  Definition
 0    Not relevant
 1    Somewhat relevant
 2    Highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/fr')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, trustworthiness, understandability, iteration>
Citation
bibtex:
@inproceedings{Zuccon2016TheIT,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017CLEF,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}

"clueweb12/b13/clef-ehealth/hu"

The CLEF eHealth 2016-17 IR dataset, with queries professionally translated to Hungarian. See clueweb12/b13/clef-ehealth for more details.

queries

Language: hu

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/hu')
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/hu')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
EhealthQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. trustworthiness: int
  5. understandability: int
  6. iteration: str

Relevance levels

Rel.  Definition
 0    Not relevant
 1    Somewhat relevant
 2    Highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/hu')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, trustworthiness, understandability, iteration>
Citation
bibtex:
@inproceedings{Zuccon2016TheIT,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017CLEF,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}

"clueweb12/b13/clef-ehealth/pl"

The CLEF eHealth 2016-17 IR dataset, with queries professionally translated to Polish. See clueweb12/b13/clef-ehealth for more details.

queries

Language: pl

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/pl')
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/pl')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
EhealthQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. trustworthiness: int
  5. understandability: int
  6. iteration: str

Relevance levels

Rel.  Definition
 0    Not relevant
 1    Somewhat relevant
 2    Highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/pl')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, trustworthiness, understandability, iteration>
Citation
bibtex:
@inproceedings{Zuccon2016TheIT,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017CLEF,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}

"clueweb12/b13/clef-ehealth/sv"

The CLEF eHealth 2016-17 IR dataset, with queries professionally translated to Swedish. See clueweb12/b13/clef-ehealth for more details.

queries

Language: sv

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/sv')
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/sv')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
EhealthQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. trustworthiness: int
  5. understandability: int
  6. iteration: str

Relevance levels

Rel.  Definition
 0    Not relevant
 1    Somewhat relevant
 2    Highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/clef-ehealth/sv')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, trustworthiness, understandability, iteration>
Citation
bibtex:
@inproceedings{Zuccon2016TheIT,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017CLEF,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}

"clueweb12/b13/ntcir-www-1"

The NTCIR-13 We Want Web (WWW) 1 ad-hoc ranking benchmark. Contains 100 queries with deep relevance judgments (avg 255 per query). Judgments aggregated from two assessors. Note that the qrels contain additional judgments from the NTCIR-14 CENTRE track.

queries

Language: en

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/ntcir-www-1')
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/ntcir-www-1')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition
 0    Two annotators rated as non-relevant
 1    One annotator rated as relevant, one as non-relevant
 2    Two annotators rated as relevant, OR one rated as highly relevant and one as non-relevant
 3    One annotator rated as highly relevant, one as relevant
 4    Two annotators rated as highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/ntcir-www-1')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
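
Judged documents can also be looked up directly by ID rather than by scanning the whole collection. A sketch using the docs_store lookup, assuming the ClueWeb12-B13 source files have been obtained and configured for ir_datasets:

import ir_datasets

dataset = ir_datasets.load('clueweb12/b13/ntcir-www-1')
docstore = dataset.docs_store()  # random-access lookup by doc_id
for qrel in dataset.qrels_iter():
    doc = docstore.get(qrel.doc_id)
    print(qrel.query_id, qrel.relevance, doc.url)
    break
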
Citation
bibtex:
@inproceedings{Luo2017OverviewNtcirWww1,
  title={Overview of the NTCIR-13 We Want Web Task},
  author={Cheng Luo and Tetsuya Sakai and Yiqun Liu and Zhicheng Dou and Chenyan Xiong and Jingfang Xu},
  booktitle={NTCIR},
  year={2017}
}

"clueweb12/b13/ntcir-www-2"

The NTCIR-14 We Want Web (WWW) 2 ad-hoc ranking benchmark. Contains 80 queries with deep relevance judgments (avg 345 per query). Judgments aggregated from two assessors.

queries

Language: en

Query type:
NtcirQuery: (namedtuple)
  1. query_id: str
  2. title: str
  3. description: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/ntcir-www-2')
for query in dataset.queries_iter():
    query # namedtuple<query_id, title, description>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/ntcir-www-2')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition
 0    Two annotators rated as non-relevant
 1    One annotator rated as relevant, one as non-relevant
 2    Two annotators rated as relevant, OR one rated as highly relevant and one as non-relevant
 3    One annotator rated as highly relevant, one as relevant
 4    Two annotators rated as highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/ntcir-www-2')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
Citation
bibtex:
@inproceedings{Mao2018OverviewNtcirWww2,
  title={Overview of the NTCIR-14 We Want Web Task},
  author={Jiaxin Mao and Tetsuya Sakai and Cheng Luo and Peng Xiao and Yiqun Liu and Zhicheng Dou},
  booktitle={NTCIR},
  year={2018}
}

"clueweb12/b13/ntcir-www-3"

The NTCIR-15 We Want Web (WWW) 3 ad-hoc ranking benchmark. Contains 160 queries with deep relevance judgments (to be released). 80 of the queries are from clueweb12/b13/ntcir-www-2.
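
A sketch that checks how many clueweb12/b13/ntcir-www-2 queries reappear in this benchmark by comparing query IDs (this assumes the reused queries keep the same IDs across the two datasets):

import ir_datasets

www2 = ir_datasets.load('clueweb12/b13/ntcir-www-2')
www3 = ir_datasets.load('clueweb12/b13/ntcir-www-3')
www2_ids = {query.query_id for query in www2.queries_iter()}
www3_ids = {query.query_id for query in www3.queries_iter()}
print(len(www2_ids & www3_ids))  # shared query IDs, assuming IDs are reused verbatim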

queries

Language: en

Query type:
NtcirQuery: (namedtuple)
  1. query_id: str
  2. title: str
  3. description: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/ntcir-www-3')
for query in dataset.queries_iter():
    query # namedtuple<query_id, title, description>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/ntcir-www-3')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>

"clueweb12/b13/trec-misinfo-2019"

The TREC Medical Misinformation 2019 dataset.

queries

Language: en

Query type:
MisinfoQuery: (namedtuple)
  1. query_id: str
  2. title: str
  3. cochranedoi: str
  4. description: str
  5. narrative: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/trec-misinfo-2019')
for query in dataset.queries_iter():
    query # namedtuple<query_id, title, cochranedoi, description, narrative>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/trec-misinfo-2019')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
MisinfoQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. effectiveness: int
  5. redibility: int

Relevance levels

Rel.  Definition
 0    Not relevant
 1    Relevant
 2    Highly relevant

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/b13/trec-misinfo-2019')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, effectiveness, redibility>
Citation
bibtex:
@inproceedings{Abualsaud2019OverviewTrec2019Decision,
  title={Overview of the TREC 2019 Decision Track},
  author={Mustafa Abualsaud and Christina Lioma and Maria Maistro and Mark D. Smucker and Guido Zuccon},
  booktitle={TREC},
  year={2019}
}

"clueweb12/trec-web-2013"

The TREC Web Track 2013 ad-hoc ranking benchmark. Contains 50 queries with deep relevance judgments.

queries

Language: en

Query type:
TrecWebTrackQuery: (namedtuple)
  1. query_id: str
  2. query: str
  3. description: str
  4. type: str
  5. subtopics: Tuple[
    TrecSubtopic: (namedtuple)
    1. number: str
    2. text: str
    3. type: str
    , ...]

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/trec-web-2013')
for query in dataset.queries_iter():
    query # namedtuple<query_id, query, description, type, subtopics>
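
Each query carries a tuple of TrecSubtopic entries, which can be iterated like any other sequence, for example:

import ir_datasets

dataset = ir_datasets.load('clueweb12/trec-web-2013')
for query in dataset.queries_iter():
    print(query.query_id, query.query, query.type)
    for subtopic in query.subtopics:
        # each subtopic is a namedtuple<number, text, type>
        print('  ', subtopic.number, subtopic.type, subtopic.text)
    break
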
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/trec-web-2013')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition
-2    Junk: This page does not appear to be useful for any reasonable purpose; it may be spam or junk
 0    Non: The content of this page does not provide useful information on the topic, but may provide useful information on other topics, including other interpretations of the same query.
 1    Rel: The content of this page provides some information on the topic, which may be minimal; the relevant information must be on that page, not just promising-looking anchor text pointing to a possibly useful page.
 2    HRel: The content of this page provides substantial information on the topic.
 3    Key: This page or site is dedicated to the topic; authoritative and comprehensive, it is worthy of being a top result in a web search engine.
 4    Nav: This page represents a home page of an entity directly named by the query; the user may be searching for this specific page or site.

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/trec-web-2013')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
Citation
bibtex:
@inproceedings{CollinsThompson2013TrecWeb,
  title={TREC 2013 Web Track Overview},
  author={Kevyn Collins-Thompson and Paul Bennett and Fernando Diaz and Charles L. A. Clarke and Ellen M. Voorhees},
  booktitle={TREC},
  year={2013}
}

"clueweb12/trec-web-2014"

The TREC Web Track 2014 ad-hoc ranking benchmark. Contains 50 queries with deep relevance judgments.

queries

Language: en

Query type:
TrecWebTrackQuery: (namedtuple)
  1. query_id: str
  2. query: str
  3. description: str
  4. type: str
  5. subtopics: Tuple[
    TrecSubtopic: (namedtuple)
    1. number: str
    2. text: str
    3. type: str
    , ...]

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/trec-web-2014')
for query in dataset.queries_iter():
    query # namedtuple<query_id, query, description, type, subtopics>
docs

Language: en

Document type:
WarcDoc: (namedtuple)
  1. doc_id: str
  2. url: str
  3. date: str
  4. http_headers: bytes
  5. body: bytes
  6. body_content_type: str

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/trec-web-2014')
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, url, date, http_headers, body, body_content_type>
qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition
-2    Junk: This page does not appear to be useful for any reasonable purpose; it may be spam or junk
 0    Non: The content of this page does not provide useful information on the topic, but may provide useful information on other topics, including other interpretations of the same query.
 1    Rel: The content of this page provides some information on the topic, which may be minimal; the relevant information must be on that page, not just promising-looking anchor text pointing to a possibly useful page.
 2    HRel: The content of this page provides substantial information on the topic.
 3    Key: This page or site is dedicated to the topic; authoritative and comprehensive, it is worthy of being a top result in a web search engine.
 4    Nav: This page represents a home page of an entity directly named by the query; the user may be searching for this specific page or site.

Example

import ir_datasets
dataset = ir_datasets.load('clueweb12/trec-web-2014')
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
Citation
bibtex:
@inproceedings{CollinsThompson2014TrecWeb,
  title={TREC 2014 Web Track Overview},
  author={Kevyn Collins-Thompson and Craig Macdonald and Paul Bennett and Fernando Diaz and Ellen M. Voorhees},
  booktitle={TREC},
  year={2014}
}