ir_datasets: NeuCLIR Corpus

To access the documents of this dataset, you will need to download them from Common Crawl. The scripts for downloading and validating the documents are in NeuCLIR/download-collection. Please use the following commands to download the documents:
git clone https://github.com/NeuCLIR/download-collection
cd download-collection
pip install -r requirements.txt
python download_documents.py --storage ~/.ir_datasets/neuclir/1 \
    --zho ./resource/zho/ids.jsonl.gz \
    --fas ./resource/fas/ids.jsonl.gz \
    --rus ./resource/rus/ids.*.jsonl.gz \
    --jobs {number of processes}
After downloading, please also post-process the downloaded files to verify that all (and only) the specified documents were downloaded, and to reorder the collection to match the ordering specified in the id files:
for lang in zho fas rus; do
    python fix_document_order.py --raw_download_file ~/.ir_datasets/neuclir/1/$lang/docs.jsonl \
        --id_file ./resource/$lang/ids*.jsonl.gz \
        --check_hash
done
You can also store the documents in another directory and create a soft link to it at ~/.ir_datasets/neuclir/1/ (see the sketch below).
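A minimal sketch of creating such a soft link in Python, assuming a hypothetical download location /data/neuclir-v1 (replace it with wherever you actually stored the documents):

import os

download_dir = "/data/neuclir-v1"                           # hypothetical actual storage location
link_path = os.path.expanduser("~/.ir_datasets/neuclir/1")  # where ir_datasets expects the documents
os.makedirs(os.path.dirname(link_path), exist_ok=True)
if not os.path.exists(link_path):
    os.symlink(download_dir, link_path)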
This is the dataset created for the TREC 2022 NeuCLIR Track. Topics will be developed and released by NIST by June 2022. Relevance judgments will be available after the evaluation (around November).
The collection is designed to be similar to [HC4], and a large portion of the HC4 documents are ported to this collection. Users can conduct experiments on this collection with the HC4 queries and qrels for development, as in the sketch below.
Version 1 of the NeuCLIR corpus.
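A minimal development sketch, assuming the HC4-filtered subsets documented below are used as the development set (field names follow the listings in those subsections): group the qrels by query so each development query is paired with its judged documents.

from collections import defaultdict

import ir_datasets

dataset = ir_datasets.load("neuclir/1/fa/hc4-filtered")

# Group relevance judgments by query for development-time analysis.
qrels_by_query = defaultdict(dict)
for qrel in dataset.qrels_iter():
    qrels_by_query[qrel.query_id][qrel.doc_id] = qrel.relevance

for query in dataset.queries_iter():
    judged = qrels_by_query.get(query.query_id, {})
    print(query.query_id, query.title, f"({len(judged)} judged documents)")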
The Persian collection contains English queries (to be released) and Persian documents for retrieval. Human and machine translated queries will be provided in the query object for running monolingual retrieval, or cross-language retrieval assuming machine translation of the query into Persian is available.
Language: fa
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/fa")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, url, time, cc_file>
You can find more details about the Python API here.
ir_datasets export neuclir/1/fa docs
[doc_id] [title] [text] [url] [time] [cc_file]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.neuclir.1.fa')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{ "docs": { "count": 2232016, "fields": { "doc_id": { "max_len": 36, "common_prefix": "" } } } }
Subset of the Persian collection that intersects with HC4. The 60 queries are the hc4/fa/dev and hc4/fa/test sets combined.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/fa/hc4-filtered")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
ir_datasets export neuclir/1/fa/hc4-filtered queries
[query_id] [title] [description] [ht_title] [ht_description] [mt_title] [mt_description] [narrative_by_relevance] [report] [report_url] [report_date] [translation_lang]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.neuclir.1.fa.hc4-filtered.queries') # AdhocTopics
for topic in topics.iter():
    print(topic)  # an AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
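Which query field to use depends on the retrieval condition; the following minimal sketch selects the query text from the fields listed above (the condition names here are only illustrative):

import ir_datasets

dataset = ir_datasets.load("neuclir/1/fa/hc4-filtered")

condition = "mt"  # "eng" = English CLIR queries; "ht"/"mt" = human/machine translated queries

for query in dataset.queries_iter():
    if condition == "eng":
        text = query.title       # original English title
    elif condition == "ht":
        text = query.ht_title    # human translation
    else:
        text = query.mt_title    # machine translation
    print(query.query_id, text)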
Language: fa
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/fa/hc4-filtered")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, url, time, cc_file>
You can find more details about the Python API here.
ir_datasets export neuclir/1/fa/hc4-filtered docs
[doc_id] [title] [text] [url] [time] [cc_file]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.neuclir.1.fa.hc4-filtered')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Not-valuable. Information in the document might be included in a report footnote, or omitted entirely. | 2.6K | 82.8% |
1 | Somewhat-valuable. The most valuable information in the document would be found in the remainder of such a report. | 261 | 8.5% |
3 | Very-valuable. Information in the document would be found in the lead paragraph of a report that is later written on the topic. | 269 | 8.7% |
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/fa/hc4-filtered")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export neuclir/1/fa/hc4-filtered qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.neuclir.1.fa.hc4-filtered.qrels') # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
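As a hedged sketch of development-time evaluation against these qrels, the separate ir_measures package (not required by ir_datasets) can score a TREC-format run file; the run file name below is hypothetical:

import ir_datasets
import ir_measures
from ir_measures import nDCG, RR

dataset = ir_datasets.load("neuclir/1/fa/hc4-filtered")

run = ir_measures.read_trec_run("bm25.fa.run")  # hypothetical run file
print(ir_measures.calc_aggregate([nDCG@20, RR], dataset.qrels_iter(), run))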
Bibtex:
@inproceedings{Lawrie2022HC4,
  author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
  title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
  booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)},
  year = {2022},
  month = apr,
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Stavanger, Norway},
  url = {https://arxiv.org/abs/2201.09992}
}

{ "docs": { "count": 391703, "fields": { "doc_id": { "max_len": 36, "common_prefix": "" } } }, "queries": { "count": 60 }, "qrels": { "count": 3087, "fields": { "relevance": { "counts_by_value": { "0": 2557, "3": 269, "1": 261 } } } } }
The Russian collection contains English queries (to be released) and Russian documents for retrieval. Human and machine translated queries will be provided in the query object for running monolingual retrieval, or cross-language retrieval assuming machine translation of the query into Russian is available.
Language: ru
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/ru")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, url, time, cc_file>
You can find more details about the Python API here.
ir_datasets export neuclir/1/ru docs
[doc_id] [title] [text] [url] [time] [cc_file]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.neuclir.1.ru')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{ "docs": { "count": 4627543, "fields": { "doc_id": { "max_len": 36, "common_prefix": "" } } } }
Subset of the Russian collection that intersects with HC4. The 54 queries are the hc4/ru/dev and hc4/ru/test sets combined.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/ru/hc4-filtered")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
ir_datasets export neuclir/1/ru/hc4-filtered queries
[query_id] [title] [description] [ht_title] [ht_description] [mt_title] [mt_description] [narrative_by_relevance] [report] [report_url] [report_date] [translation_lang]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.neuclir.1.ru.hc4-filtered.queries') # AdhocTopics
for topic in topics.iter():
    print(topic)  # an AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Language: ru
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/ru/hc4-filtered")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, url, time, cc_file>
You can find more details about the Python API here.
ir_datasets export neuclir/1/ru/hc4-filtered docs
[doc_id] [title] [text] [url] [time] [cc_file]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.neuclir.1.ru.hc4-filtered')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Not-valuable. Information in the document might be included in a report footnote, or omitted entirely. | 2.5K | 76.8% |
1 | Somewhat-valuable. The most valuable information in the document would be found in the remainder of such a report. | 478 | 14.8% |
3 | Very-valuable. Information in the document would be found in the lead paragraph of a report that is later written on the topic. | 274 | 8.5% |
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/ru/hc4-filtered")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export neuclir/1/ru/hc4-filtered qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.neuclir.1.ru.hc4-filtered.qrels') # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Bibtex:
@inproceedings{Lawrie2022HC4,
  author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
  title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
  booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)},
  year = {2022},
  month = apr,
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Stavanger, Norway},
  url = {https://arxiv.org/abs/2201.09992}
}

{ "docs": { "count": 964719, "fields": { "doc_id": { "max_len": 36, "common_prefix": "" } } }, "queries": { "count": 54 }, "qrels": { "count": 3235, "fields": { "relevance": { "counts_by_value": { "0": 2483, "1": 478, "3": 274 } } } } }
The Chinese collection contains English queries (to be released) and Chinese documents for retrieval. Human and machine translated queries will be provided in the query object for running monolingual retrieval, or cross-language retrieval assuming machine translation of the query into Chinese is available.
Language: zh
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/zh")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, url, time, cc_file>
You can find more details about the Python API here.
ir_datasets export neuclir/1/zh docs
[doc_id] [title] [text] [url] [time] [cc_file]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.neuclir.1.zh')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{ "docs": { "count": 3179209, "fields": { "doc_id": { "max_len": 36, "common_prefix": "" } } } }
Subset of the Chinese collection that intersects with HC4. The 60 queries are the hc4/zh/dev and hc4/zh/test sets combined.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/zh/hc4-filtered")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
ir_datasets export neuclir/1/zh/hc4-filtered queries
[query_id] [title] [description] [ht_title] [ht_description] [mt_title] [mt_description] [narrative_by_relevance] [report] [report_url] [report_date] [translation_lang]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.neuclir.1.zh.hc4-filtered.queries') # AdhocTopics
for topic in topics.iter():
    print(topic)  # an AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Language: zh
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/zh/hc4-filtered")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, url, time, cc_file>
You can find more details about the Python API here.
ir_datasets export neuclir/1/zh/hc4-filtered docs
[doc_id] [title] [text] [url] [time] [cc_file]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.neuclir.1.zh.hc4-filtered')
for doc in dataset.iter_documents():
    print(doc)  # a document from the AdhocDocumentStore
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | Not-valuable. Information in the document might be included in a report footnote, or omitted entirely. | 2.7K | 82.4% |
1 | Somewhat-valuable. The most valuable information in the document would be found in the remainder of such a report. | 222 | 6.9% |
3 | Very-valuable. Information in the document would be found in the lead paragraph of a report that is later written on the topic. | 344 | 10.7% |
Examples:
import ir_datasets
dataset = ir_datasets.load("neuclir/1/zh/hc4-filtered")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export neuclir/1/zh/hc4-filtered qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.neuclir.1.zh.hc4-filtered.qrels') # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Bibtex:
@inproceedings{Lawrie2022HC4,
  author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
  title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
  booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)},
  year = {2022},
  month = apr,
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Stavanger, Norway},
  url = {https://arxiv.org/abs/2201.09992}
}

{ "docs": { "count": 519945, "fields": { "doc_id": { "max_len": 36, "common_prefix": "" } } }, "queries": { "count": 60 }, "qrels": { "count": 3217, "fields": { "relevance": { "counts_by_value": { "0": 2651, "3": 344, "1": 222 } } } } }