ir_datasets: HC4 (HLTCOE CLIR Common-Crawl Collection)

To access the documents of this dataset, you will need to download them from Common Crawl. The scripts for downloading and validating the documents are in HLTCOE/HC4. Use the following commands to download the documents:
git clone https://github.com/hltcoe/HC4
cd HC4
pip install -r requirements.txt
python download_documents.py --storage ~/.ir_datasets/hc4/ \
--zho ./resources/hc4/zho/ids.jsonl.gz \
--fas ./resources/hc4/fas/ids.jsonl.gz \
--rus ./resources/hc4/rus/ids.*.jsonl.gz \
--jobs {number of processes}
After downloading, post-process the downloaded files to verify that all (and only) the specified documents were downloaded, and to restore the collection ordering specified in the id files:
for lang in zho fas rus; do
    python fix_document_order.py --hc4_file ~/.ir_datasets/hc4/$lang/hc4_docs.jsonl \
        --id_file ./resources/hc4/$lang/ids*.jsonl.gz \
        --check_hash
done
You can also store the documents in another directory and create a soft link to it at ~/.ir_datasets/hc4/.
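For example, if the documents were downloaded to a different location, the soft link can be created as follows (the `/data/hc4` path here is a hypothetical example; substitute your own storage directory):

```shell
# Hypothetical layout: documents were downloaded to /data/hc4 instead of the default.
mkdir -p "$HOME/.ir_datasets"
# -sfn: create (or replace) the symbolic link without following an existing one
ln -sfn /data/hc4 "$HOME/.ir_datasets/hc4"
```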
HC4 is a new suite of test collections for ad hoc Cross-Language Information Retrieval (CLIR), with Common Crawl News documents in Chinese, Persian, and Russian, topics in English and in the document languages, and graded relevance judgments.
BibTeX:
@inproceedings{Lawrie2022HC4,
  author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
  title = {{HC4}: A New Suite of Test Collections for Ad Hoc {CLIR}},
  booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)},
  year = {2022},
  month = apr,
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Stavanger, Norway},
  url = {https://arxiv.org/abs/2201.09992}
}

The Persian collection contains English queries and Persian documents for retrieval. Human- and machine-translated queries are provided in the query object, supporting monolingual retrieval, or cross-language retrieval under the assumption that a machine translation of the query into Persian is available.
Language: fa
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/fa")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, url, time, cc_file>
You can find more details about the Python API here.
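The document fields are accessible by name, as with any namedtuple. As a quick offline illustration (this uses a hand-built namedtuple mirroring the schema above, with invented values, not real HC4 data):

```python
from collections import namedtuple

# Mirrors the hc4 document schema shown above; all values here are invented.
HC4Doc = namedtuple("HC4Doc", ["doc_id", "title", "text", "url", "time", "cc_file"])

doc = HC4Doc(
    doc_id="abc123",                   # hypothetical identifier
    title="Example headline",
    text="Body of the news article.",
    url="https://example.com/article",
    time="2020-01-01T00:00:00Z",
    cc_file="crawl-data/CC-NEWS/example.warc.gz",  # illustrative path
)

# Fields are plain attributes, so e.g. building an id -> title index is direct:
index = {doc.doc_id: doc.title}
print(index["abc123"])
```

When the collection is available locally, ir_datasets also supports random access to individual documents via `dataset.docs_store().get(doc_id)`.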
Development split of hc4/fa.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/fa/dev")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
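Since each query carries the original English title/description plus human (`ht_*`) and machine (`mt_*`) translations, a run script typically selects one variant per experimental condition. A minimal sketch (the namedtuple below is hand-built with invented values, and `pick_query_text` is a hypothetical helper, not part of ir_datasets):

```python
from collections import namedtuple

# Subset of the query fields shown above, with invented values.
Query = namedtuple("Query", ["query_id", "title", "ht_title", "mt_title"])

def pick_query_text(query, condition):
    """Select the query title for a given experimental condition.

    'clir'    -> original English title (cross-language retrieval)
    'mono-ht' -> human translation (monolingual upper bound)
    'mono-mt' -> machine translation (realistic monolingual pipeline)
    """
    return {
        "clir": query.title,
        "mono-ht": query.ht_title,
        "mono-mt": query.mt_title,
    }[condition]

q = Query("1", "solar power", "انرژی خورشیدی", "نیروی خورشیدی")
print(pick_query_text(q, "clir"))
```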
Test split of hc4/fa.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/fa/test")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
Train split of hc4/fa.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/fa/train")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
The Russian collection contains English queries and Russian documents for retrieval. Human- and machine-translated queries are provided in the query object, supporting monolingual retrieval, or cross-language retrieval under the assumption that a machine translation of the query into Russian is available.
Language: ru
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/ru")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, url, time, cc_file>
You can find more details about the Python API here.
Development split of hc4/ru.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/ru/dev")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
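The graded relevance judgments for a split can be read with `dataset.qrels_iter()`; in ir_datasets, qrels are namedtuples carrying at least a query_id, a doc_id, and a relevance grade (the exact fields can be inspected via `dataset.qrels_cls()`). As an offline sketch of grouping judgments by query (mock records standing in for real qrels; the grouping logic is illustrative, not an ir_datasets API):

```python
from collections import namedtuple, defaultdict

# Mock qrel records; real ones come from dataset.qrels_iter().
Qrel = namedtuple("Qrel", ["query_id", "doc_id", "relevance"])
qrels = [Qrel("1", "d1", 3), Qrel("1", "d2", 0), Qrel("2", "d3", 1)]

# Group relevant documents (relevance > 0) by query for later evaluation.
relevant = defaultdict(set)
for qrel in qrels:
    if qrel.relevance > 0:
        relevant[qrel.query_id].add(qrel.doc_id)

print(dict(relevant))
```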
Test split of hc4/ru.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/ru/test")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
Train split of hc4/ru.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/ru/train")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
The Chinese collection contains English queries and Chinese documents for retrieval. Human- and machine-translated queries are provided in the query object, supporting monolingual retrieval, or cross-language retrieval under the assumption that a machine translation of the query into Chinese is available.
Language: zh
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/zh")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, text, url, time, cc_file>
You can find more details about the Python API here.
Development split of hc4/zh.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/zh/dev")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
Test split of hc4/zh.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/zh/test")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.
Train split of hc4/zh.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("hc4/zh/train")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, description, ht_title, ht_description, mt_title, mt_description, narrative_by_relevance, report, report_url, report_date, translation_lang>
You can find more details about the Python API here.