ir_datasets: TREC Arabic
To use this dataset, you need a copy of the source corpus, provided by the Linguistic Data Consortium. The specific resource needed is LDC2001T55.
Many organizations already have a subscription to the LDC, so access to the collection can be as easy as confirming the data usage agreement and downloading the corpus. Check with your library for access details.
The source file is: arabic_newswire_a_LDC2001T55.tgz.
ir_datasets expects this file to be copied/linked as ~/.ir_datasets/trec-arabic/corpus.tgz.
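If you already have the archive locally, a minimal sketch along the following lines can link it into the expected location (the source path below is a placeholder for wherever your LDC copy lives):

from pathlib import Path

source = Path("/path/to/arabic_newswire_a_LDC2001T55.tgz")  # placeholder: your local LDC2001T55 copy
target = Path.home() / ".ir_datasets" / "trec-arabic" / "corpus.tgz"

target.parent.mkdir(parents=True, exist_ok=True)
if not target.exists():
    target.symlink_to(source)  # or copy the file instead, e.g. shutil.copy(source, target)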
A collection of news articles in Arabic, used for multi-lingual evaluation in TREC 2001 and TREC 2002.
Document collection from LDC2001T55.
Language: ar
Examples:
import ir_datasets
dataset = ir_datasets.load("trec-arabic")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, marked_up_doc>
You can find more details about the Python API here.
ir_datasets export trec-arabic docs
[doc_id] [text] [marked_up_doc]
...
You can find more details about the CLI here.
No example available for PyTerrier
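Beyond sequential iteration, the ir_datasets Python API also offers a docs_store() for random access by doc_id. A minimal sketch, with the document identifier below as a placeholder (use one observed via docs_iter()):

import ir_datasets

dataset = ir_datasets.load("trec-arabic")
docs_store = dataset.docs_store()    # lookup structure for doc_id-based access
doc = docs_store.get("SOME_DOC_ID")  # placeholder doc_id; returns the same namedtuple as docs_iter()
print(doc.text[:200])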
Bibtex:
@misc{Graff2001Arabic,
  title={Arabic Newswire Part 1 LDC2001T55},
  author={Graff, David and Walker, Kevin},
  year={2001},
  url={https://catalog.ldc.upenn.edu/LDC2001T55},
  publisher={Linguistic Data Consortium}
}
Metadata:
{
  "docs": {
    "count": 383872,
    "fields": {"doc_id": {"max_len": 21, "common_prefix": ""}}
  }
}
trec-arabic/ar2001: Arabic benchmark from the TREC 2001 cross-language retrieval track.
Language: ar
Examples:
import ir_datasets
dataset = ir_datasets.load("trec-arabic/ar2001")
for query in dataset.queries_iter():
    query # namedtuple<query_id, title, description, narrative>
You can find more details about the Python API here.
ir_datasets export trec-arabic/ar2001 queries
[query_id] [title] [description] [narrative]
...
You can find more details about the CLI here.
No example available for PyTerrier
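The query fields above can be collected into a simple in-memory lookup keyed by query_id, for example when pairing topics with qrels or run files. A minimal sketch:

import ir_datasets

dataset = ir_datasets.load("trec-arabic/ar2001")
queries = {query.query_id: query for query in dataset.queries_iter()}
# Each value exposes .title, .description and .narrative, as listed above.
print(len(queries))  # 25 topics in this benchmark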
Inherits docs from trec-arabic
Language: ar
Examples:
import ir_datasets
dataset = ir_datasets.load("trec-arabic/ar2001")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, marked_up_doc>
You can find more details about the Python API here.
ir_datasets export trec-arabic/ar2001 docs
[doc_id] [text] [marked_up_doc]
...
You can find more details about the CLI here.
No example available for PyTerrier
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | not relevant | 19K | 81.9% |
1 | relevant | 4.1K | 18.1% |
Examples:
import ir_datasets
dataset = ir_datasets.load("trec-arabic/ar2001")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export trec-arabic/ar2001 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
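Because qrels are exposed as plain namedtuples, simple statistics can be computed directly in Python. A minimal sketch, using only the fields documented above, that counts relevant documents per topic:

from collections import Counter

import ir_datasets

dataset = ir_datasets.load("trec-arabic/ar2001")
relevant_per_query = Counter()
for qrel in dataset.qrels_iter():
    if qrel.relevance > 0:  # 1 = relevant, 0 = not relevant (see the table above)
        relevant_per_query[qrel.query_id] += 1

for query_id, count in sorted(relevant_per_query.items()):
    print(query_id, count)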
Bibtex:
@inproceedings{Gey2001Arabic,
  title={The TREC-2001 Cross-Language Information Retrieval Track: Searching Arabic using English, French or Arabic Queries},
  author={Fredric Gey and Douglas Oard},
  booktitle={TREC},
  year={2001}
}
@misc{Graff2001Arabic,
  title={Arabic Newswire Part 1 LDC2001T55},
  author={Graff, David and Walker, Kevin},
  year={2001},
  url={https://catalog.ldc.upenn.edu/LDC2001T55},
  publisher={Linguistic Data Consortium}
}
Metadata:
{
  "docs": {"count": 383872, "fields": {"doc_id": {"max_len": 21, "common_prefix": ""}}},
  "queries": {"count": 25},
  "qrels": {"count": 22744, "fields": {"relevance": {"counts_by_value": {"0": 18622, "1": 4122}}}}
}
trec-arabic/ar2002: Arabic benchmark from the TREC 2002 Arabic/English CLIR track.
Language: ar
Examples:
import ir_datasets
dataset = ir_datasets.load("trec-arabic/ar2002")
for query in dataset.queries_iter():
    query # namedtuple<query_id, title, description, narrative>
You can find more details about the Python API here.
ir_datasets export trec-arabic/ar2002 queries
[query_id] [title] [description] [narrative]
...
You can find more details about the CLI here.
No example available for PyTerrier
Inherits docs from trec-arabic
Language: ar
Examples:
import ir_datasets
dataset = ir_datasets.load("trec-arabic/ar2002")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, marked_up_doc>
You can find more details about the Python API here.
ir_datasets export trec-arabic/ar2002 docs
[doc_id] [text] [marked_up_doc]
...
You can find more details about the CLI here.
No example available for PyTerrier
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | not relevant | 33K | 84.6% |
1 | relevant | 5.9K | 15.4% |
Examples:
import ir_datasets
dataset = ir_datasets.load("trec-arabic/ar2002")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export trec-arabic/ar2002 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
Bibtex:
@inproceedings{Gey2002Arabic,
  title={The TREC-2002 Arabic/English CLIR Track},
  author={Fredric Gey and Douglas Oard},
  booktitle={TREC},
  year={2002}
}
@misc{Graff2001Arabic,
  title={Arabic Newswire Part 1 LDC2001T55},
  author={Graff, David and Walker, Kevin},
  year={2001},
  url={https://catalog.ldc.upenn.edu/LDC2001T55},
  publisher={Linguistic Data Consortium}
}
Metadata:
{
  "docs": {"count": 383872, "fields": {"doc_id": {"max_len": 21, "common_prefix": ""}}},
  "queries": {"count": 50},
  "qrels": {"count": 38432, "fields": {"relevance": {"counts_by_value": {"0": 32523, "1": 5909}}}}
}