ir_datasets: ANTIQUE
"ANTIQUE is a non-factoid question answering dataset based on the questions and answers of Yahoo! Webscope L6."
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("antique")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
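Since docs_iter() yields plain namedtuples, a doc_id → text lookup can be built in a single pass. A minimal sketch using stand-in tuples with made-up doc_ids (real usage would iterate dataset.docs_iter() instead):

```python
from collections import namedtuple

# Stand-in for the namedtuple<doc_id, text> yielded by docs_iter();
# the doc_ids below are illustrative, not real ANTIQUE identifiers.
GenericDoc = namedtuple("GenericDoc", ["doc_id", "text"])

docs = [
    GenericDoc("q1_0", "Because they can get away with it."),
    GenericDoc("q1_1", "It depends on the situation."),
]

# Build a simple in-memory doc_id -> text lookup.
doc_lookup = {doc.doc_id: doc.text for doc in docs}

print(doc_lookup["q1_0"])
```

For large corpora an in-memory dict may not be appropriate, but the same one-pass pattern applies to any docs_iter() stream.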
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique')
# Index antique
indexer = pt.IterDictIndexer('./indices/antique')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Bibtex:
@inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} }

antique/test
Official test set of the ANTIQUE dataset.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/test")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export antique/test queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/test')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from antique
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/test")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export antique/test docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/test')
# Index antique
indexer = pt.IterDictIndexer('./indices/antique')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | It is completely out of context or does not make any sense. |
2 | It does not answer the question or if it does, it provides an unreasonable answer, however, it is not out of context. Therefore, you cannot accept it as an answer to the question. |
3 | It can be an answer to the question, however, it is not sufficiently convincing. There should be an answer with much better quality for the question. |
4 | It looks reasonable and convincing. Its quality is on par with or better than the "Possibly Correct Answer". Note that it does not have to provide the same answer as the "Possibly Correct Answer". |
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/test")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
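Given the graded 1–4 scale above, judgments are often binarized before computing set-based metrics; on this scale, labels 3 and 4 describe acceptable answers. The sketch below uses stand-in qrel tuples and an illustrative cutoff, not part of the dataset API:

```python
from collections import namedtuple

# Stand-in for the namedtuple<query_id, doc_id, relevance, iteration>
# yielded by qrels_iter(); real usage would iterate dataset.qrels_iter().
TrecQrel = namedtuple("TrecQrel", ["query_id", "doc_id", "relevance", "iteration"])

qrels = [
    TrecQrel("q1", "d1", 4, "U0"),
    TrecQrel("q1", "d2", 2, "U0"),
    TrecQrel("q1", "d3", 3, "U0"),
]

# Labels 3 and 4 mark acceptable answers on the 1-4 scale,
# so keep relevance >= 3 as "relevant".
relevant = {(q.query_id, q.doc_id) for q in qrels if q.relevance >= 3}
```

The cutoff is a modeling choice: a stricter evaluation could keep only label 4, which the scale reserves for answers on par with the "Possibly Correct Answer".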
ir_datasets export antique/test qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
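If a downstream tool such as trec_eval expects the standard TREC qrels layout (query_id iteration doc_id relevance), the TSV export can be rearranged in a few lines. This sketch assumes the column order shown above; the helper name and sample rows are illustrative, and the CLI itself may also offer a TREC-style output format:

```python
def tsv_to_trec_qrels(tsv_lines):
    """Rearrange TSV rows (query_id, doc_id, relevance, iteration)
    into TREC qrels lines (query_id iteration doc_id relevance)."""
    out = []
    for line in tsv_lines:
        query_id, doc_id, relevance, iteration = line.rstrip("\n").split("\t")
        out.append(f"{query_id} {iteration} {doc_id} {relevance}")
    return out

sample = ["q1\td1\t4\tQ0", "q1\td2\t2\tQ0"]
print(tsv_to_trec_qrels(sample))  # -> ['q1 Q0 d1 4', 'q1 Q0 d2 2']
```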
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:antique/test')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} }

antique/test/non-offensive
antique/test without a set of queries deemed by the authors of ANTIQUE to be "offensive (and noisy)."
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/test/non-offensive")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export antique/test/non-offensive queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/test/non-offensive')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from antique
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/test/non-offensive")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export antique/test/non-offensive docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/test/non-offensive')
# Index antique
indexer = pt.IterDictIndexer('./indices/antique')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | It is completely out of context or does not make any sense. |
2 | It does not answer the question or if it does, it provides an unreasonable answer, however, it is not out of context. Therefore, you cannot accept it as an answer to the question. |
3 | It can be an answer to the question, however, it is not sufficiently convincing. There should be an answer with much better quality for the question. |
4 | It looks reasonable and convincing. Its quality is on par with or better than the "Possibly Correct Answer". Note that it does not have to provide the same answer as the "Possibly Correct Answer". |
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/test/non-offensive")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export antique/test/non-offensive qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:antique/test/non-offensive')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} }

antique/train
Official train set of the ANTIQUE dataset.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export antique/train queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/train')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from antique
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export antique/train docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/train')
# Index antique
indexer = pt.IterDictIndexer('./indices/antique')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | It is completely out of context or does not make any sense. |
2 | It does not answer the question or if it does, it provides an unreasonable answer, however, it is not out of context. Therefore, you cannot accept it as an answer to the question. |
3 | It can be an answer to the question, however, it is not sufficiently convincing. There should be an answer with much better quality for the question. |
4 | It looks reasonable and convincing. Its quality is on par with or better than the "Possibly Correct Answer". Note that it does not have to provide the same answer as the "Possibly Correct Answer". |
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export antique/train qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:antique/train')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} }

antique/train/split200-train
antique/train without the 200 queries used by antique/train/split200-valid.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/train/split200-train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export antique/train/split200-train queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/train/split200-train')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from antique
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/train/split200-train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export antique/train/split200-train docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/train/split200-train')
# Index antique
indexer = pt.IterDictIndexer('./indices/antique')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | It is completely out of context or does not make any sense. |
2 | It does not answer the question or if it does, it provides an unreasonable answer, however, it is not out of context. Therefore, you cannot accept it as an answer to the question. |
3 | It can be an answer to the question, however, it is not sufficiently convincing. There should be an answer with much better quality for the question. |
4 | It looks reasonable and convincing. Its quality is on par with or better than the "Possibly Correct Answer". Note that it does not have to provide the same answer as the "Possibly Correct Answer". |
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/train/split200-train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export antique/train/split200-train qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:antique/train/split200-train')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} }

antique/train/split200-valid
A held-out subset of 200 queries from antique/train. Use in conjunction with antique/train/split200-train.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/train/split200-valid")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export antique/train/split200-valid queries
[query_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/train/split200-valid')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
Language: en
Note: Uses docs from antique
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/train/split200-valid")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text>
You can find more details about the Python API here.
ir_datasets export antique/train/split200-valid docs
[doc_id] [text]
...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:antique/train/split200-valid')
# Index antique
indexer = pt.IterDictIndexer('./indices/antique')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['text'])
You can find more details about PyTerrier indexing here.
Relevance levels
Rel. | Definition |
---|---|
1 | It is completely out of context or does not make any sense. |
2 | It does not answer the question or if it does, it provides an unreasonable answer, however, it is not out of context. Therefore, you cannot accept it as an answer to the question. |
3 | It can be an answer to the question, however, it is not sufficiently convincing. There should be an answer with much better quality for the question. |
4 | It looks reasonable and convincing. Its quality is on par with or better than the "Possibly Correct Answer". Note that it does not have to provide the same answer as the "Possibly Correct Answer". |
Examples:
import ir_datasets
dataset = ir_datasets.load("antique/train/split200-valid")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export antique/train/split200-valid qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:antique/train/split200-valid')
index_ref = pt.IndexRef.of('./indices/antique') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
Bibtex:
@inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} }