ir_datasets: Tweets 2013 (Internet Archive)
A collection of tweets from a 2-month window archived by the Internet Archive. This collection can serve as a stand-in document collection for the TREC Microblog 2013-14 tasks. (Even though it is not exactly the same collection, Sequiera and Lin show that it is close enough.)
This collection is automatically downloaded from the Internet Archive; download speeds are often slow, so it can take some time. During the download, ir_datasets constructs a new directory hierarchy to facilitate fast lookups and slices.
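For example, once the collection is downloaded, that directory hierarchy supports random access by tweet ID and slicing of the document iterator. A minimal sketch (the lookup ID below is a placeholder):

import ir_datasets
dataset = ir_datasets.load("tweets2013-ia")

# Random access by tweet ID through the docs store built during download.
docs_store = dataset.docs_store()
# doc = docs_store.get("<tweet_id>")  # placeholder ID; returns the doc namedtuple

# Slice the document iterator without scanning from the beginning.
for doc in dataset.docs_iter()[:10]:
    print(doc.doc_id, doc.created_at)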
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, text, user_id, created_at, lang, reply_doc_id, retweet_doc_id, source, source_content_type>
You can find more details about the Python API here.
ir_datasets export tweets2013-ia docs
[doc_id] [text] [user_id] [created_at] [lang] [reply_doc_id] [retweet_doc_id] [source] [source_content_type]
...
You can find more details about the CLI here.
No example available for PyTerrier
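The reply_doc_id and retweet_doc_id fields in the document tuple above link a tweet to the tweet it replies to or retweets, so chains can be followed through the docs store. A rough sketch, assuming missing IDs raise a lookup error (the referenced tweet may not have been archived):

import ir_datasets
dataset = ir_datasets.load("tweets2013-ia")
docs_store = dataset.docs_store()

def resolve_retweet(doc):
    # If this tweet is a retweet, try to fetch the original from the store.
    if doc.retweet_doc_id:
        try:
            return docs_store.get(doc.retweet_doc_id)
        except KeyError:
            pass  # original tweet not in the collection
    return doc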
Bibtex:
@inproceedings{Sequiera2017TweetsIA, title={Finally, a Downloadable Test Collection of Tweets}, author={Royal Sequiera and Jimmy Lin}, booktitle={SIGIR}, year={2017} }

tweets2013-ia/trec-mb-2013
TREC Microblog 2013 test collection.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2013")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, query, time, tweet_time>
You can find more details about the Python API here.
ir_datasets export tweets2013-ia/trec-mb-2013 queries
[query_id] [query] [time] [tweet_time]
...
You can find more details about the CLI here.
No example available for PyTerrier
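Each topic carries time and tweet_time fields, which the Microblog track uses to restrict retrieval to tweets posted at or before the query time. A sketch of loading the topics into a lookup table (the exact formats of the time fields are not specified here, so the temporal filter is only indicated in a comment):

import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2013")

# query_id -> topic namedtuple (query_id, query, time, tweet_time)
topics = {q.query_id: q for q in dataset.queries_iter()}

for query_id, topic in list(topics.items())[:3]:
    print(query_id, topic.query, topic.time, topic.tweet_time)
    # A run for this track would only score tweets posted at or before
    # topic.tweet_time (field formats assumed comparable to doc.created_at).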
Language: multiple/other/unknown
Note: Uses docs from tweets2013-ia
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2013")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, text, user_id, created_at, lang, reply_doc_id, retweet_doc_id, source, source_content_type>
You can find more details about the Python API here.
ir_datasets export tweets2013-ia/trec-mb-2013 docs
[doc_id] [text] [user_id] [created_at] [lang] [reply_doc_id] [retweet_doc_id] [source] [source_content_type]
...
You can find more details about the CLI here.
No example available for PyTerrier
Relevance levels
Rel. | Definition |
---|---|
0 | not relevant |
1 | relevant |
2 | highly relevant |
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2013")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export tweets2013-ia/trec-mb-2013 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
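For evaluation, the graded qrels can be collected into the nested query_id -> doc_id -> relevance mapping that most evaluation tools expect. A small sketch using only the fields shown above:

import ir_datasets
from collections import defaultdict

dataset = ir_datasets.load("tweets2013-ia/trec-mb-2013")

# query_id -> {doc_id: relevance}, with grades 0 (not relevant),
# 1 (relevant), and 2 (highly relevant) as in the table above.
qrels = defaultdict(dict)
for qrel in dataset.qrels_iter():
    qrels[qrel.query_id][qrel.doc_id] = qrel.relevance

# Example: number of highly relevant tweets per topic.
highly_relevant = {qid: sum(1 for rel in docs.values() if rel >= 2)
                   for qid, docs in qrels.items()}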
Bibtex:
@inproceedings{Lin2013Microblog, title={Overview of the TREC-2013 Microblog Track}, author={Jimmy Lin and Miles Efron}, booktitle={TREC}, year={2013} }
@inproceedings{Sequiera2017TweetsIA, title={Finally, a Downloadable Test Collection of Tweets}, author={Royal Sequiera and Jimmy Lin}, booktitle={SIGIR}, year={2017} }

tweets2013-ia/trec-mb-2014
TREC Microblog 2014 test collection.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2014")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, query, time, tweet_time, description>
You can find more details about the Python API here.
ir_datasets export tweets2013-ia/trec-mb-2014 queries
[query_id] [query] [time] [tweet_time] [description]
...
You can find more details about the CLI here.
No example available for PyTerrier
Language: multiple/other/unknown
Note: Uses docs from tweets2013-ia
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2014")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, text, user_id, created_at, lang, reply_doc_id, retweet_doc_id, source, source_content_type>
You can find more details about the Python API here.
ir_datasets export tweets2013-ia/trec-mb-2014 docs
[doc_id] [text] [user_id] [created_at] [lang] [reply_doc_id] [retweet_doc_id] [source] [source_content_type]
...
You can find more details about the CLI here.
No example available for PyTerrier
Relevance levels
Rel. | Definition |
---|---|
0 | not relevant |
1 | relevant |
2 | highly relevant |
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2014")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export tweets2013-ia/trec-mb-2014 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration]
...
You can find more details about the CLI here.
No example available for PyTerrier
Bibtex:
@inproceedings{Lin2014Microblog, title={Overview of the TREC-2014 Microblog Track}, author={Jimmy Lin and Miles Efron and Yulu Wang and Garrick Sherman}, booktitle={TREC}, year={2014} }
@inproceedings{Sequiera2017TweetsIA, title={Finally, a Downloadable Test Collection of Tweets}, author={Royal Sequiera and Jimmy Lin}, booktitle={SIGIR}, year={2017} }