ir_datasets: Tweets 2013 (Internet Archive)
A collection of tweets from a two-month window archived by the Internet Archive. This collection can serve as a stand-in document collection for the TREC Microblog 2013-14 tasks. (Even though it is not exactly the same collection, Sequiera and Lin show that it is close enough.)
This collection is automatically downloaded from the Internet Archive, though download speeds are often slow, so the process takes some time. During the download, ir_datasets constructs a new directory hierarchy to facilitate fast lookups and slices.
Language: multiple/other/unknown
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, text, user_id, created_at, lang, reply_doc_id, retweet_doc_id, source, source_content_type>
You can find more details about the Python API here.
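Because the download builds a directory hierarchy designed for fast lookups and slices, docs_iter() can be sliced efficiently and individual tweets can be fetched by ID through docs_store(). A minimal sketch of both (the tweet ID below is a hypothetical placeholder, not taken from the collection):
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia")
# Fancy slicing: only the requested range of the collection is read from disk
for doc in dataset.docs_iter()[:10]:
    print(doc.doc_id, doc.text)
# Random access by tweet ID via the document store
docs_store = dataset.docs_store()
doc = docs_store.get("0123456789")  # hypothetical tweet ID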
TREC Microblog 2013 test collection.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2013")
for query in dataset.queries_iter():
    query # namedtuple<query_id, query, time, tweet_time>
You can find more details about the Python API here.
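The 2013 test collection also includes relevance judgments; assuming the standard ir_datasets qrels_iter() interface with TREC-style judgment tuples, they can be iterated the same way:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2013")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>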
TREC Microblog 2014 test collection.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2014")
for query in dataset.queries_iter():
    query # namedtuple<query_id, query, time, tweet_time, description>
You can find more details about the Python API here.
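As a usage sketch tying the pieces together (again assuming qrels are exposed through qrels_iter(), as for the 2013 collection), the tweets judged relevant for each 2014 topic can be fetched from the parent collection's document store; get_many() is used so that tweets missing from the Internet Archive crawl are simply skipped:
import ir_datasets
dataset = ir_datasets.load("tweets2013-ia/trec-mb-2014")
docs_store = ir_datasets.load("tweets2013-ia").docs_store()
# Collect the tweet IDs judged relevant for each topic
relevant = {}
for qrel in dataset.qrels_iter():
    if qrel.relevance > 0:
        relevant.setdefault(qrel.query_id, []).append(qrel.doc_id)
for query in dataset.queries_iter():
    ids = relevant.get(query.query_id, [])[:3]
    # get_many() returns only the tweets actually present in the IA collection
    for doc_id, doc in docs_store.get_many(ids).items():
        print(query.query, "->", doc.text)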