GitHub: datasets/codesearchnet.py

ir_datasets: CodeSearchNet

Index
  1. codesearchnet
  2. codesearchnet/challenge
  3. codesearchnet/test
  4. codesearchnet/train
  5. codesearchnet/valid

"codesearchnet"

A benchmark for semantic code search.

docs
2.1M docs

Language: multiple/other/unknown

Document type:
CodeSearchNetDoc: (namedtuple)
  1. doc_id: str
  2. repo: str
  3. path: str
  4. func_name: str
  5. code: str
  6. language: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, repo, path, func_name, code, language>

You can find more details about the Python API here.
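
Since the corpus mixes several programming languages, a common first step is filtering on the documented `language` field. A minimal sketch with stand-in records shaped like CodeSearchNetDoc (the ids, repo, and path values here are hypothetical; a real pass would iterate `dataset.docs_iter()`):

```python
from collections import namedtuple

# Stand-in mirroring the CodeSearchNetDoc fields documented above
CodeSearchNetDoc = namedtuple(
    "CodeSearchNetDoc",
    ["doc_id", "repo", "path", "func_name", "code", "language"],
)

docs = [
    CodeSearchNetDoc("d1", "org/repo", "a.py", "f", "def f(): pass", "python"),
    CodeSearchNetDoc("d2", "org/repo", "b.go", "g", "func g() {}", "go"),
]

# Keep a single-language subset of the multi-language corpus
python_docs = [d for d in docs if d.language == "python"]
print([d.doc_id for d in python_docs])  # ['d1']
```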

CLI
ir_datasets export codesearchnet docs
[doc_id]    [repo]    [path]    [func_name]    [code]    [language]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier

Citation

ir_datasets.bib:

\cite{Husain2019CodeSearchNet}

Bibtex:

@article{Husain2019CodeSearchNet,
  title={CodeSearchNet Challenge: Evaluating the State of Semantic Code Search},
  author={Hamel Husain and Ho-Hsiang Wu and Tiferet Gazit and Miltiadis Allamanis and Marc Brockschmidt},
  journal={ArXiv},
  year={2019}
}

"codesearchnet/challenge"

Official challenge set, with keyword queries and deep relevance assessments.

queries
99 queries

Language: multiple/other/unknown

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/challenge")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/challenge queries
[query_id]    [text]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier

docs
2.1M docs

Inherits docs from codesearchnet

Language: multiple/other/unknown

Document type:
CodeSearchNetDoc: (namedtuple)
  1. doc_id: str
  2. repo: str
  3. path: str
  4. func_name: str
  5. code: str
  6. language: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/challenge")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, repo, path, func_name, code, language>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/challenge docs
[doc_id]    [repo]    [path]    [func_name]    [code]    [language]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier

qrels
4.0K qrels

Query relevance judgment type:
CodeSearchNetChallengeQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: str
  4. note: str

Relevance levels

Rel.  Definition    Count  %
  0   Irrelevant     1.3K  32.8%
  1   Weak Match      982  24.5%
  2   String Match    863  21.5%
  3   Exact Match     847  21.1%
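
Note that `relevance` is typed as a str in CodeSearchNetChallengeQrel, so it usually needs casting to int before evaluation. A sketch using hypothetical rows in the namedtuple's field order:

```python
from collections import defaultdict

# Hypothetical (query_id, doc_id, relevance, note) rows;
# relevance arrives as a string in this dataset
rows = [
    ("q1", "d1", "3", "Exact Match"),
    ("q1", "d2", "0", "Irrelevant"),
]

# Nest into {query_id: {doc_id: int_relevance}}, the shape most
# evaluation tools expect
qrels = defaultdict(dict)
for query_id, doc_id, relevance, note in rows:
    qrels[query_id][doc_id] = int(relevance)

print(qrels["q1"]["d1"])  # 3
```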

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/challenge")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, note>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/challenge qrels --format tsv
[query_id]    [doc_id]    [relevance]    [note]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier


"codesearchnet/test"

Official test set, using queries inferred from docstrings.

queries
101K queries

Language: en

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/test")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/test queries
[query_id]    [text]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier

docs
2.1M docs

Inherits docs from codesearchnet

Language: multiple/other/unknown

Document type:
CodeSearchNetDoc: (namedtuple)
  1. doc_id: str
  2. repo: str
  3. path: str
  4. func_name: str
  5. code: str
  6. language: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/test")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, repo, path, func_name, code, language>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/test docs
[doc_id]    [repo]    [path]    [func_name]    [code]    [language]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier

qrels
101K qrels

Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition         Count  %
  1   Matches docstring   101K  100.0%
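
Because every test query has exactly one judged-relevant document (relevance 1), reciprocal rank is a natural per-query score: 1 over the rank at which the relevant function appears. A sketch with a hypothetical ranking:

```python
def reciprocal_rank(ranking, relevant_doc_id):
    # 1 / rank of the single relevant doc; 0.0 if it was not retrieved
    for rank, doc_id in enumerate(ranking, start=1):
        if doc_id == relevant_doc_id:
            return 1.0 / rank
    return 0.0

# Hypothetical system output for one query; "d4" is the qrel'd doc
print(reciprocal_rank(["d9", "d4", "d1"], "d4"))  # 0.5
print(reciprocal_rank(["d9", "d1"], "d4"))        # 0.0
```

Averaging this over all queries gives MRR, the metric commonly reported on this split.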

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/test")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/test qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier


"codesearchnet/train"

Official train set, using queries inferred from docstrings.

queries
1.9M queries

Language: en

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/train")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/train queries
[query_id]    [text]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier

docs
2.1M docs

Inherits docs from codesearchnet

Language: multiple/other/unknown

Document type:
CodeSearchNetDoc: (namedtuple)
  1. doc_id: str
  2. repo: str
  3. path: str
  4. func_name: str
  5. code: str
  6. language: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/train")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, repo, path, func_name, code, language>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/train docs
[doc_id]    [repo]    [path]    [func_name]    [code]    [language]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier

qrels
1.9M qrels

Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition         Count  %
  1   Matches docstring   1.9M  100.0%
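
For training, each qrel links a docstring-derived query to the function it came from, so positive (query text, code) pairs can be assembled by joining queries, docs, and qrels on their ids. A sketch with small in-memory stand-ins (the real records would come from `queries_iter()`, `docs_iter()`, and `qrels_iter()`; all ids and texts here are hypothetical):

```python
# Stand-ins keyed the same way as the dataset's records
queries = {"q1": "sort a list of numbers"}
docs = {"d1": "def sort_nums(xs):\n    return sorted(xs)"}
qrels = [("q1", "d1", 1, "0")]  # (query_id, doc_id, relevance, iteration)

# Join on ids to collect the positive training pairs
pairs = [
    (queries[qid], docs[did])
    for qid, did, rel, _ in qrels
    if rel > 0
]
print(pairs[0][0])  # sort a list of numbers
```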

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/train")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/train qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier


"codesearchnet/valid"

Official validation set, using queries inferred from docstrings.

queries
89K queries

Language: en

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/valid")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/valid queries
[query_id]    [text]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier

docs
2.1M docs

Inherits docs from codesearchnet

Language: multiple/other/unknown

Document type:
CodeSearchNetDoc: (namedtuple)
  1. doc_id: str
  2. repo: str
  3. path: str
  4. func_name: str
  5. code: str
  6. language: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/valid")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, repo, path, func_name, code, language>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/valid docs
[doc_id]    [repo]    [path]    [func_name]    [code]    [language]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier

qrels
89K qrels

Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition         Count  %
  1   Matches docstring    89K  100.0%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("codesearchnet/valid")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export codesearchnet/valid qrels --format tsv
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier

No example available for PyTerrier
