---
license: apache-2.0
language:
- en
tags:
- python
- code-search
- text-to-code
- code-to-text
- source-code
dataset_info:
  features:
  - name: code
    dtype: string
  - name: docstring
    dtype: string
  - name: func_name
    dtype: string
  - name: language
    dtype: string
  - name: repo
    dtype: string
  - name: path
    dtype: string
  - name: url
    dtype: string
  - name: license
    dtype: string
  splits:
  - name: train
    num_bytes: 2074591941
    num_examples: 1083527
  - name: validation
    num_bytes: 32416009
    num_examples: 18408
  - name: test
    num_bytes: 32318605
    num_examples: 17552
  download_size: 563535797
  dataset_size: 2139326555
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Python CodeSearch Dataset (Shuu12121/python-treesitter-filtered-datasetsV2)
## Dataset Description
This dataset contains Python functions paired with their documentation strings (docstrings), extracted from open-source Python repositories on GitHub.
It is formatted similarly to the CodeSearchNet challenge dataset.
Each entry includes the following fields (an inspection example follows the list):
- `code`: The source code of a Python function or method.
- `docstring`: The docstring associated with the function/method.
- `func_name`: The name of the function/method.
- `language`: The programming language (always "python").
- `repo`: The GitHub repository from which the code was sourced (e.g., "owner/repo").
- `path`: The file path within the repository where the function/method is located.
- `url`: A direct URL to the function/method's source file on GitHub (approximated to master/main branch).
- `license`: The SPDX identifier of the license governing the source repository (e.g., "MIT", "Apache-2.0").
Additional metrics, when available (computed with the `lizard` tool):
- `ccn`: Cyclomatic Complexity Number.
- `params`: Number of parameters of the function/method.
- `nloc`: Non-commenting lines of code.
- `token_count`: Number of tokens in the function/method.
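Individual records can be inspected directly with the `datasets` library. Note that the optional metric columns are not part of the declared schema above, so it is worth checking `features` before relying on them. A minimal sketch:
```python
from datasets import load_dataset

# Load only the validation split to keep the download small
ds = load_dataset("Shuu12121/python-treesitter-filtered-datasetsV2", split="validation")

# Declared schema and the core fields of one record
print(ds.features)
example = ds[0]
print(example["func_name"], "-", example["repo"], example["path"], example["license"])
print(example["docstring"][:200])
```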
## Dataset Structure
The dataset is divided into the following splits (see the loading snippet below):
- `train`: 1,083,527 examples
- `validation`: 18,408 examples
- `test`: 17,552 examples
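Loading all three splits and checking their sizes against the counts above is straightforward:
```python
from datasets import load_dataset

dataset = load_dataset("Shuu12121/python-treesitter-filtered-datasetsV2")
for split_name, split in dataset.items():
    print(f"{split_name}: {len(split):,} examples")
```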
## Data Collection
The data was collected by the following steps (a simplified extraction sketch is shown after the list):
1. Identifying popular and relevant Python repositories on GitHub.
2. Cloning these repositories.
3. Parsing Python files (`.py`) using tree-sitter to extract functions/methods and their docstrings.
4. Filtering functions/methods based on code length and presence of a non-empty docstring.
5. Using the `lizard` tool to calculate code metrics (CCN, NLOC, params).
6. Storing the extracted data in JSONL format, including repository and license information.
7. Splitting the data by repository to ensure no data leakage between train, validation, and test sets.
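The sketch below illustrates steps 3-5 and 7 in simplified form. It uses Python's built-in `ast` module in place of tree-sitter and the `lizard` package for metrics; it is not the exact pipeline used to build this dataset, and the split boundaries in `assign_split` are an assumption.
```python
import ast
import hashlib
import lizard  # pip install lizard

def extract_functions(source: str):
    """Yield (func_name, code, docstring) for documented functions (steps 3-4)."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            docstring = ast.get_docstring(node)
            if not docstring:  # keep only functions with a non-empty docstring
                continue
            yield node.name, ast.get_source_segment(source, node), docstring

def code_metrics(path: str, source: str):
    """Per-function CCN / NLOC / params / token counts via lizard (step 5)."""
    analysis = lizard.analyze_file.analyze_source_code(path, source)
    return {
        f.name: {
            "ccn": f.cyclomatic_complexity,
            "nloc": f.nloc,
            "params": len(f.parameters),
            "token_count": f.token_count,
        }
        for f in analysis.function_list
    }

def assign_split(repo: str) -> str:
    """Hash the repository name so a repo never spans two splits (step 7).

    The bucket boundaries here are illustrative, not the original ones.
    """
    bucket = int(hashlib.sha1(repo.encode("utf-8")).hexdigest(), 16) % 100
    if bucket < 96:
        return "train"
    return "validation" if bucket < 98 else "test"
```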
## Intended Use
This dataset can be used for tasks such as the following (a minimal example of building training pairs is shown below):
- Training and evaluating models for code search (natural language to code).
- Code summarization / docstring generation (code to natural language).
- Studies on Python code practices and documentation habits.
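For code search, a simple (query, code) training pair can be built by taking the first line of each docstring as the natural-language query. This is an illustrative sketch, not a prescribed recipe:
```python
from datasets import load_dataset

ds = load_dataset("Shuu12121/python-treesitter-filtered-datasetsV2", split="train")

def to_pair(example):
    # The first docstring line serves as a short natural-language query
    query = example["docstring"].splitlines()[0].strip()
    return {"query": query, "positive_code": example["code"]}

pairs = ds.map(to_pair, remove_columns=ds.column_names)
print(pairs[0])
```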
## Licensing
The code examples within this dataset are sourced from repositories with permissive licenses (typically MIT, Apache-2.0, BSD).
Each sample includes its original license information in the `license` field.
The dataset compilation itself is provided under the Apache-2.0 license (see the metadata above),
but users should respect the original licenses of the underlying code.
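Because every record carries its own `license` field, a subset restricted to a particular license can be built with a simple filter. A sketch (the exact SPDX spellings present in the data should be checked first):
```python
from datasets import load_dataset

ds = load_dataset("Shuu12121/python-treesitter-filtered-datasetsV2", split="test")

# See which licenses occur, then keep only MIT-licensed samples
print(sorted({lic for lic in ds["license"] if lic}))
mit_only = ds.filter(lambda ex: ex["license"] == "MIT")
print(f"{len(mit_only)} of {len(ds)} test examples come from MIT-licensed repositories")
```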
## Example Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("Shuu12121/python-treesitter-filtered-datasetsV2")
# Access a split (e.g., train)
train_data = dataset["train"]
# Print the first example
print(train_data[0])
```