Shuu12121 committed (verified)
Commit e2020a2 · Parent(s): 3b6bfad

Upload README.md with huggingface_hub

Files changed (1): README.md (+77, −0)
---
license: apache-2.0
language:
- en
tags:
- python
- code-search
- text-to-code
- code-to-text
- source-code
---

# Python CodeSearch Dataset (Shuu12121/python-treesitter-filtered-datasetsV2)

## Dataset Description
This dataset contains Python functions paired with their documentation strings (docstrings), extracted from open-source Python repositories on GitHub. It is formatted similarly to the CodeSearchNet challenge dataset.

Each entry includes:
- `code`: The source code of a Python function or method.
- `docstring`: The docstring associated with the function/method.
- `func_name`: The name of the function/method.
- `language`: The programming language (always `"python"`).
- `repo`: The GitHub repository from which the code was sourced (e.g., `"owner/repo"`).
- `path`: The file path within the repository where the function/method is located.
- `url`: A direct URL to the function/method's source file on GitHub (approximated to the master/main branch).
- `license`: The SPDX identifier of the license governing the source repository (e.g., `"MIT"`, `"Apache-2.0"`).

Additional metrics, if available (computed with the Lizard tool):
- `ccn`: Cyclomatic complexity number.
- `params`: Number of parameters of the function/method.
- `nloc`: Non-commenting lines of code.
- `token_count`: Number of tokens in the function/method.

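As a sketch of how these fields can be used together, the snippet below builds a predicate over the metric fields (e.g., to keep only short, simple functions). The record values are invented for illustration; the predicate is shaped so it could be passed to `datasets.Dataset.filter`.

```python
# Illustrative record matching the schema above (values are invented,
# not taken from the actual dataset).
record = {
    "code": 'def add(a, b):\n    """Return the sum of a and b."""\n    return a + b',
    "docstring": "Return the sum of a and b.",
    "func_name": "add",
    "language": "python",
    "repo": "owner/repo",
    "path": "src/math_utils.py",
    "url": "https://github.com/owner/repo/blob/main/src/math_utils.py",
    "license": "MIT",
    "ccn": 1,
    "params": 2,
    "nloc": 3,
    "token_count": 17,
}

def is_simple(example: dict, max_ccn: int = 5, max_nloc: int = 30) -> bool:
    """Predicate over the Lizard metric fields; usable with Dataset.filter."""
    return example["ccn"] <= max_ccn and example["nloc"] <= max_nloc

print(is_simple(record))  # → True
```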
## Dataset Structure
The dataset is divided into the following splits:

- `train`: 1,083,527 examples
- `validation`: 18,408 examples
- `test`: 17,552 examples

## Data Collection
The data was collected by:

1. Identifying popular and relevant Python repositories on GitHub.
2. Cloning these repositories.
3. Parsing Python files (`.py`) with tree-sitter to extract functions/methods and their docstrings.
4. Filtering functions/methods based on code length and the presence of a non-empty docstring.
5. Using the `lizard` tool to calculate code metrics (CCN, NLOC, number of parameters).
6. Storing the extracted data in JSONL format, including repository and license information.
7. Splitting the data by repository to ensure no data leakage between the train, validation, and test sets.

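Step 7 (repository-level splitting) can be sketched as follows. This is a hypothetical illustration of the general technique, not the authors' actual procedure: hashing the repository name makes the assignment deterministic, so every function from a given repo lands in the same split.

```python
import hashlib

def assign_split(repo: str, val_frac: float = 0.015, test_frac: float = 0.015) -> str:
    """Deterministically assign a repository to a split by hashing its name.

    All functions extracted from the same repo get the same split, which
    prevents train/test leakage of near-duplicate code.
    """
    digest = hashlib.sha256(repo.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # pseudo-uniform in [0, 1)
    if bucket < test_frac:
        return "test"
    if bucket < test_frac + val_frac:
        return "validation"
    return "train"

print(assign_split("owner/repo"))
```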
## Intended Use
This dataset can be used for tasks such as:

- Training and evaluating models for code search (natural language to code).
- Code summarization / docstring generation (code to natural language).
- Studies on Python code practices and documentation habits.

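As a toy sketch of the code-search use case (not an official API): treat each `docstring` as the document for its function and rank functions by word overlap with a natural-language query. The records below are invented for the example; a real system would use a learned retrieval model.

```python
def tokens(text: str) -> set:
    """Lowercase, punctuation-stripped word set."""
    return {w.strip(".,").lower() for w in text.split()}

def overlap_score(query: str, docstring: str) -> float:
    """Fraction of query words that appear in the docstring."""
    q, d = tokens(query), tokens(docstring)
    return len(q & d) / len(q) if q else 0.0

# Illustrative corpus using the `func_name`/`docstring` fields described above.
corpus = [
    {"func_name": "add", "docstring": "Return the sum of two numbers."},
    {"func_name": "read_file", "docstring": "Read a text file and return its contents."},
]

query = "sum of two numbers"
best = max(corpus, key=lambda ex: overlap_score(query, ex["docstring"]))
print(best["func_name"])  # → add
```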
## Licensing
The code examples within this dataset are sourced from repositories with permissive licenses (typically MIT, Apache-2.0, or BSD). Each sample includes its original license information in the `license` field. The dataset compilation itself is distributed under Apache-2.0 (per the dataset metadata above), but users should respect the original licenses of the underlying code.

## Example Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Shuu12121/python-treesitter-filtered-datasetsV2")

# Access a split (e.g., train)
train_data = dataset["train"]

# Print the first example
print(train_data[0])
```