Mir-2002 committed on
Commit e439957 · verified · 1 parent: 6cac3f3

Update README.md

Files changed (1):
  1. README.md +74 -70
README.md CHANGED
@@ -12,7 +12,7 @@ size_categories:
 
 # Overview
 
- This dataset contains 25,000+ rows of code-docstring-ast data along with additional metadata. Data was gathered from various Python libraries and frameworks and their
  publicly available GitHub repos. This dataset was created for the purpose of training the [CodeT5+](https://arxiv.org/abs/2305.07922) transformer on AST-enhanced code-to-doc tasks.
 
 # Sources
@@ -61,55 +61,52 @@ An instance of the dataset is as follows:
  <docstring> : <The corresponding docstring of the code>,
  <type> : <Whether it's a function, method, or class>,
  <file_path> : <The relative path of the file containing the function>,
- <ast_sequence> : <The AST sequence of the raw source code. Scroll down for more info about this>
 }
 ```
 
- # The AST Sequence
 
- A function recursively converts the AST into a linear sequence. It uses depth markers (├1>, └2>, etc.) to show parent-child relationships and pairs
- each node type with a meaningful identifier. Irrelevant and shallow identifiers are pruned to denoise the dataset.
 
- Here's an example of how the AST sequence is generated:
 
- Example Code
 ```
- def calculate_area(radius):
-     """
-     Calculate the area of a circle.
-
-     Parameters:
-     radius (float): The radius of the circle
-
-     Returns:
-     float: The area of the circle
-     """
-     PI = 3.14159
-     area = PI * radius * radius
-     return area
 ```
 
- Resulting AST Sequence
 ```
- FunctionDef:calculate_area
- ├1> args:[radius]
- ├1> Assign:PI
- │ └2> Constant:
- ├1> Assign:area
- └2> BinOp:
- │ ├3> BinOp:
- │ │ ├4> Name:PI
- │ │ └4> Name:radius
- │ └3> Name:radius
- └1> Return:
- └2> Name:area
 ```
 
- 1. The code is parsed via Python's `ast` module
- 2. A method traverses this tree and linearizes the sequence
- 3. Each node is then converted to a string with type-identifier keys
- 4. Structural relationships are preserved using the depth markers
- 5. Denoising of irrelevant and shallow nodes is applied
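The steps above can be sketched as follows (a minimal illustration; the marker and pruning details in the dataset's actual code may differ):

```python
import ast

def linearize(node: ast.AST, depth: int = 0) -> list:
    """Recursively convert an AST into a linear sequence with depth markers."""
    # Pair each node type with a meaningful identifier when one exists.
    ident = getattr(node, "name", None) or getattr(node, "id", None) or ""
    label = f"{type(node).__name__}:{ident}"
    if depth == 0:
        lines = [label]
    else:
        # Depth markers (├1>, ├2>, ...) encode the parent-child relationship.
        lines = ["│ " * (depth - 1) + f"├{depth}> {label}"]
    for child in ast.iter_child_nodes(node):
        lines.extend(linearize(child, depth + 1))
    return lines

tree = ast.parse("def calculate_area(radius):\n    return 3.14159 * radius * radius")
print("\n".join(linearize(tree.body[0])))
```

This sketch omits the pruning of irrelevant and shallow nodes mentioned in step 5.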
 
 # Preprocessing
 
@@ -118,46 +115,53 @@ The dataset generally follows [CodeBERT's code2nl](https://github.com/microsoft/
  - Removed comments from the code
  - Removed examples where code cannot be parsed into an AST
- - Removed examples where the number of docstring tokens is < 3 or > 256
  - Removed examples whose docstrings contain special tokens (e.g. <img ...> or https:...)
  - Removed examples whose docstrings are not English
 
125
  Furthermore, the following cleaning steps specific to this dataset were applied:
126
- - Docstrings from source codes was stripped
127
- - Standard text cleaning procedures (normalized whitespaces and special characters)
128
- - Removed abnormally long AST sequences (>100)
129
- - Structural pruning of the AST sequence (removing irrelevant or shallow identifiers/nodes)
130
 
131
  # Final Statistics
132
 
133
- ## Final Count : 25,489
134
-
135
- ## Average Docstring Length : 51
136
-
137
- ## Average Source Code Length : 591
138
-
139
- ## Average AST Sequence Length : 72
140
-
141
- ## Type Distribution
142
-
143
- **Methods: 13,149(52.1%)**
144
-
145
- **Functions: 8,671(34.3%)**
146
-
147
- **Classes: 3,444(13.6%)**
148
-
149
- ## Major Contributors
150
-
151
- **TensorFlow (5,455)**
152
-
153
- **PyTorch (5,448)**
154
-
155
- **Matplotlib (2,190)**
156
-
157
- **Django (2,138)**
 
 
 
 
 
 
 
 
 
158
 
159
- **Scipy (1,715)**
160
 
161
  # NOTE
162
 
163
- This dataset is *imperfect*. Due to the varying degrees of code complexity, some fields are inaccurate representations.
 
 
 # Overview
 
+ This dataset contains 22,000+ rows of code-docstring-ast data along with additional metadata. Data was gathered from various Python libraries and frameworks and their
  publicly available GitHub repos. This dataset was created for the purpose of training the [CodeT5+](https://arxiv.org/abs/2305.07922) transformer on AST-enhanced code-to-doc tasks.
 
 # Sources
 
  <docstring> : <The corresponding docstring of the code>,
  <type> : <Whether it's a function, method, or class>,
  <file_path> : <The relative path of the file containing the function>,
+ <ast_data> : <A flattened representation of the parsed AST. For more info, see the section below>
 }
 ```
 
+ # The AST Data
 
+ The extractor keeps only the node types relevant for docstring generation, given by this set:
 
+ ```
+ KEEP_NODES = {
+     'FunctionDef', 'AsyncFunctionDef', 'ClassDef',
+     'arguments', 'arg', 'Return',
+     'If', 'For', 'While', 'Try', 'With',
+     'Assign', 'Call',
+     'Raise', 'ExceptHandler',
+     'decorator', 'bases',
+     'Compare', 'BoolOp'
+ }
+ ```
+
+ Everything else is discarded. For example:
 
+ **Source Code**
 ```
+ def tox_append_version_info() -> str: return '[toxfile]'
 ```
 
+ **Resulting AST Dictionary**
 ```
+ "ast_data": {
+     "type": "FunctionDef",
+     "children": [
+         {
+             "type": "arguments",
+             "args": []
+         },
+         {
+             "type": "Return",
+             "has_value": true
+         }
+     ],
+     "name": "tox_append_version_info"
+ }
 ```
 
+ This dictionary is then flattened via a helper function; the result looks something like `FunctionDef name:tox_append_version_info arguments Return return:yes`.
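A minimal sketch of how such an extractor and flattening helper might look (illustrative; the dataset's actual code may differ):

```python
import ast

# Node types kept by the extractor (copied from the set above; 'decorator'
# and 'bases' are not ast class names, so this sketch never matches them).
KEEP_NODES = {
    'FunctionDef', 'AsyncFunctionDef', 'ClassDef',
    'arguments', 'arg', 'Return',
    'If', 'For', 'While', 'Try', 'With',
    'Assign', 'Call',
    'Raise', 'ExceptHandler',
    'decorator', 'bases',
    'Compare', 'BoolOp',
}

def extract(node: ast.AST) -> dict:
    """Build a nested dict, keeping only children whose type is in KEEP_NODES."""
    data = {"type": type(node).__name__}
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
        data["name"] = node.name
    if isinstance(node, ast.Return):
        data["has_value"] = node.value is not None
    children = [extract(c) for c in ast.iter_child_nodes(node)
                if type(c).__name__ in KEEP_NODES]
    if children:
        data["children"] = children
    return data

def flatten(data: dict) -> str:
    """Flatten the nested dict into a space-separated token sequence."""
    parts = [data["type"]]
    if "name" in data:
        parts.append(f"name:{data['name']}")
    if data.get("has_value"):
        parts.append("return:yes")
    parts.extend(flatten(c) for c in data.get("children", []))
    return " ".join(parts)

func = ast.parse("def tox_append_version_info() -> str: return '[toxfile]'").body[0]
print(flatten(extract(func)))
# FunctionDef name:tox_append_version_info arguments Return return:yes
```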
 
 # Preprocessing
 
  - Removed comments from the code
  - Removed examples where code cannot be parsed into an AST
  - Removed examples whose docstrings contain special tokens (e.g. <img ...> or https:...)
  - Removed examples whose docstrings are not English
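The parseability filter in the list above can be sketched with Python's `ast` module:

```python
import ast

def parses_to_ast(source: str) -> bool:
    """Return True if the code can be parsed into an abstract syntax tree."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(parses_to_ast("def f():\n    return 1"))  # True
print(parses_to_ast("def f(:"))                 # False
```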
 
  Furthermore, the following cleaning steps specific to this dataset were applied:
+ - Removed examples where, using CodeT5+'s tokenizer, the combined token count of source_code + ast_data is > 512
+ - Removed examples where, using CodeT5+'s tokenizer, the docstring is > 512 tokens
+ - Normalized the source code
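The length filters above can be sketched as follows. The real pipeline used CodeT5+'s tokenizer (e.g. loaded via Hugging Face's `AutoTokenizer`); a naive whitespace tokenizer stands in here so the sketch is self-contained:

```python
MAX_TOKENS = 512

def count_tokens(text: str) -> int:
    # Stand-in for len(tokenizer(text)["input_ids"]) with the real tokenizer.
    return len(text.split())

def keep_example(source_code: str, ast_data: str, docstring: str) -> bool:
    """Return True if the example passes both length filters."""
    input_len = count_tokens(source_code) + count_tokens(ast_data)
    target_len = count_tokens(docstring)
    return input_len <= MAX_TOKENS and target_len <= MAX_TOKENS

print(keep_example("def f(): pass", "FunctionDef name:f", "Do nothing."))  # True
print(keep_example("x " * 600, "FunctionDef", "Docstring."))               # False
```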
 
 
 # Final Statistics
 
+ ```
+ {
+     "original_samples": 25481,
+     "processed_samples": 22099,
+     "filter_stats": {
+         "success": 22099,
+         "non_english": 44,
+         "input_too_short": 0,
+         "input_too_long": 3309,
+         "target_too_short": 0,
+         "target_too_long": 29,
+         "error": 0
+     },
+     "split_sizes": {
+         "train": 15469,
+         "val": 3314,
+         "test": 3316
+     },
+     "input_token_stats": {
+         "min": 18,
+         "max": 511,
+         "avg": 155.931
+     },
+     "target_token_stats": {
+         "min": 6,
+         "max": 426,
+         "avg": 67.264
+     },
+     "type_distribution": {
+         "method": 8625,
+         "class": 1434,
+         "function": 5410
+     }
+ }
+ ```
 
 
165
  # NOTE
166
 
167
+ This dataset is *imperfect*. Use under your own volition.