---
task_categories:
- summarization
- text-generation
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
---
# Overview
This dataset contains 34,000+ rows of code-docstring-AST data along with additional metadata. The data was gathered from the publicly available GitHub repositories of various Python libraries and frameworks. The dataset was created for training the [CodeT5+](https://arxiv.org/abs/2305.07922) transformer on AST-enhanced code-to-doc tasks.
# Sources
The dataset was gathered from various GitHub repos sampled from [this list by Vinta](https://github.com/vinta/awesome-python).
The 26 repos are:
- matplotlib
- pytorch
- cryptography
- django
- prospector
- scikit-learn
- pandas
- numpy
- uvicorn
- feincms
- algorithms
- scrapy
- authlib
- seaborn
- coconut
- tensorflow
- flexx
- salmon
- mongo-python-driver
- virtualenv
- sphinx
- schema
- kornia
- scipy
- cherrypy
- pygame
Sampling was informal rather than strictly random: I browsed through the categories in Vinta's list and picked one repo from each category that caught my interest.
# Dataset Instance
An instance of the dataset is as follows:
```
{
<library> : <The library from which the source code came>,
<name> : <The name of the function/class/method>,
<source_code> : <The raw source code itself stripped of its docstrings and comments>,
<docstring> : <The corresponding docstring of the code>,
<type> : <Whether it's a function, method, or class>,
<file_path> : <The relative path of the file containing the function>,
<ast_data> : <A flattened representation of the parsed AST. For more info, see the section below>
}
```
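If the dataset is hosted on the Hugging Face Hub, rows can be loaded with the `datasets` library. A minimal sketch (the repo id below is a placeholder, not the actual dataset path):
```python
from datasets import load_dataset

# Placeholder repo id; substitute the dataset's actual path on the Hub.
ds = load_dataset("user/python-code-docstring-ast")

# Each row exposes the fields described above.
row = ds["train"][0]
print(row["library"], row["type"], row["name"])
print(row["docstring"])
```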
# The AST Data
The extractor focuses only on specific nodes relevant to docstring generation, denoted by this set:
```
KEEP_NODES = {
'FunctionDef', 'AsyncFunctionDef', 'ClassDef',
'arguments', 'arg', 'Return',
'If', 'For', 'While', 'Try', 'With',
'Assign', 'Call',
'Raise', 'ExceptHandler',
'decorator', 'bases',
'Compare', 'BoolOp'
}
```
Everything else is discarded. For example:
**Source Code**
```
def tox_append_version_info() -> str: return '[toxfile]'
```
**Resulting AST Dictionary**
```
"ast_data": {
"type": "FunctionDef",
"children": [
{
"type": "arguments",
"args": []
},
{
"type": "Return",
"has_value": true
}
],
"name": "tox_append_version_info"
}
```
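For illustration, here is a minimal sketch of an extractor that reproduces the dictionary above using Python's built-in `ast` module. The actual extractor is not included in this card, so the function below is hypothetical; note also that `decorator` and `bases` in `KEEP_NODES` name node *fields* rather than `ast` classes, so a real extractor would handle them separately:
```python
import ast

KEEP_NODES = {
    'FunctionDef', 'AsyncFunctionDef', 'ClassDef',
    'arguments', 'arg', 'Return',
    'If', 'For', 'While', 'Try', 'With',
    'Assign', 'Call',
    'Raise', 'ExceptHandler',
    'Compare', 'BoolOp',  # 'decorator'/'bases' are fields, handled separately
}

def extract_ast_data(node: ast.AST) -> dict:
    """Recursively convert an AST node to a dict, keeping only KEEP_NODES."""
    data = {'type': type(node).__name__}
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
        data['name'] = node.name
    elif isinstance(node, ast.arguments):
        data['args'] = [a.arg for a in node.args]
    elif isinstance(node, ast.Return):
        data['has_value'] = node.value is not None
    children = [
        extract_ast_data(child)
        for child in ast.iter_child_nodes(node)
        if type(child).__name__ in KEEP_NODES
    ]
    if children:
        data['children'] = children
    return data

source = "def tox_append_version_info() -> str: return '[toxfile]'"
print(extract_ast_data(ast.parse(source).body[0]))
```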
This dictionary is then flattened via a helper function into a string that looks something like `FunctionDef name:tox_append_version_info arguments Return return:yes`.
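The exact helper is not shown in this card, but one plausible implementation that produces the string above is:
```python
def flatten_ast(node: dict) -> str:
    """Depth-first flatten of an ast_data dict into a space-separated string."""
    parts = [node['type']]
    if 'name' in node:
        parts.append(f"name:{node['name']}")
    if node.get('args'):
        parts.append('args:' + ','.join(node['args']))
    if node.get('has_value'):
        parts.append('return:yes')
    parts.extend(flatten_ast(child) for child in node.get('children', []))
    return ' '.join(parts)
```
Applied to the `ast_data` dictionary above, this yields `FunctionDef name:tox_append_version_info arguments Return return:yes`.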
# Preprocessing
The dataset generally follows [CodeBERT's code2nl](https://github.com/microsoft/CodeBERT/tree/master/CodeBERT/code2nl) dataset cleaning standards, which are as follows:
- Removed comments from the code
- Removed examples where the code cannot be parsed into an AST
- Removed examples whose docstrings contain special tokens (e.g. `<img ...>` or `https:...`)
- Removed examples whose docstrings are not in English
Furthermore, the following cleaning steps specific to this dataset were applied (see the sketch after this list):
- Removed examples where, using CodeT5+'s tokenizer, the combined `source_code` + `ast_data` input exceeds 512 tokens
- Removed examples where, using CodeT5+'s tokenizer, the docstring exceeds 512 tokens
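A sketch of these length filters, assuming the 220M CodeT5+ checkpoint (the card does not name the exact variant used) and assuming the model input is formed by concatenating `source_code` and `ast_data`:
```python
from transformers import AutoTokenizer

# Assumption: the 220M CodeT5+ tokenizer; swap in the variant actually used.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5p-220m")
MAX_TOKENS = 512

def keep_example(ex: dict) -> bool:
    """Return False for examples whose input or target exceeds 512 tokens."""
    # Assumption: source_code and ast_data are joined with a single space.
    model_input = ex["source_code"] + " " + ex["ast_data"]
    input_len = len(tokenizer(model_input)["input_ids"])
    target_len = len(tokenizer(ex["docstring"])["input_ids"])
    return input_len <= MAX_TOKENS and target_len <= MAX_TOKENS
```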
# Final Statistics
```
{
"original_samples": 128880,
"processed_samples": 34537,
"filter_stats": {
"success": 34537,
"non_english": 1202,
"docstring_too_long": 3848,
"input_too_long": 8366,
"docstring_too_short": 74013,
"error": 0,
"error: unhashable type: 'list'": 6914
},
"split_sizes": {
"train": 24175,
"val": 5180,
"test": 5182
},
"input_token_stats": {
"min": 16,
"max": 505,
"avg": 164.071
},
"target_token_stats": {
"min": 4,
"max": 254,
"avg": 52.758
},
"type_distribution": {
"method": 12682,
"function": 8733,
"class": 2760
}
}
```
# NOTE
This dataset is *imperfect*. Use it at your own discretion.