---
task_categories:
- summarization
- text2text-generation
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
---

# Overview

This dataset contains Python code-docstring pairs, where the docstrings are in Google style. A Google-style docstring is structured as follows:

```
<Description of the code>

Args:
    <var1> (<data-type>) : <description of var1>
    <var2> (<data-type>) : <description of var2>

Returns:
    <var3> (<data-type>) : <description of var3>

Raises:
    <var4> (<data-type>) : <description of var4>
```

The exact format varies widely (e.g. additional sections such as Examples or Notes), but generally speaking a docstring should contain an Args/Parameters section and a Returns section.

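As a concrete illustration, here is a function documented in that style (the function itself is invented for this example):

```python
def scale(values, factor):
    """Multiplies every element of a list by a constant factor.

    Args:
        values (list[float]): The numbers to scale.
        factor (float): The multiplier applied to each element.

    Returns:
        list[float]: A new list with each element multiplied by factor.

    Raises:
        TypeError: If values is not iterable.
    """
    return [v * factor for v in values]
```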
# Source

The dataset was gathered from three different sources:

## CodeSearchNet

From its Python split of ~250k samples, ~23k were extracted, a retention rate of under 10%. Most CodeSearchNet samples contained informal docstrings with only a description and no sections.

## Repositories Under Google's GitHub Organization Page

You can find the relevant search page [here](https://github.com/search?q=topic%3Apython+org%3Agoogle+fork%3Atrue&type=repositories). The repositories used are given by the following list:

```
repos = [
    "https://github.com/google/python-fire",
    "https://github.com/google/yapf",
    "https://github.com/google/pytype",
    "https://github.com/google/tf-quant-finance",
    "https://github.com/google/budoux",
    "https://github.com/google/mobly",
    "https://github.com/google/temporian",
    "https://github.com/google/pyglove",
    "https://github.com/google/subpar",
    "https://github.com/google/weather-tools",
    "https://github.com/google/ci_edit",
    "https://github.com/google/etils",
    "https://github.com/google/pcbdl",
    "https://github.com/google/starthinker",
    "https://github.com/google/pytruth",
    "https://github.com/google/nsscache",
    "https://github.com/google/megalista",
    "https://github.com/google/fhir-py",
    "https://github.com/google/chatbase-python",
    "https://github.com/tensorflow/tensorflow",
    "https://github.com/google/project-OCEAN",
    "https://github.com/google/qhbm-library",
    "https://github.com/google/data-quality-monitor",
    "https://github.com/google/genai-processors",
    "https://github.com/google/python-proto-converter",
    "https://github.com/google/sprockets",
    "https://github.com/keras-team/keras",
    "https://github.com/scikit-learn/scikit-learn",
    "https://github.com/apache/beam",
    "https://github.com/huggingface/transformers"
]
```

A total of ~11k samples was gathered from this source.

## Juraj's Python Google-style Docstrings Dataset

This dataset was made by the user Juraj-juraj; you can find it [here](https://huggingface.co/datasets/juraj-juraj/python_googlestyle_docstrings). A total of ~25k samples was gathered from this source, after further preprocessing.

# Preprocessing Steps

The following cleaning, normalization, and preprocessing steps were performed:

1. Removed duplicates based on both code and docstring
2. Removed samples with empty code or docstrings
3. Removed samples with extremely short entries (<20 chars)
4. Removed samples with extremely long entries (>5000 chars)
5. Removed comments and docstrings from the code
6. Removed samples where the docstring isn't in English (using langdetect)
7. Removed samples where the docstring contained special characters such as HTML tags or URLs
8. Removed samples whose docstring is <12 or >256 tokens under the CodeT5+ tokenizer
9. Normalized all docstring entries by removing any indentation

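The length and character filters above can be sketched with the standard library alone. This covers steps 1-4 and 7 only; the language filter (langdetect) and the token-length filter (CodeT5+ tokenizer) need third-party packages and are omitted here:

```python
import re


def clean_pairs(pairs):
    """Filter a list of (code, docstring) pairs per steps 1-4 and 7 above."""
    seen = set()
    cleaned = []
    for code, doc in pairs:
        key = (code, doc)
        if key in seen:                              # 1. drop duplicates
            continue
        seen.add(key)
        if not code.strip() or not doc.strip():      # 2. drop empty entries
            continue
        if len(code) < 20 or len(doc) < 20:          # 3. drop very short entries
            continue
        if len(code) > 5000 or len(doc) > 5000:      # 4. drop very long entries
            continue
        if re.search(r"<[^>]+>|https?://\S+", doc):  # 7. HTML tags / URLs
            continue
        cleaned.append((code, doc))
    return cleaned
```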
# Data Structure

Each sample in the dataset has the following structure:

```
<code> : <The code, stripped of docstrings and comments>,
<docstring> : <The corresponding docstring of the code>,
<source> : <The source from which the code came>
```

The possible `source` values are:

**CodeSearchNet** - from the CodeSearchNet dataset
**github-repos** - from the repositories under Google's GitHub organization page
**juraj-google-style** - from Juraj's Python Google-style docstrings dataset
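A sample can thus be represented as a plain dictionary with these three fields; the record below is invented for illustration:

```python
# A hypothetical record matching the schema above (contents are invented).
record = {
    "code": "def add(a, b):\n    return a + b",
    "docstring": (
        "Adds two integers.\n\n"
        "Args:\n"
        "    a (int): The first addend.\n"
        "    b (int): The second addend.\n\n"
        "Returns:\n"
        "    int: The sum of a and b."
    ),
    "source": "CodeSearchNet",
}


def group_by_source(records):
    """Group records by their source field, e.g. for per-source counts."""
    groups = {}
    for rec in records:
        groups.setdefault(rec["source"], []).append(rec)
    return groups
```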