---
configs:
  data_files:
  - split: train
    path: data/train-*
annotations_creators:
- author
license:
- gpl-3.0
multilinguality:
- monolingual
pretty_name: GitHub-Python
dataset_name: github-python
dataset_type: code
tags:
- code
- python
size_categories:
- 100K<n<1M
task_categories:
- text-generation
---

# GitHub-Python — Licensed & Elaborated Variants

This repository ships **two complementary Python-code corpora** extracted from
public GitHub:

- **Licensed Subset** – strictly _permissively licensed_ files suitable for
  commercial redistribution / model training (the main corpus used in our
  experiments).
- **Elaborated Collection** – a broader crawl that additionally contains files
  under _copyleft_ or unclear licenses (GPL/AGPL/LGPL, etc.). Useful for
  analysis or pre-training where license mixing is acceptable.

Both variants target **code-completion / generation** research.

## Dataset at a glance

|                     | **Licensed Subset** | **Elaborated Collection** |
| ------------------- | ------------------- | ------------------------- |
| Files (.py)         | 53,017              | 186,066                   |
| Unique repositories | 16,447              | 59,852                    |
| Repository owners   | 12,515              | 43,517                    |
| Compressed size     | 732 MB              | 2.4 GB \*                 |
| Vocabulary (tokens) | 443,431             | 443,431 †                 |
| License coverage    | Permissive only     | Mixed (perm. + copyleft)  |
| Secrets redacted    | ✅                  | ⚠️ not guaranteed         |
| Time window         | ≥ 2015-01-01        | ≥ 2015-01-01              |

\* Estimated – the Elaborated Collection is distributed as a raw file list, not
a single text file.
† The same tokenizer file is shared by both variants.

Numbers were obtained from the final redacted corpus and companion metadata.

---

## Dataset structure

```
huggingface_dataset/
├─ mega_licensed_corpus_redacted.txt     # Licensed Subset – concatenated code
├─ python_files.txt                      # Licensed Subset – raw file URLs
├─ python_files_elaborated.txt           # Elaborated Collection – raw file URLs
├─ python_files_elaborated_metadata.csv  # Elaborated Collection metadata
└─ custom_tokens_vocab.txt               # `<token>\t<id>` vocabulary file
```

### File separator

Individual files are concatenated with the sentinel line:

```
# <FILESEP>
```

Anything following the sentinel until the next sentinel (or EOF) is the source
code of one file.
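
For example, a minimal sketch of recovering the individual files from the
concatenated corpus (assuming the whole corpus fits in memory):

```python
# Split the concatenated corpus back into individual source files.
with open("mega_licensed_corpus_redacted.txt", encoding="utf-8") as f:
    corpus = f.read()

files = [chunk.strip() for chunk in corpus.split("# <FILESEP>") if chunk.strip()]
print(f"Recovered {len(files)} files")
```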

---

## Dataset variants

### 1. Licensed Subset (`mega_licensed_corpus_redacted.txt`)

- 53 K permissively licensed files (MIT/BSD/Apache/ISC/Unlicense).
- All API keys & credentials removed.
- Ready for redistribution & commercial use (respect upstream NOTICE files).

### 2. Elaborated Collection (`python_files_elaborated.txt`)

- 186 K files from a much larger crawl.
- Contains **GPL / LGPL / AGPL and other copyleft** licenses.
- Shipped _as a URL list_ + metadata CSV; you must download the files yourself
  (`datasets.load_dataset` streaming, `wget`, etc. – see the sketch below).
- **No license filtering or secret redaction performed** – use with caution.
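
As one possibility (assuming `python_files_elaborated.txt` holds one raw-file
URL per line), the files can be fetched with `requests`:

```python
import requests

# Fetch a handful of the raw files listed in the Elaborated Collection.
with open("python_files_elaborated.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls[:10]:  # demo: first 10 files only
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    print(url, len(resp.text), "chars")
```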

Before loading the dataset, decide which variant aligns with your use case
(e.g. proprietary model training → Licensed Subset only).

---

## Collection methodology

1. **Repository discovery**

   - Queried the GitHub REST API for projects with **≥ 10 stars**
     (earlier iterations used 100+, later expanded for coverage).
   - Only repositories with primary language _Python_ and a last commit in
     2015 or later.

2. **File filtering** (a sketch of these filters follows the list)

   - Retain files whose **size ∈ [1 KB, 100 KB]**.
   - Exclude common build/packaging scripts (`setup.py`, `__init__.py`, etc.).

3. **License compliance**

   - Allowed: MIT, Apache-2.0, BSD-2/3-Clause, ISC, Unlicense.
   - GPL, LGPL, AGPL and proprietary licenses were **excluded**.

4. **Deduplication**

   - Files were hashed; duplicates (identical SHA hashes) were skipped.

5. **Formatting & cleaning**

   - Formatted with _autopep8_ to normalise whitespace.
   - A custom script removed trailing whitespace & normalised newlines.

6. **Secret redaction**

   - A `truffleHog` + custom regex pass removed >150 active credentials.
   - The redacted corpus is stored as `mega_licensed_corpus_redacted.txt`.
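
A minimal sketch of the size, filename, and deduplication filters from steps 2
and 4 (the thresholds come from the list above; the helper itself is
illustrative, not the original pipeline script):

```python
import hashlib
from pathlib import Path

EXCLUDED_NAMES = {"setup.py", "__init__.py"}  # common build/packaging scripts
MIN_SIZE, MAX_SIZE = 1_024, 100 * 1_024       # size ∈ [1 KB, 100 KB]

def keep_file(path: Path, seen_hashes: set) -> bool:
    """Apply the size, filename, and hash-based deduplication filters."""
    if path.name in EXCLUDED_NAMES:
        return False
    if not (MIN_SIZE <= path.stat().st_size <= MAX_SIZE):
        return False
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in seen_hashes:  # duplicate content: skip
        return False
    seen_hashes.add(digest)
    return True
```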

---

## Custom tokenisation

The accompanying `custom_tokens_vocab.txt` was produced with a **Python-aware
sub-token scheme**:

1. Strip doc-strings & comments.
2. Split on:
   - camel-case boundaries (`CamelCase` → `Camel`, `Case`)
   - underscores and spaces
   - indentation & newlines (preserved as a `<newline>` token)
3. Rare tokens (frequency < 10) were dropped → 443 k vocabulary.

Example:

```python
def helloWorld(value):
    return value + 1
```

tokenises to:

```
def hello world ( value ) <newline> return value + 1 <newline>
```
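
A rough sketch of the splitting rules in step 2 (the regex and lower-casing are
illustrative; edge cases such as the trailing `:` may be handled differently by
the original preprocessing script):

```python
import re

def subtokenize(source: str) -> list[str]:
    """Split code on camel-case boundaries, underscores and spaces,
    keeping line breaks as explicit <newline> tokens (illustrative only)."""
    tokens = []
    for line in source.splitlines():
        for word in re.split(r"[ _]+", line.strip()):
            # break before each upper-case letter: helloWorld -> hello World
            spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", word)
            for piece in spaced.split():
                # separate punctuation such as ( ) + from identifiers
                tokens.extend(re.findall(r"\w+|[^\w\s]", piece))
        tokens.append("<newline>")
    return [t.lower() for t in tokens]

print(" ".join(subtokenize("def helloWorld(value):\n    return value + 1")))
```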

---

## Usage

```python
from datasets import load_dataset

ds = load_dataset("jblitzar/github-python", split="train")

print(ds[0]["code"][:300])  # raw source code
```

If you prefer token-level examples (e.g. to keep memory usage down), map a
tokenizer over the dataset. Note that `custom_tokens_vocab.txt` is a plain
`<token>\t<id>` table rather than a serialized `tokenizers` JSON file, so it
cannot be passed to `Tokenizer.from_file` directly; one option is to load it
as a `WordLevel` model (this assumes the vocabulary contains an `<unk>` entry):

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

# Read the `<token>\t<id>` vocabulary into a dict.
with open("custom_tokens_vocab.txt", encoding="utf-8") as f:
    vocab = {t: int(i) for t, i in (line.rstrip("\n").split("\t") for line in f)}

tok = Tokenizer(WordLevel(vocab, unk_token="<unk>"))  # assumes an <unk> entry
tok.pre_tokenizer = Whitespace()

def encode(ex):
    ex["input_ids"] = tok.encode(ex["code"]).ids
    return ex

ds = ds.map(encode, remove_columns=["code"])
```

---

## Ethical considerations & limitations

- **Licenses respected** – the Licensed Subset contains permissive licenses
  only; retain NOTICE files when redistributing derivative works.
- **Secrets removed** – automated & manual audits were performed, yet users
  **must not assume zero secrets**; re-audit before public deployments.
- **Code quality** – projects vary in style & correctness. Models trained on
  this corpus may replicate bugs or vulnerable patterns.

---

## Citation

If you use this dataset, please cite:

```
@misc{github-python-2024,
  author       = {JBlitzar},
  title        = {GitHub-Python: A Permissively Licensed Corpus of Python Code},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/jblitzar/github-python}},
  note         = {Version 1.0}
}
```

---

## License

The dataset card and aggregation scripts are released under **GPLv3**.
Each code snippet remains under its **original repository license** (MIT,
Apache-2.0, BSD, ISC, etc.). Users must comply with upstream notices when
redistributing code or derivatives.