Update README.md
README.md CHANGED
@@ -63,6 +63,10 @@ huggingface_dataset/
 └─ custom_tokens_vocab.txt   # `<token>\t<id>` vocabulary file
 ```
 
+## Important Note
+
+For technical reasons, separate splits have been stored as separate Dataset instances. See https://huggingface.co/datasets/jblitzar/github-python-metadata, https://huggingface.co/datasets/jblitzar/github-python-meta-elaborated, and https://huggingface.co/datasets/jblitzar/github-python-corpus.
+
 ### File separator
 
 Individual files are concatenated with the sentinel line:
@@ -95,6 +99,7 @@ code of one file.
 When first loading the dataset, decide which variant aligns with your use case
 (e.g. proprietary model training → Licensed Subset only).
 
+
 ---
 
 ## Collection methodology
@@ -162,7 +167,7 @@ def hello world ( value ) <newline> return value + 1 <newline>
 ```python
 from datasets import load_dataset
 
-ds = load_dataset("jblitzar/github-python", split="train")
+ds = load_dataset("jblitzar/github-python-corpus", split="train")
 
 print(ds[0]["code"][:300])   # raw source code
 ```
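The added note points readers at three companion repositories rather than one dataset with multiple splits. A minimal sketch of a helper that loads them by short name — the short names and the assumption that each repository exposes a default `train` split are illustrative, not part of the dataset's documentation:

```python
# Short names for the three companion repositories named in the README note.
REPOS = {
    "metadata": "jblitzar/github-python-metadata",
    "meta-elaborated": "jblitzar/github-python-meta-elaborated",
    "corpus": "jblitzar/github-python-corpus",
}


def load_split(name: str, split: str = "train"):
    """Load one companion repository by short name (downloads on first call).

    The import is deferred so that defining this helper does not require
    the `datasets` package until a dataset is actually requested.
    """
    from datasets import load_dataset

    return load_dataset(REPOS[name], split=split)
```

With this helper, `load_split("corpus")` replaces the `load_dataset("jblitzar/github-python-corpus", split="train")` call from the updated snippet.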