---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: pile_set_name
      dtype: string
  splits:
  - name: train
    num_bytes: 9052995836
    num_examples: 15345
  download_size: 4844929655
  dataset_size: 9052995836
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---


> ⚠️ **Warning**: This dataset will probably make you run out of memory if you try to load it all at once. Don't do it; stream it instead (see the sketch below).
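
A memory-safe way to inspect the data is to load it in streaming mode with 🤗 Datasets. The repo id below is a placeholder; substitute this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hub path.
ds = load_dataset("user/pile-subset", split="train", streaming=True)

# Rows are fetched lazily, so nothing is materialized in memory at once.
for row in ds.take(5):
    print(row["meta"]["pile_set_name"], row["text"][:80])
```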


## Dataset Creation Process
These subsets were created by streaming over the rows of [`monology/pile-uncopyrighted`](https://huggingface.co/datasets/monology/pile-uncopyrighted) and filtering on the `meta.pile_set_name` field. Each subset is generally limited to the first 100,000 qualifying rows encountered (a rough sketch of this process is shown below).
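
The original collection script is not included in this card; the following is only an illustrative sketch of the process described above. The target `pile_set_name` value, the row limit, and the output filename are examples, not the actual configuration used:

```python
from datasets import Dataset, load_dataset

TARGET_SET = "Wikipedia (en)"  # example pile_set_name value, not necessarily one of the published subsets
LIMIT = 100_000                # target number of rows per subset

# Stream the source dataset so the full Pile never has to fit in memory.
source = load_dataset("monology/pile-uncopyrighted", split="train", streaming=True)

subset = []
for row in source:
    if row["meta"]["pile_set_name"] == TARGET_SET:
        subset.append(row)
        if len(subset) >= LIMIT:
            break

# Materialize the filtered rows and write them out as Parquet.
Dataset.from_list(subset).to_parquet("train-wikipedia.parquet")
```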

## Citations

If you use this dataset, please cite the original Pile papers:

```bibtex
@article{gao2020pile,
  title={The Pile: An 800GB dataset of diverse text for language modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}
@article{biderman2022datasheet,
  title={Datasheet for the pile},
  author={Biderman, Stella and Bicheno, Kieran and Gao, Leo},
  journal={arXiv preprint arXiv:2201.07311},
  year={2022}
}
```

**Note**: Some subsets contain fewer than 100,000 rows because the corresponding categories are sparse in the original dataset. In these cases, the rows are not necessarily the first ones encountered; because collection was parallelized, they are drawn unevenly from across the source dataset.

## Future Updates
Future updates may fill out the smaller subsets to reach the 100,000-row target, which could result in breaking changes for those specific subsets.