typo
README.md CHANGED
@@ -13,7 +13,7 @@ More than one training run goes into making a large language model, but develope
 ## 25 Data Recipes
 
 We call the 25 corpora we train on *data recipes* as they range across popular corpora including Dolma, DCLM, RefinedWeb, C4, and FineWeb as well as combinations of interventions
-on these datasets such as source mixing, deduplication, and filtering. This HuggingFace Dataset contains the tokenized data used to build these recipes
+on these datasets such as source mixing, deduplication, and filtering. This HuggingFace Dataset contains the tokenized data used to build these recipes, as mapped by this [OLMo script](https://github.com/allenai/OLMo/blob/DataDecide/olmo/data/named_data_mixes.py).
 
 | **Source** | **Recipe** | **Description** |
 |------------------------------------|--------------------------------|-----------------|
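For context on the paragraph this commit edits: the dataset card says the repository holds the tokenized data behind the 25 recipes, with recipe composition mapped in OLMo's named_data_mixes.py. Below is a minimal sketch of streaming one such recipe with the Hugging Face `datasets` library; the repo id `allenai/DataDecide` and the config name `dolma1_7` are placeholder assumptions, not identifiers confirmed by this diff, so consult the dataset card and the linked script for the real names.

```python
# Minimal sketch: stream one tokenized data recipe with the `datasets` library.
# NOTE: the repo id and config name below are hypothetical placeholders; the
# actual identifiers live on the dataset card and in OLMo's named_data_mixes.py.
from datasets import load_dataset

ds = load_dataset(
    "allenai/DataDecide",  # hypothetical repo id
    name="dolma1_7",       # hypothetical recipe/config name
    split="train",
    streaming=True,        # iterate without downloading the full corpus
)

# Peek at one record to see what fields the tokenized data exposes.
for example in ds.take(1):
    print(example.keys())
```

Streaming mode is the sensible default here, since a pretraining-scale tokenized corpus is far too large to download just to inspect a few records.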