samsja committed on
Commit 1c8bf88 · verified · 1 Parent(s): 35f6fe3

Update README.md

Files changed (1)
  1. README.md +44 -0

README.md CHANGED
@@ -75,8 +75,52 @@ print(pipe("What is prime intellect ?"))
  - **Tokens**: 1 Trillion
  - **Optimizer**: DiLoCo/LocalSGD - Inner Optimizer: AdamW, Outer Optimizer: Nesterov SGD
 
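For context on the optimizer named above: DiLoCo/LocalSGD runs many inner AdamW steps on each replica between synchronizations, then applies one outer Nesterov SGD step to the "pseudo-gradient" (the gap between the previous global weights and the averaged local weights). The sketch below is a minimal PyTorch illustration of that outer step, not the project's training code; `diloco_outer_step`, the replica count, and the outer learning rate are placeholders.

```python
import copy
import torch

def diloco_outer_step(global_model, local_models, outer_opt):
    """One outer step: pseudo-gradient = old global weights - averaged local weights."""
    with torch.no_grad():
        for name, p_global in global_model.named_parameters():
            local_avg = torch.stack(
                [dict(m.named_parameters())[name].data for m in local_models]
            ).mean(dim=0)
            p_global.grad = p_global.data - local_avg  # pseudo-gradient for the outer optimizer
    outer_opt.step()        # Nesterov SGD step on the pseudo-gradient
    outer_opt.zero_grad()
    for m in local_models:  # re-sync every replica with the new global weights
        m.load_state_dict(global_model.state_dict())

global_model = torch.nn.Linear(8, 8)
outer_opt = torch.optim.SGD(global_model.parameters(), lr=0.7, momentum=0.9, nesterov=True)
replicas = [copy.deepcopy(global_model) for _ in range(4)]
# ... each replica takes H inner AdamW steps on its own data shard, then:
diloco_outer_step(global_model, replicas, outer_opt)
```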
+
+ ## Post-training
+
+ The post-training was handled by [Arcee AI](https://huggingface.co/arcee-ai).
+
+ After completing the globally distributed pretraining phase, we applied several post-training techniques to enhance INTELLECT-1's capabilities and task-specific performance. Our post-training methodology consisted of three main phases.
+
+ First, we conducted an extensive series of 16 Supervised Fine-Tuning (SFT) trainings, with individual runs ranging from 1 to 3.3 billion tokens each. The most successful configuration used 2.4 billion training tokens over 3 epochs. We used MergeKit, EvolKit, and DistillKit from Arcee AI to combine the models, generate the datasets, and distill the logits, respectively (a distillation-loss sketch follows the dataset list below). For training data, we used a diverse set of high-quality datasets:
+
+ 1. **New Datasets** (released with INTELLECT-1):
+    - arcee-ai/EvolKit-75k (generated via EvolKit)
+    - arcee-ai/Llama-405B-Logits
+    - arcee-ai/The-Tomb
+
+ 2. **Instruction Following**:
+    - [mlabonne/open-perfectblend-fixed](https://huggingface.co/datasets/MaziyarPanahi/open-perfectblend-fixed) (generalist capabilities)
+    - [mlabonne/orca-agentinstruct-1M-v1-cleaned](https://huggingface.co/datasets/mlabonne/orca-agentinstruct-1M-v1-cleaned) (Chain-of-Thought)
+    - [Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs](https://huggingface.co/datasets/Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs)
+
+ 3. **Domain-Specific**:
+    - [Team-ACE/ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE) (function calling)
+    - [Synthia coder](https://huggingface.co/datasets/MaziyarPanahi/Synthia-Coder-v1.5-I-sharegpt) (programming)
+    - [ServiceNow-AI/M2Lingual](https://huggingface.co/datasets/ServiceNow-AI/M2Lingual) (multilingual)
+    - [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) (mathematics)
+
+ 4. **Tulu-3 Persona Datasets**:
+    - [allenai/tulu-3-sft-personas-code](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-code)
+    - [allenai/tulu-3-sft-personas-math](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-math)
+    - [allenai/tulu-3-sft-personas-math-grade](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-math-grade)
+    - [allenai/tulu-3-sft-personas-algebra](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-algebra)
+
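The logit distillation mentioned above (DistillKit with Llama-3.1-405B teacher logits) reduces, at its core, to blending a label loss with a KL term toward the teacher's distribution. Below is a minimal sketch of such a loss, assuming PyTorch; `distill_loss`, the temperature `T`, and the mixing weight `alpha` are illustrative placeholders, not DistillKit's actual implementation.

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend label cross-entropy with KL toward the teacher's softened distribution.

    For language modeling, logits are flattened to (batch * seq_len, vocab)
    and labels to (batch * seq_len,) before calling this.
    """
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.log_softmax(teacher_logits / T, dim=-1),
        log_target=True,
        reduction="batchmean",
    ) * (T * T)  # standard temperature rescaling of the KL term
    return alpha * ce + (1.0 - alpha) * kl
```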
+ Second, we executed 8 distinct Direct Preference Optimization (DPO) runs with various combinations of datasets to enhance specific performance metrics and align the model with human preferences. A key advantage in our post-training process was INTELLECT-1's use of the Llama-3 tokenizer, which allowed us to utilize logits from Llama-3.1-405B to heal and maintain precision during the post-training process via DistillKit.
+
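For reference, the core of each DPO run is a pairwise logistic loss over policy-versus-reference log-probability ratios for chosen and rejected responses. A minimal sketch, assuming PyTorch; `dpo_loss` and `beta` are illustrative placeholders.

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Inputs are summed log-probs of the chosen/rejected responses
    under the trained policy (pi_*) and the frozen reference model (ref_*)."""
    chosen_logratio = pi_chosen - ref_chosen        # log(pi/ref) for the preferred response
    rejected_logratio = pi_rejected - ref_rejected  # log(pi/ref) for the dispreferred response
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```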
+ Finally, we performed 16 strategic merges between candidate models using MergeKit to create superior combined models that leverage the strengths of different training runs. During the post-training phase, we observed that when using a ChatML template without an explicit BOS (begin-of-sequence) token, the initial loss was approximately 15. However, when switching to the Llama 3.1 chat template, the loss for these trainings started much lower, at approximately 1.1, indicating better alignment with the underlying Llama 3 tokenizer.
+
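The template effect described above can be inspected directly: rendering a prompt through the Llama 3.1 chat template shows the explicit BOS token that the ChatML setup lacked. A minimal sketch using the standard `transformers` API; the repository id is assumed here for illustration.

```python
from transformers import AutoTokenizer

# Repo id assumed for illustration; any model shipping the Llama 3.1
# chat template behaves the same way.
tok = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")

messages = [{"role": "user", "content": "What is prime intellect ?"}]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)  # the rendered prompt starts with the explicit <|begin_of_text|> BOS token
```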
+ The combination of these post-training techniques resulted in significant improvements on various benchmarks, particularly in knowledge retrieval, grade-school math, instruction following, and reasoning.
+
  **Performance on benchmarks**
 
+
  Base Models:
  | Model | Size | Tokens | MMLU | GPQA | GSM8K | ARC-C | Hellaswag |
  |---|---|---|---|---|---|---|---|