Heralax committed (verified)
Commit 4e96205 · 1 Parent(s): 0d8324f

Update README.md

Files changed (1)
  1. README.md +34 -54
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  library_name: transformers
- license: apache-2.0
  base_model: Heralax/test-model-4-pretrain
  tags:
  - axolotl
@@ -11,12 +11,17 @@ datasets:
  - pretraining_subset_2170418.jsonl
  - factual_sft_completion/combined_all_0.jsonl
  - factual_sft_completion/combined_all_1.jsonl
- - generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl
- - generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl
- - generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl
  - generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl
- - generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl
- - generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl
  model-index:
  - name: test-model-4-sft
  results: []
@@ -24,11 +29,8 @@ model-index:

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
-
- [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
- <details><summary>See axolotl config</summary>
-
- axolotl version: `0.10.0.dev0`
  ```yaml
  base_model: Heralax/test-model-4-pretrain
  tokenizer_type: AutoTokenizer
@@ -110,60 +112,38 @@ wandb_run_id: ''
  wandb_log_model: ''
  hub_model_id: Heralax/test-model-4-sft
  hub_strategy: all_checkpoints
-
  ```
-
  </details><br>

- # test-model-4-sft

- This model is a fine-tuned version of [Heralax/test-model-4-pretrain](https://huggingface.co/Heralax/test-model-4-pretrain) on the axolotl_rag_conversations_facts.jsonl, the axolotl_correction_conversations_facts.json, the pretraining_subset_2170418.jsonl, the factual_sft_completion/combined_all_0.jsonl, the factual_sft_completion/combined_all_1.jsonl, the generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl, the generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl, the generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl, the generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl, the generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl and the generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl datasets.
- It achieves the following results on the evaluation set:
  - Loss: 0.6876

- ## Model description
-
- More information needed
-
- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 2
- - eval_batch_size: 4
- - seed: 1337
- - gradient_accumulation_steps: 75
- - total_train_batch_size: 150
- - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: constant
- - lr_scheduler_warmup_steps: 8
- - training_steps: 85

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 1.5402 | 0.0564 | 1 | 1.2586 |
- | 0.5945 | 0.9594 | 17 | 0.5595 |
- | 0.443 | 1.9029 | 34 | 0.5419 |
- | 0.3117 | 2.8465 | 51 | 0.5845 |
- | 0.1713 | 3.7901 | 68 | 0.6350 |
- | 0.1231 | 4.7336 | 85 | 0.6876 |

- ### Framework versions

- - Transformers 4.52.3
- - Pytorch 2.6.0+cu124
- - Datasets 3.6.0
- - Tokenizers 0.21.1
 
  ---
  library_name: transformers
+ license: llama3.1
  base_model: Heralax/test-model-4-pretrain
  tags:
  - axolotl
 
  - pretraining_subset_2170418.jsonl
  - factual_sft_completion/combined_all_0.jsonl
  - factual_sft_completion/combined_all_1.jsonl
+ - >-
+   generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl
+ - >-
+   generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl
+ - >-
+   generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl
  - generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl
+ - >-
+   generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl
+ - >-
+   generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl
  model-index:
  - name: test-model-4-sft
  results: []
 

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
+ <details>
+
  ```yaml
  base_model: Heralax/test-model-4-pretrain
  tokenizer_type: AutoTokenizer
 
  wandb_log_model: ''
  hub_model_id: Heralax/test-model-4-sft
  hub_strategy: all_checkpoints
  ```
  </details><br>

+ # llama-Augmentoolkit-Quickstart-Factual-Demo-Example

+ This model achieves the following results on the evaluation set:
  - Loss: 0.6876

+ (See? Number go down. Augmentoolkit works.)

+ This is a demo model produced by running through the quickstart of [Augmentoolkit's](https://github.com/e-p-armstrong/augmentoolkit) Factual Finetuning pipeline. The model was taught about some of the US Army Field Manuals.

+ The following manuals were trained on:
+ ```
+ ARN14613_FM 1-05 FINAL WEB.pdf.txt ARN19639_FM 3-14 FINAL WEB.pdf.txt ARN31505-FM_3-96-000-WEB-1.pdf.txt ARN34470-FM_6-99-000-WEB-1.pdf.txt ARN35577-FM_3-55-000-WEB-0.pdf.txt
+ ARN15310-FM_3-13.4-000-WEB-2.pdf.txt ARN21797_FM_3-04_FINAL_WEB_wfix.pdf.txt ARN33094-FM_3-57-000-WEB-1.pdf.txt ARN34770-FM_3-94-000-WEB-1.pdf.txt ARN35791-FM_4-02-001-WEB-3.pdf.txt
+ ARN17082-FM_3-11-000-WEB-1.pdf.txt ARN30964-FM_7-22-001-WEB-4.pdf.txt ARN33127-FM_3-12-000-WEB-1.pdf.txt ARN34864-FM_3-61-000-WEB-1.pdf.txt ARN35838-FM_3-01.44-000-WEB-1.pdf.txt
+ ARN19185_FM 6-02_FINAL_WEB.pdf.txt ARN31339-FM_3-01-000-WEB-1.pdf.txt ARN33331-FM_1-0-000-WEB-1.pdf.txt ARN35076-FM_7-0-000-WEB-1.pdf.txt ARN36290-FM_3-0-000-WEB-2.pdf.txt
+ ARN19354_FM 6-27 _C1_FINAL_WEB_v2.pdf.txt ARN31353-FM_3-34-000-WEB-1.pdf.txt ARN34192-FM_3-81-000-WEB-1.pdf.txt ARN35404-FM_6-0-000-WEB-1.pdf.txt ARN36735-FM_6-22-000-WEB-1.pdf.txt
+ ```

+ The `prompt.txt`, `template.txt`, RAG dataset, and GGUF file are all inside this folder so that people can run this model themselves using Augmentoolkit's chat interface. Just download everything that is not in the checkpoint-xx/ folders (i.e., not the model.safetensors files), put it all in one folder, and configure the basic-server or rag-server config to point at the prompt, template, etc. (see the documentation pages for those utility pipelines), and bang, Augmentoolkit will run these models with the correct prompt template and configuration.
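
+ For example, here is a minimal sketch of that download step with `huggingface_hub`; the ignore patterns are an assumption about this repo's layout, so match them to the actual file listing:

+ ```python
+ # Hypothetical sketch: grab everything except the checkpoint-xx/ folders and the raw
+ # safetensors weights, leaving prompt.txt, template.txt, the RAG dataset, and the GGUF
+ # for Augmentoolkit's basic-server / rag-server configs to point at.
+ from huggingface_hub import snapshot_download
+
+ local_dir = snapshot_download(
+     repo_id="Heralax/test-model-4-sft",  # hub_model_id from the config above; swap in this repo's id if it differs
+     local_dir="augmentoolkit-factual-demo",
+     ignore_patterns=["checkpoint-*", "*.safetensors"],  # skip training checkpoints and full-precision weights
+ )
+ print(local_dir)  # point the server config's prompt/template paths at this folder
+ ```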

+ Stop sequence == "\*\*Finished.\*\*"

+ Why did I do it like that? Because the more the SFT text resembles the pretraining text, the more of the knowledge and capabilities from pretraining carry over to the SFT. Convention and ChatML be damned, I like better performance.
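
+ If you call the model directly rather than through Augmentoolkit's servers, a minimal sketch of honoring that stop sequence with plain `transformers` follows; the prompt below is a placeholder, since the real format lives in the included `prompt.txt` / `template.txt`:

+ ```python
+ # Rough sketch, assuming a standard transformers setup. The model ends its turns with
+ # the literal text **Finished.** instead of a ChatML-style special token.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "Heralax/test-model-4-sft"  # hub_model_id from the config above
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ prompt = "Human: What does FM 7-0 cover?\n\n**AI:**"  # placeholder; build the real prompt from template.txt
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+ out = model.generate(
+     **inputs,
+     max_new_tokens=512,
+     stop_strings=["**Finished.**"],  # recent transformers can stop generation on a raw string
+     tokenizer=tokenizer,             # generate() needs the tokenizer when stop_strings is set
+ )
+ completion = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
+ print(completion.split("**Finished.**")[0].strip())
+ ```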

+ Related Links:
+ - [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit)
+ - [gRPo model (thoughts)](https://huggingface.co/Heralax/llama-gRPo-thoughtprocess)
+ - [gRPo model (no thoughts)](https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts)
 
+ Q: Why the Llama license?

+ A: The quickstart uses Llama 3 to generate the data for the sake of speed and hardware compatibility. Therefore, the Llama license applies to this demo model.