---
library_name: transformers
license: llama3.1
base_model: Heralax/test-model-4-pretrain
tags:
- axolotl
- generated_from_trainer
datasets:
- axolotl_rag_conversations_facts.jsonl
- axolotl_correction_conversations_facts.json
- pretraining_subset_2170418.jsonl
- factual_sft_completion/combined_all_0.jsonl
- factual_sft_completion/combined_all_1.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl
- generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl
model-index:
- name: test-model-4-sft
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<details><summary>See axolotl config</summary>
  
```yaml
base_model: Heralax/test-model-4-pretrain
tokenizer_type: AutoTokenizer
model_type: AutoModelForCausalLM
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: axolotl_rag_conversations_facts.jsonl
  type: input_output
- path: axolotl_correction_conversations_facts.json
  type: input_output
- path: pretraining_subset_2170418.jsonl
  type: completion
- path: factual_sft_completion/combined_all_0.jsonl
  type: completion
- path: factual_sft_completion/combined_all_1.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl
  type: completion
dataset_prepared_path: last_finetune_prepared
output_dir: ./finetune-model-output
seed: 1337
sequence_len: 5000
sample_packing: true
pad_to_sequence_len: false
shuffle_merged_datasets: true
gradient_accumulation_steps: 75
micro_batch_size: 2
eval_batch_size: 4
num_epochs: 5
optimizer: paged_adamw_8bit
lr_scheduler: constant
learning_rate: 2.0e-05
noisy_embedding_alpha: 5
weight_decay: 0
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
logging_steps: 1
xformers_attention: false
flash_attention: true
chat_template: chatml
auto_resume_from_checkpoints: false
warmup_ratio: 0.1
evals_per_epoch: 1
val_set_size: 0.04
saves_per_epoch: 1
eval_sample_packing: false
save_total_limit: 2
special_tokens:
  pad_token: <unk>
use_liger_kernel: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
sequence_length: 10000
wandb_project: test-project
wandb_entity: ''
wandb_watch: ''
wandb_run_id: ''
wandb_log_model: ''
hub_model_id: Heralax/test-model-4-sft
hub_strategy: all_checkpoints
```
</details><br>

# llama-Augmentoolkit-Quickstart-Factual-Demo-Example

This model achieves the following results on the evaluation set:
- Loss: 0.6876

(See? Number go down. Augmentoolkit works.)

This is a demo model produced by running through the quickstart of [Augmentoolkit's](https://github.com/e-p-armstrong/augmentoolkit) Factual Finetuning pipeline. The model was taught about some of the US Army Field Manuals.

The model was trained on the following manuals:
```
ARN14613_FM 1-05 FINAL WEB.pdf.txt		ARN19639_FM 3-14 FINAL WEB.pdf.txt		ARN31505-FM_3-96-000-WEB-1.pdf.txt		ARN34470-FM_6-99-000-WEB-1.pdf.txt		ARN35577-FM_3-55-000-WEB-0.pdf.txt
ARN15310-FM_3-13.4-000-WEB-2.pdf.txt		ARN21797_FM_3-04_FINAL_WEB_wfix.pdf.txt		ARN33094-FM_3-57-000-WEB-1.pdf.txt		ARN34770-FM_3-94-000-WEB-1.pdf.txt		ARN35791-FM_4-02-001-WEB-3.pdf.txt
ARN17082-FM_3-11-000-WEB-1.pdf.txt		ARN30964-FM_7-22-001-WEB-4.pdf.txt		ARN33127-FM_3-12-000-WEB-1.pdf.txt		ARN34864-FM_3-61-000-WEB-1.pdf.txt		ARN35838-FM_3-01.44-000-WEB-1.pdf.txt
ARN19185_FM 6-02_FINAL_WEB.pdf.txt		ARN31339-FM_3-01-000-WEB-1.pdf.txt		ARN33331-FM_1-0-000-WEB-1.pdf.txt		ARN35076-FM_7-0-000-WEB-1.pdf.txt		ARN36290-FM_3-0-000-WEB-2.pdf.txt
ARN19354_FM 6-27 _C1_FINAL_WEB_v2.pdf.txt	ARN31353-FM_3-34-000-WEB-1.pdf.txt		ARN34192-FM_3-81-000-WEB-1.pdf.txt		ARN35404-FM_6-0-000-WEB-1.pdf.txt		ARN36735-FM_6-22-000-WEB-1.pdf.txt
```

The `prompt.txt`, `template.txt`, RAG dataset, and GGUF file are all included in this repo so that people can run this model themselves using Augmentoolkit's chat interface. Just download everything that is not in the checkpoint-xx/ folders (i.e., skip the model.safetensors files), put it all in one folder, and point the basic-server or rag-server config at the prompt, template, etc. (see the documentation pages for those utility pipelines), and bang: Augmentoolkit will run the model with the correct prompt template and configuration.
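If you'd rather script that download step, here's a minimal sketch using `huggingface_hub.snapshot_download`; the repo id and output folder below are assumptions, so adjust them to match where you want the files:

```python
# Minimal sketch: fetch the prompt/template/RAG/GGUF files for local use,
# skipping the checkpoint-xx/ folders and the model.safetensors shards.
# repo_id and local_dir are assumptions; change them as needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Heralax/test-model-4-sft",               # this repo
    ignore_patterns=["checkpoint-*", "*.safetensors"],  # skip checkpoints and shards
    local_dir="./augmentoolkit-demo-model",
)
print(f"Files downloaded to: {local_dir}")
```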

Stop sequence: `**Finished.**`

Why did I do it like that? Because the more the SFT text resembles the pretraining text, the more of the knowledge and capabilities from pretraining carry over to the SFT. Convention and ChatML be damned, I like better performance.
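If you run the GGUF locally with llama-cpp-python instead of Augmentoolkit's servers, you have to pass that stop string yourself. A rough sketch follows; the GGUF filename and the completion-style prompt are placeholders, since the real prompt format lives in this repo's `prompt.txt`/`template.txt`:

```python
# Rough sketch of serving the GGUF with llama-cpp-python and the repo's stop string.
# The model filename and prompt text are placeholder assumptions; the actual prompt
# format comes from prompt.txt / template.txt in this repo.
from llama_cpp import Llama

llm = Llama(model_path="./augmentoolkit-demo-model/model.gguf", n_ctx=5000)

output = llm(
    "Human: What is the purpose of FM 3-0?\nAI:",  # placeholder prompt
    max_tokens=512,
    stop=["**Finished.**"],  # the stop sequence this model was trained to emit
)
print(output["choices"][0]["text"])
```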

Related Links:
- [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit)
- [Other Factual Demo Model (Nursing)](https://huggingface.co/Heralax/llama-Augmentoolkit-Openstax-Nursing-Books-Example)
- [Not-Undertrained Factual Model](https://huggingface.co/Heralax/llama-Augmentoolkit-MilitaryModel-Demo-NotUndertrained)
- [gRPo model (thoughts)](https://huggingface.co/Heralax/llama-gRPo-thoughtprocess)
- [gRPo model (no thoughts)](https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts)

Q: Why the Llama license?

A: The quickstart uses Llama 3 to generate the data for the sake of speed and hardware compatibility. Therefore, the Llama license applies to this demo model.

Example (no RAG btw):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/oliUoD4Oz1abZ5H8WJMTO.png)