ryanmarten committed · Commit 2ca9258 · verified · 1 Parent(s): d2d38c3

Model save

Files changed (1): README.md (+7 -8)
README.md CHANGED
```diff
@@ -1,10 +1,9 @@
 ---
 library_name: transformers
-license: other
+license: apache-2.0
 base_model: Qwen/Qwen2.5-7B-Instruct
 tags:
 - llama-factory
-- full
 - generated_from_trainer
 model-index:
 - name: am_100k
@@ -16,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # am_100k
 
-This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/am_100k dataset.
+This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
 
 ## Model description
 
@@ -40,10 +39,10 @@ The following hyperparameters were used during training:
 - eval_batch_size: 8
 - seed: 42
 - distributed_type: multi-GPU
-- num_devices: 16
-- gradient_accumulation_steps: 32
+- num_devices: 32
+- gradient_accumulation_steps: 16
 - total_train_batch_size: 512
-- total_eval_batch_size: 128
+- total_eval_batch_size: 256
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
@@ -56,6 +55,6 @@ The following hyperparameters were used during training:
 ### Framework versions
 
 - Transformers 4.46.1
-- Pytorch 2.5.0a0+b465a5843b.nv24.09
-- Datasets 3.5.0
+- Pytorch 2.5.1
+- Datasets 3.1.0
 - Tokenizers 0.20.3
```
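
The hyperparameter hunk is internally consistent: the old configuration (16 devices × 32 gradient-accumulation steps) and the new one (32 devices × 16 steps) both yield the unchanged effective train batch size of 512, while the eval batch size scales with device count alone. A minimal sanity check of that arithmetic, assuming a per-device train batch size of 1 (inferred from 512 / (32 × 16); it is not shown in this hunk):

```python
# Sanity check of the effective batch sizes in the updated card.
# per_device_train_batch_size = 1 is an inference (512 / (32 * 16)),
# not a value shown in this diff.
per_device_train_batch_size = 1
per_device_eval_batch_size = 8        # "eval_batch_size" in the card
num_devices = 32                      # new value (was 16)
gradient_accumulation_steps = 16      # new value (was 32)

total_train_batch_size = (
    per_device_train_batch_size * num_devices * gradient_accumulation_steps
)
# Evaluation performs no gradient accumulation, so only device count scales it.
total_eval_batch_size = per_device_eval_batch_size * num_devices

assert total_train_batch_size == 512  # matches the unchanged card value
assert total_eval_batch_size == 256   # matches the new card value (was 128)
```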
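And a minimal sketch of loading the resulting checkpoint with the Transformers version pinned above; the repo id is an assumption based on the model name am_100k and the mlfoundations-dev dataset path in the old card text, and may differ from the actual Hub path:

```python
# Minimal sketch: load and query the fine-tuned model with transformers.
# The repo id below is an assumption; substitute the real Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/am_100k"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Qwen2.5 checkpoints ship a chat template, so format the prompt through it.
messages = [{"role": "user", "content": "What is gradient accumulation?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```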