neginr committed
Commit 42e831e · verified · 1 Parent(s): 9fbde8c

Model save

Files changed (1): README.md (+4 −5)
README.md CHANGED

@@ -4,7 +4,6 @@ license: apache-2.0
 base_model: Qwen/Qwen2.5-7B-Instruct
 tags:
 - llama-factory
-- full
 - generated_from_trainer
 model-index:
 - name: e1_code_fasttext_phi
@@ -16,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # e1_code_fasttext_phi
 
-This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_code_fasttext_phi dataset.
+This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
 
 ## Model description
 
@@ -40,10 +39,10 @@ The following hyperparameters were used during training:
 - eval_batch_size: 8
 - seed: 42
 - distributed_type: multi-GPU
-- num_devices: 16
-- gradient_accumulation_steps: 8
+- num_devices: 32
+- gradient_accumulation_steps: 4
 - total_train_batch_size: 128
-- total_eval_batch_size: 128
+- total_eval_batch_size: 256
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
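
The hyperparameter edits are internally consistent: total_train_batch_size = per-device train batch size × num_devices × gradient_accumulation_steps, and total_eval_batch_size = eval_batch_size × num_devices. Doubling the device count from 16 to 32 while halving the accumulation steps from 8 to 4 therefore leaves the effective train batch at 128 and doubles the eval batch to 256. A minimal sketch of that arithmetic (the per-device train batch size of 1 is an assumption, inferred from the totals rather than shown in the diff):

```python
# Minimal sketch (not from the commit) checking the batch-size arithmetic
# implied by this diff. Assumption: per_device_train_batch_size = 1, which
# is not visible in the hunk but is the only value consistent with the totals.

def totals(per_device_train: int, per_device_eval: int,
           num_devices: int, grad_accum: int) -> tuple[int, int]:
    """Effective train/eval batch sizes for multi-GPU training."""
    total_train = per_device_train * num_devices * grad_accum
    total_eval = per_device_eval * num_devices
    return total_train, total_eval

# Before: 16 devices x 8 accumulation steps -> train 128, eval 128.
assert totals(1, 8, 16, 8) == (128, 128)

# After: 32 devices x 4 accumulation steps -> train still 128, eval 256.
assert totals(1, 8, 32, 4) == (128, 256)
```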