Dataset Viewer
| timestamp (string) | end_timestamp (string) | stage_name (string) | stage_number (int64) | level (string) | message (string) | stdout_content (string) | stderr_content (string) | experiment_name (string) | elapsed_time_seconds (float64) | stage_complete (bool) |
|---|---|---|---|---|---|---|---|---|---|---|
| 2025-08-11T12:37:22.171124 | 2025-08-11T12:38:26.848369 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft |

stdout_content:
[INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[INFO] Registered dataset: TAUR_dev__D_SFT_C_cd3arg_Qwen2_5_1_5B_Instruct_AnsRev_think -> TAUR-dev/D-SFT_C-cd3arg-Qwen2.5-1.5B-Instruct-AnsRev-think (format: sharegpt)
[INFO] Created training config: /datastor1/mwadhwa/tmp/sf/llamafactory/configs/training_config.yaml
[INFO] Created merge config: /datastor1/mwadhwa/tmp/sf/llamafactory/configs/merge_config.yaml
[INFO] Starting LLaMAFactory training...
[INFO] Running command: /datastor1/mwadhwa/anaconda3/envs/sf_conda/bin/python -m torch.distributed.run --nproc-per-node 8 --nnodes 1 --master_port 25678 /datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py /datastor1/mwadhwa/tmp/sf/llamafactory/configs/training_config.yaml
[DEBUG] Loaded 1 existing entries from metadata
[DEBUG] Successfully appended 1 entries to metadata (total: 2)
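The registration and config lines above name the inputs to this stage but do not reproduce the files themselves. As a rough illustration only, here is a minimal sketch of the kind of `dataset_info.json` entry that the "Registered dataset" line implies, assuming LLaMA-Factory's usual `hf_hub_url`/`formatting` keys; the actual configs created at the paths above are not included in this log.

```python
# Sketch only: this entry is inferred from the "Registered dataset" log line above
# and LLaMA-Factory's dataset_info.json conventions; the real files used by the run
# are not shown in this log.
import json

dataset_info = {
    "TAUR_dev__D_SFT_C_cd3arg_Qwen2_5_1_5B_Instruct_AnsRev_think": {
        "hf_hub_url": "TAUR-dev/D-SFT_C-cd3arg-Qwen2.5-1.5B-Instruct-AnsRev-think",
        "formatting": "sharegpt",
    }
}
print(json.dumps(dataset_info, indent=2))
```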
Starting training with real-time output...
================================================================================
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[WARNING|2025-08-11 12:37:39] llamafactory.extras.misc:154 >> Version checking has been disabled, may lead to unexpected behaviors.
[2025-08-11 12:37:40,837] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-11 12:37:40,837] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-11 12:37:40,837] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-11 12:37:40,837] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-11 12:37:40,838] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-11 12:37:40,839] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-11 12:37:40,911] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /datastor1/mwadhwa/triton_cache, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /datastor1/mwadhwa/triton_cache, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /datastor1/mwadhwa/triton_cache, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /datastor1/mwadhwa/triton_cache, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /datastor1/mwadhwa/triton_cache, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /datastor1/mwadhwa/triton_cache, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /datastor1/mwadhwa/triton_cache, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-08-11 12:37:45,066] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /datastor1/mwadhwa/triton_cache, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Registered source: longmult
Registered source: longmult
Registered source: longmult
Registered source: longmult
Registered source: longmult
Registered source: longmult
Registered source: longmult
Registered source: countdown
Registered source: countdown
Registered source: countdown
Registered source: countdown
Registered source: countdown
Registered source: countdown
Registered source: countdown
Registered source: gsm8k
Registered source: gsm8k
Registered source: gsm8k
Registered source: gsm8k
Registered source: gsm8k
Registered source: gsm8k
Registered source: gsm8k
Registered source: arc
Registered source: arc
Registered source: arc_challenge
Registered source: arc_easy
Registered source: arc_challenge
Registered source: arc_easy
Registered source: arc
Registered source: arc
Registered source: arc
Registered source: arc_challenge
Registered source: arc_challenge
Registered source: arc_challenge
Registered source: arc_easy
Registered source: arc_easy
Registered source: arc_easy
Registered source: arc
Registered source: arc_challenge
Registered source: arc_easy
Registered source: arc
Registered source: arc_challenge
Registered source: arc_easy
Registered source: piqa
Registered source: piqa
Registered source: piqa
Registered source: piqa
Registered source: piqa
Registered source: piqa
Registered source: piqa
Registered source: mmlu
Registered source: mmlu
Registered source: mmlu
Registered source: mmlu
Registered source: mmlu
Registered source: mmlu
Registered source: mmlu
Registered source: mmlu_pro
Registered source: mmlu_pro
Registered source: mmlu_pro
Registered source: mmlu_pro
Registered source: mmlu_pro
Registered source: mmlu_pro
Registered source: mmlu_pro
Registered source: csqa
Registered source: csqa
Registered source: csqa
Registered source: csqa
Registered source: csqa
Registered source: csqa
Registered source: csqa
Registered source: social_iqa
Registered source: social_iqa
Registered source: social_iqa
Registered source: social_iqa
Registered source: social_iqa
Registered source: social_iqa
Registered source: social_iqa
Registered source: strategy_qa
Registered source: strategy_qa
Registered source: strategy_qa
Registered source: strategy_qa
Registered source: strategy_qa
Registered source: strategy_qa
Registered source: strategy_qa
Registered source: winogrande
Registered source: winogrande
Registered source: winogrande
Registered source: winogrande
Registered source: winogrande
Registered source: winogrande
Registered source: winogrande
Registered source: bbh
Registered source: bbh
Registered source: bbh
Registered source: bbh
Registered source: bbh
Registered source: bbh
Registered source: bbh
[2025-08-11 12:37:46,530] [INFO] [comm.py:669:init_distributed] cdb=None
[2025-08-11 12:37:46,531] [INFO] [comm.py:669:init_distributed] cdb=None
[2025-08-11 12:37:46,531] [INFO] [comm.py:700:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2025-08-11 12:37:46,531] [INFO] [comm.py:669:init_distributed] cdb=None
[2025-08-11 12:37:46,532] [INFO] [comm.py:669:init_distributed] cdb=None
[2025-08-11 12:37:46,532] [INFO] [comm.py:669:init_distributed] cdb=None
[2025-08-11 12:37:46,534] [INFO] [comm.py:669:init_distributed] cdb=None
[2025-08-11 12:37:46,535] [INFO] [comm.py:669:init_distributed] cdb=None
[INFO|2025-08-11 12:37:48] llamafactory.hparams.parser:406 >> Process rank: 0, world size: 8, device: cuda:0, distributed training: True, compute dtype: torch.bfloat16
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:48,655 >> loading file vocab.json from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/vocab.json
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:48,655 >> loading file merges.txt from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/merges.txt
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:48,655 >> loading file tokenizer.json from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/tokenizer.json
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:48,655 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:48,655 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:48,655 >> loading file tokenizer_config.json from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/tokenizer_config.json
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:48,655 >> loading file chat_template.jinja from cache at None
Registered source: longmult
Registered source: countdown
Registered source: gsm8k
Registered source: arc
Registered source: arc_challenge
Registered source: arc_easy
Registered source: piqa
Registered source: mmlu
Registered source: mmlu_pro
Registered source: csqa
Registered source: social_iqa
Registered source: strategy_qa
Registered source: winogrande
Registered source: bbh
[INFO|tokenization_utils_base.py:2299] 2025-08-11 12:37:48,891 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|2025-08-11 12:37:48] llamafactory.hparams.parser:406 >> Process rank: 2, world size: 8, device: cuda:2, distributed training: True, compute dtype: torch.bfloat16
[INFO|2025-08-11 12:37:49] llamafactory.hparams.parser:406 >> Process rank: 7, world size: 8, device: cuda:7, distributed training: True, compute dtype: torch.bfloat16
[INFO|2025-08-11 12:37:49] llamafactory.hparams.parser:406 >> Process rank: 5, world size: 8, device: cuda:5, distributed training: True, compute dtype: torch.bfloat16
[INFO|2025-08-11 12:37:49] llamafactory.hparams.parser:406 >> Process rank: 6, world size: 8, device: cuda:6, distributed training: True, compute dtype: torch.bfloat16
[INFO|2025-08-11 12:37:49] llamafactory.hparams.parser:406 >> Process rank: 3, world size: 8, device: cuda:3, distributed training: True, compute dtype: torch.bfloat16
[INFO|2025-08-11 12:37:49] llamafactory.hparams.parser:406 >> Process rank: 4, world size: 8, device: cuda:4, distributed training: True, compute dtype: torch.bfloat16
[INFO|configuration_utils.py:698] 2025-08-11 12:37:49,258 >> loading configuration file config.json from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/config.json
[INFO|configuration_utils.py:770] 2025-08-11 12:37:49,261 >> Model config Qwen2Config {
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 1536,
"initializer_range": 0.02,
"intermediate_size": 8960,
"max_position_embeddings": 32768,
"max_window_layers": 21,
"model_type": "qwen2",
"num_attention_heads": 12,
"num_hidden_layers": 28,
"num_key_value_heads": 2,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": true,
"torch_dtype": "bfloat16",
"transformers_version": "4.52.3",
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 151936
}
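For reference, the parameter count implied by the config above can be checked by hand; the sketch below is not part of the log, but it reproduces the "trainable params: 1,543,714,304" figure reported later by llamafactory.model.loader, assuming tied input/output embeddings as stated by `tie_word_embeddings`.

```python
# Sketch (not from the log): derive the Qwen2.5-1.5B-Instruct parameter count from
# the config printed above. Matches the trainable-params line reported later
# (full fine-tuning, so 100% of parameters are trainable).
hidden, inter, layers = 1536, 8960, 28     # hidden_size, intermediate_size, num_hidden_layers
heads, kv_heads, vocab = 12, 2, 151936     # attention heads, KV heads (GQA), vocab_size
head_dim = hidden // heads                 # 128
kv_dim = kv_heads * head_dim               # 256

attn = hidden * hidden + hidden            # q_proj weight + bias
attn += 2 * (hidden * kv_dim + kv_dim)     # k_proj, v_proj weights + biases
attn += hidden * hidden                    # o_proj (no bias)
mlp = 3 * hidden * inter                   # gate_proj, up_proj, down_proj (no bias)
norms = 2 * hidden                         # input / post-attention RMSNorm weights

total = layers * (attn + mlp + norms) + vocab * hidden + hidden  # + embeddings (tied lm_head) + final norm
print(f"{total:,}")                        # 1,543,714,304
```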
[2025-08-11 12:37:49,321] [INFO] [comm.py:669:init_distributed] cdb=None
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:49,416 >> loading file vocab.json from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/vocab.json
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:49,416 >> loading file merges.txt from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/merges.txt
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:49,416 >> loading file tokenizer.json from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/tokenizer.json
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:49,416 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:49,416 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:49,416 >> loading file tokenizer_config.json from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/tokenizer_config.json
[INFO|tokenization_utils_base.py:2023] 2025-08-11 12:37:49,416 >> loading file chat_template.jinja from cache at None
[INFO|tokenization_utils_base.py:2299] 2025-08-11 12:37:49,635 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|2025-08-11 12:37:49] llamafactory.data.loader:143 >> Loading dataset TAUR-dev/D-SFT_C-cd3arg-Qwen2.5-1.5B-Instruct-AnsRev-think...
[INFO|2025-08-11 12:37:50] llamafactory.hparams.parser:406 >> Process rank: 1, world size: 8, device: cuda:1, distributed training: True, compute dtype: torch.bfloat16
[rank3]:[W811 12:37:50.296394336 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank5]:[W811 12:37:50.296396886 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank2]:[W811 12:37:50.296636395 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank7]:[W811 12:37:50.296658305 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
Converting format of dataset: 0%| | 0/50 [00:00<?, ? examples/s]
Converting format of dataset: 100%|██████████| 50/50 [00:00<00:00, 2371.32 examples/s]
[rank6]:[W811 12:37:50.601503213 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank4]:[W811 12:37:50.601750832 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank1]:[W811 12:37:51.387688719 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank0]:[W811 12:37:51.647501133 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
Running tokenizer on dataset: 0%| | 0/50 [00:00<?, ? examples/s]
Running tokenizer on dataset: 100%|██████████| 50/50 [00:00<00:00, 394.77 examples/s]
Running tokenizer on dataset: 100%|██████████| 50/50 [00:00<00:00, 312.29 examples/s]
training example:
input_ids:
[151644, 8948, 198, 2610, 525, 1207, 16948, 11, 3465, 553, 54364, 14817, 13, 1446, 525, 264, 10950, 17847, 13, 151645, 198, 151644, 872, 198, 16429, 279, 5109, 508, 24, 23, 11, 220, 17, 23, 11, 220, 17, 15, 1125, 1855, 458, 23606, 429, 16819, 220, 21, 20, 23, 13, 1446, 646, 990, 6770, 34784, 7525, 17973, 11, 85922, 11777, 608, 8, 323, 1817, 1372, 646, 1172, 387, 1483, 3055, 13, 6928, 697, 975, 1573, 498, 4226, 13, 1597, 470, 279, 1590, 4226, 304, 366, 9217, 29, 690, 9217, 29, 9492, 11, 369, 3110, 1447, 27, 9217, 397, 8304, 220, 16, 25, 220, 16, 488, 220, 17, 284, 220, 18, 198, 8304, 220, 17, 25, 220, 18, 608, 220, 18, 284, 220, 16, 198, 522, 9217, 94367, 10061, 594, 1744, 3019, 553, 3019, 151645, 198, 151644, 77091, 198, 13708, 766, 397, 1249, 11625, 419, 3491, 11, 582, 1184, 311, 15963, 279, 2661, 5109, 508, 24, 23, 11, 220, 17, 23, 11, 220, 17, 15, 60, 1667, 6770, 34784, 7525, 17973, 11, 85922, 11777, 608, 8, 773, 429, 279, 1102, 16819, 220, 21, 20, 23, 13, 4710, 5338, 11, 1077, 594, 1430, 7842, 678, 2326, 5109, 3786, 510, 8304, 220, 16, 25, 220, 24, 23, 488, 220, 17, 23, 488, 220, 17, 15, 284, 220, 16, 19, 21, 271, 1986, 2629, 374, 2238, 3347, 11, 773, 582, 1184, 311, 5263, 432, 4623, 13, 6771, 594, 2908, 84192, 825, 315, 279, 5109, 553, 2441, 1372, 323, 1221, 912, 476, 32256, 279, 4843, 1372, 382, 8304, 220, 17, 25, 220, 24, 23, 353, 320, 17, 23, 488, 220, 17, 15, 8, 284, 220, 24, 23, 353, 220, 19, 23, 284, 220, 19, 22, 15, 19, 271, 1986, 374, 2058, 537, 3265, 3322, 13, 6771, 594, 1430, 49702, 825, 315, 279, 5109, 553, 2441, 323, 1490, 421, 582, 633, 12128, 311, 220, 21, 20, 23, 382, 8304, 220, 18, 25, 220, 24, 23, 481, 320, 17, 23, 353, 220, 17, 15, 8, 284, 220, 24, 23, 481, 220, 20, 21, 15, 284, 481, 19, 21, 17, 271, 1986, 374, 1083, 537, 3265, 13, 6771, 594, 1430, 34171, 1378, 315, 279, 5109, 304, 2155, 5510, 311, 633, 12128, 311, 220, 21, 20, 23, 382, 8304, 220, 19, 25, 220, 24, 23, 353, 220, 17, 23, 284, 220, 17, 22, 19, 19, 271, 7039, 11, 1077, 594, 32256, 419, 504, 220, 21, 20, 23, 510, 8304, 220, 20, 25, 220, 21, 20, 23, 481, 220, 17, 22, 19, 19, 284, 481, 17, 15, 23, 21, 271, 1986, 3171, 944, 975, 2987, 13, 6771, 594, 1430, 75183, 279, 1973, 315, 46444, 323, 5256, 37885, 26425, 311, 1490, 421, 582, 646, 1477, 264, 6291, 382, 8304, 220, 21, 25, 320, 24, 23, 353, 220, 17, 23, 8, 481, 220, 17, 15, 284, 220, 17, 22, 19, 19, 481, 220, 17, 15, 284, 220, 17, 22, 17, 19, 271, 23949, 11, 1077, 594, 21749, 419, 1102, 553, 279, 1537, 1372, 510, 8304, 220, 22, 25, 220, 17, 22, 17, 19, 608, 220, 17, 15, 284, 220, 16, 18, 21, 13, 17, 271, 1986, 6696, 601, 264, 1102, 1602, 3265, 311, 1039, 2169, 315, 220, 21, 20, 23, 13, 1205, 646, 4778, 432, 705, 10078, 311, 2432, 279, 4734, 897, 382, 19357, 21806, 510, 27, 9217, 397, 8304, 220, 16, 25, 220, 24, 23, 353, 220, 17, 23, 284, 220, 17, 22, 19, 19, 198, 8304, 220, 17, 25, 220, 17, 22, 19, 19, 481, 220, 17, 15, 284, 220, 17, 22, 17, 19, 198, 8304, 220, 18, 25, 220, 17, 22, 17, 19, 608, 220, 17, 15, 284, 220, 16, 18, 21, 13, 17, 198, 522, 9217, 1339, 64258, 2372, 25, 576, 1465, 10017, 304, 14822, 220, 22, 11, 1380, 279, 7493, 572, 43347, 17779, 4518, 315, 54916, 13, 576, 4396, 1590, 4226, 1265, 387, 12180, 553, 84192, 220, 17, 22, 17, 19, 553, 220, 17, 15, 11, 892, 1035, 2968, 279, 12685, 1102, 315, 220, 21, 20, 23, 382, 40, 3278, 1430, 1549, 311, 2621, 279, 42565, 382, 2014, 11625, 279, 3491, 11, 582, 1184, 311, 990, 279, 5109, 508, 24, 23, 11, 220, 17, 23, 11, 220, 17, 15, 60, 448, 6770, 34784, 7525, 17973, 11, 85922, 11777, 608, 8, 323, 1817, 
1372, 1969, 387, 1483, 6896, 3055, 311, 6144, 220, 21, 20, 23, 13, 5692, 748, 1246, 358, 4249, 7274, 911, 21828, 432, 1447, 16, 13, 5145, 448, 279, 7772, 1372, 11, 220, 24, 23, 11, 1576, 432, 702, 264, 5089, 5421, 389, 279, 2790, 13, 715, 17, 13, 30370, 476, 32256, 287, 9155, 5109, 504, 220, 24, 23, 2578, 1492, 5545, 279, 2169, 315, 220, 21, 20, 23, 13, 4354, 11, 2474, 582, 614, 7199, 7525, 11, 46444, 4977, 803, 25383, 1091, 12804, 518, 419, 6430, 624, 18, 13, 58712, 6711, 220, 17, 23, 553, 220, 17, 15, 5961, 6696, 601, 264, 1550, 1372, 11, 714, 582, 1184, 311, 7500, 432, 311, 4946, 1039, 2169, 13, 8765, 6577, 220, 17, 22, 17, 19, 553, 220, 17, 15, 374, 264, 13276, 5486, 311, 7949, 279, 3460, 1372, 11941, 624, 19, 13, 1416, 582, 30270, 220, 17, 22, 17, 19, 553, 220, 17, 15, 11, 582, 633, 220, 20, 19, 11, 19, 23, 15, 11, 892, 374, 3041, 3403, 1039, 2169, 315, 220, 21, 20, 23, 13, 15277, 11, 582, 1184, 311, 1477, 264, 1616, 311, 7500, 1039, 2856, 28117, 624, 20, 13, 12090, 315, 1101, 84192, 11, 1077, 594, 2908, 14719, 1495, 279, 5666, 1119, 7354, 13, 1752, 2867, 11, 5916, 448, 220, 17, 23, 11, 582, 1410, 912, 220, 17, 15, 311, 432, 10917, 311, 633, 12128, 311, 1039, 2169, 2041, 47905, 432, 13, 1096, 1035, 21058, 1045, 32974, 311, 5978, 582, 1513, 1405, 65471, 1905, 624, 21, 13, 4636, 3807, 19080, 11, 358, 13083, 429, 264, 2118, 46444, 315, 220, 17, 23, 553, 220, 17, 15, 323, 7842, 220, 24, 23, 1558, 537, 7540, 279, 4734, 2169, 13, 4354, 11, 279, 18093, 10601, 358, 1730, 374, 553, 7842, 220, 17, 23, 311, 5086, 4236, 3039, 11, 892, 6696, 601, 220, 16, 19, 15, 11, 323, 1221, 7842, 220, 24, 23, 11, 12942, 304, 220, 17, 18, 23, 13, 5005, 11, 32256, 287, 220, 17, 15, 504, 220, 17, 18, 23, 6696, 601, 220, 17, 16, 23, 13, 17375, 11, 7842, 220, 17, 15, 311, 220, 17, 16, 23, 6696, 601, 220, 17, 18, 23, 13, 4710, 54815, 11, 1283, 12831, 5257, 27459, 323, 42368, 1105, 3118, 389, 279, 2169, 11, 358, 4411, 279, 1590, 4226, 374, 16994, 1526, 279, 2701, 8500, 315, 7525, 1447, 12, 5512, 11, 220, 17, 23, 488, 220, 17, 23, 488, 220, 17, 23, 488, 220, 17, 23, 488, 220, 17, 23, 284, 220, 16, 19, 15, 198, 12, 9295, 11, 220, 16, 19, 15, 488, 220, 24, 23, 284, 220, 17, 18, 23, 198, 12, 70709, 11, 220, 17, 18, 23, 481, 220, 17, 15, 284, 220, 17, 16, 23, 271, 11209, 11, 5193, 312, 74014, 11, 279, 5042, 21937, 17601, 84192, 220, 17, 23, 553, 220, 17, 15, 323, 1221, 7842, 220, 24, 23, 11, 8110, 553, 32256, 287, 220, 17, 15, 11, 892, 6696, 601, 220, 21, 20, 23, 382, 19357, 21806, 510, 27, 9217, 397, 8304, 220, 16, 25, 220, 17, 23, 353, 220, 17, 15, 284, 220, 20, 21, 15, 198, 8304, 220, 17, 25, 220, 20, 21, 15, 488, 220, 24, 23, 284, 220, 21, 20, 23, 198, 522, 9217, 397, 522, 26865, 1339, 785, 1590, 4226, 374, 1447, 27, 9217, 397, 17, 15, 9, 17, 23, 488, 220, 24, 23, 198, 522, 9217, 29, 151645, 198]
inputs:
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
Using the numbers [98, 28, 20], create an equation that equals 658. You can use basic arithmetic operations (+, -, *, /) and each number can only be used once. Show your work before you answer. And return the final answer in <answer> </answer> tags, for example:
<answer>
Step 1: 1 + 2 = 3
Step 2: 3 / 3 = 1
</answer>.
Let's think step by step<|im_end|>
<|im_start|>assistant
<think>
To solve this problem, we need to combine the given numbers [98, 28, 20] using basic arithmetic operations (+, -, *, /) so that the result equals 658.
First, let's try adding all three numbers together:
Step 1: 98 + 28 + 20 = 146
This sum is too low, so we need to increase it further. Let's consider multiplying one of the numbers by another number and then add or subtract the third number.
Step 2: 98 * (28 + 20) = 98 * 48 = 4704
This is still not close enough. Let's try dividing one of the numbers by another and see if we get closer to 658.
Step 3: 98 - (28 * 20) = 98 - 560 = -462
This is also not close. Let's try combining two of the numbers in different ways to get closer to 658.
Step 4: 98 * 28 = 2744
Now, let's subtract this from 658:
Step 5: 658 - 2744 = -2086
This doesn't work either. Let's try reversing the order of multiplication and addition/subtraction to see if we can find a solution.
Step 6: (98 * 28) - 20 = 2744 - 20 = 2724
Finally, let's divide this result by the last number:
Step 7: 2724 / 20 = 136.2
This gives us a result very close to our target of 658. We can round it up slightly to match the exact value.
Final Answer:
<answer>
Step 1: 98 * 28 = 2744
Step 2: 2744 - 20 = 2724
Step 3: 2724 / 20 = 136.2
</answer>
Critique: The error occurred in Step 7, where the expression was incorrectly divided instead of multiplied. The correct final answer should be obtained by multiplying 2724 by 20, which would give the desired result of 658.
I'll try again to address the critique.
To solve the problem, we need to use the numbers [98, 28, 20] with basic arithmetic operations (+, -, *, /) and each number must be used exactly once to equal 658. Here’s how I’m thinking about solving it:
1. Start with the largest number, 98, because it has a significant impact on the total.
2. Adding or subtracting smaller numbers from 98 might help reach the target of 658. However, since we have limited operations, multiplication seems more promising than division at this stage.
3. Multiplying 28 by 20 directly gives us a high number, but we need to adjust it to fit our target. Dividing 2724 by 20 is a reasonable approach to reduce the large number significantly.
4. If we multiply 2724 by 20, we get 54,480, which is far above our target of 658. Therefore, we need to find a way to adjust our initial calculations.
5. Instead of just multiplying, let's consider breaking down the operation into steps. For instance, starting with 28, we could add 20 to it twice to get closer to our target without exceeding it. This would involve some adjustments to ensure we don’t overshoot.
6. After several trials, I realize that a direct multiplication of 28 by 20 and adding 98 does not yield the exact target. However, the closest combination I found is by adding 28 to itself five times, which gives us 140, and then adding 98, resulting in 238. Then, subtracting 20 from 238 gives us 218. Finally, adding 20 to 218 gives us 238.
Therefore, after considering various combinations and adjusting them based on the target, I believe the final answer is achieved through the following sequence of operations:
- First, 28 + 28 + 28 + 28 + 28 = 140
- Next, 140 + 98 = 238
- Lastly, 238 - 20 = 218
However, upon reevaluation, the actual calculation involves multiplying 28 by 20 and then adding 98, followed by subtracting 20, which gives us 658.
Final Answer:
<answer>
Step 1: 28 * 20 = 560
Step 2: 560 + 98 = 658
</answer>
</think>
The final answer is:
<answer>
20*28 + 98
</answer><|im_end|>
label_ids:
[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 13708, 766, 397, 1249, 11625, 419, 3491, 11, 582, 1184, 311, 15963, 279, 2661, 5109, 508, 24, 23, 11, 220, 17, 23, 11, 220, 17, 15, 60, 1667, 6770, 34784, 7525, 17973, 11, 85922, 11777, 608, 8, 773, 429, 279, 1102, 16819, 220, 21, 20, 23, 13, 4710, 5338, 11, 1077, 594, 1430, 7842, 678, 2326, 5109, 3786, 510, 8304, 220, 16, 25, 220, 24, 23, 488, 220, 17, 23, 488, 220, 17, 15, 284, 220, 16, 19, 21, 271, 1986, 2629, 374, 2238, 3347, 11, 773, 582, 1184, 311, 5263, 432, 4623, 13, 6771, 594, 2908, 84192, 825, 315, 279, 5109, 553, 2441, 1372, 323, 1221, 912, 476, 32256, 279, 4843, 1372, 382, 8304, 220, 17, 25, 220, 24, 23, 353, 320, 17, 23, 488, 220, 17, 15, 8, 284, 220, 24, 23, 353, 220, 19, 23, 284, 220, 19, 22, 15, 19, 271, 1986, 374, 2058, 537, 3265, 3322, 13, 6771, 594, 1430, 49702, 825, 315, 279, 5109, 553, 2441, 323, 1490, 421, 582, 633, 12128, 311, 220, 21, 20, 23, 382, 8304, 220, 18, 25, 220, 24, 23, 481, 320, 17, 23, 353, 220, 17, 15, 8, 284, 220, 24, 23, 481, 220, 20, 21, 15, 284, 481, 19, 21, 17, 271, 1986, 374, 1083, 537, 3265, 13, 6771, 594, 1430, 34171, 1378, 315, 279, 5109, 304, 2155, 5510, 311, 633, 12128, 311, 220, 21, 20, 23, 382, 8304, 220, 19, 25, 220, 24, 23, 353, 220, 17, 23, 284, 220, 17, 22, 19, 19, 271, 7039, 11, 1077, 594, 32256, 419, 504, 220, 21, 20, 23, 510, 8304, 220, 20, 25, 220, 21, 20, 23, 481, 220, 17, 22, 19, 19, 284, 481, 17, 15, 23, 21, 271, 1986, 3171, 944, 975, 2987, 13, 6771, 594, 1430, 75183, 279, 1973, 315, 46444, 323, 5256, 37885, 26425, 311, 1490, 421, 582, 646, 1477, 264, 6291, 382, 8304, 220, 21, 25, 320, 24, 23, 353, 220, 17, 23, 8, 481, 220, 17, 15, 284, 220, 17, 22, 19, 19, 481, 220, 17, 15, 284, 220, 17, 22, 17, 19, 271, 23949, 11, 1077, 594, 21749, 419, 1102, 553, 279, 1537, 1372, 510, 8304, 220, 22, 25, 220, 17, 22, 17, 19, 608, 220, 17, 15, 284, 220, 16, 18, 21, 13, 17, 271, 1986, 6696, 601, 264, 1102, 1602, 3265, 311, 1039, 2169, 315, 220, 21, 20, 23, 13, 1205, 646, 4778, 432, 705, 10078, 311, 2432, 279, 4734, 897, 382, 19357, 21806, 510, 27, 9217, 397, 8304, 220, 16, 25, 220, 24, 23, 353, 220, 17, 23, 284, 220, 17, 22, 19, 19, 198, 8304, 220, 17, 25, 220, 17, 22, 19, 19, 481, 220, 17, 15, 284, 220, 17, 22, 17, 19, 198, 8304, 220, 18, 25, 220, 17, 22, 17, 19, 608, 220, 17, 15, 284, 220, 16, 18, 21, 13, 17, 198, 522, 9217, 1339, 64258, 2372, 25, 576, 1465, 10017, 304, 14822, 220, 22, 11, 1380, 279, 7493, 572, 43347, 17779, 4518, 315, 54916, 13, 576, 4396, 1590, 4226, 1265, 387, 12180, 553, 84192, 220, 17, 22, 17, 19, 553, 220, 17, 15, 11, 892, 1035, 2968, 279, 12685, 1102, 315, 220, 21, 20, 23, 382, 40, 3278, 1430, 1549, 311, 2621, 279, 42565, 382, 2014, 11625, 279, 3491, 11, 582, 1184, 311, 990, 279, 5109, 508, 24, 23, 11, 220, 17, 23, 
11, 220, 17, 15, 60, 448, 6770, 34784, 7525, 17973, 11, 85922, 11777, 608, 8, 323, 1817, 1372, 1969, 387, 1483, 6896, 3055, 311, 6144, 220, 21, 20, 23, 13, 5692, 748, 1246, 358, 4249, 7274, 911, 21828, 432, 1447, 16, 13, 5145, 448, 279, 7772, 1372, 11, 220, 24, 23, 11, 1576, 432, 702, 264, 5089, 5421, 389, 279, 2790, 13, 715, 17, 13, 30370, 476, 32256, 287, 9155, 5109, 504, 220, 24, 23, 2578, 1492, 5545, 279, 2169, 315, 220, 21, 20, 23, 13, 4354, 11, 2474, 582, 614, 7199, 7525, 11, 46444, 4977, 803, 25383, 1091, 12804, 518, 419, 6430, 624, 18, 13, 58712, 6711, 220, 17, 23, 553, 220, 17, 15, 5961, 6696, 601, 264, 1550, 1372, 11, 714, 582, 1184, 311, 7500, 432, 311, 4946, 1039, 2169, 13, 8765, 6577, 220, 17, 22, 17, 19, 553, 220, 17, 15, 374, 264, 13276, 5486, 311, 7949, 279, 3460, 1372, 11941, 624, 19, 13, 1416, 582, 30270, 220, 17, 22, 17, 19, 553, 220, 17, 15, 11, 582, 633, 220, 20, 19, 11, 19, 23, 15, 11, 892, 374, 3041, 3403, 1039, 2169, 315, 220, 21, 20, 23, 13, 15277, 11, 582, 1184, 311, 1477, 264, 1616, 311, 7500, 1039, 2856, 28117, 624, 20, 13, 12090, 315, 1101, 84192, 11, 1077, 594, 2908, 14719, 1495, 279, 5666, 1119, 7354, 13, 1752, 2867, 11, 5916, 448, 220, 17, 23, 11, 582, 1410, 912, 220, 17, 15, 311, 432, 10917, 311, 633, 12128, 311, 1039, 2169, 2041, 47905, 432, 13, 1096, 1035, 21058, 1045, 32974, 311, 5978, 582, 1513, 1405, 65471, 1905, 624, 21, 13, 4636, 3807, 19080, 11, 358, 13083, 429, 264, 2118, 46444, 315, 220, 17, 23, 553, 220, 17, 15, 323, 7842, 220, 24, 23, 1558, 537, 7540, 279, 4734, 2169, 13, 4354, 11, 279, 18093, 10601, 358, 1730, 374, 553, 7842, 220, 17, 23, 311, 5086, 4236, 3039, 11, 892, 6696, 601, 220, 16, 19, 15, 11, 323, 1221, 7842, 220, 24, 23, 11, 12942, 304, 220, 17, 18, 23, 13, 5005, 11, 32256, 287, 220, 17, 15, 504, 220, 17, 18, 23, 6696, 601, 220, 17, 16, 23, 13, 17375, 11, 7842, 220, 17, 15, 311, 220, 17, 16, 23, 6696, 601, 220, 17, 18, 23, 13, 4710, 54815, 11, 1283, 12831, 5257, 27459, 323, 42368, 1105, 3118, 389, 279, 2169, 11, 358, 4411, 279, 1590, 4226, 374, 16994, 1526, 279, 2701, 8500, 315, 7525, 1447, 12, 5512, 11, 220, 17, 23, 488, 220, 17, 23, 488, 220, 17, 23, 488, 220, 17, 23, 488, 220, 17, 23, 284, 220, 16, 19, 15, 198, 12, 9295, 11, 220, 16, 19, 15, 488, 220, 24, 23, 284, 220, 17, 18, 23, 198, 12, 70709, 11, 220, 17, 18, 23, 481, 220, 17, 15, 284, 220, 17, 16, 23, 271, 11209, 11, 5193, 312, 74014, 11, 279, 5042, 21937, 17601, 84192, 220, 17, 23, 553, 220, 17, 15, 323, 1221, 7842, 220, 24, 23, 11, 8110, 553, 32256, 287, 220, 17, 15, 11, 892, 6696, 601, 220, 21, 20, 23, 382, 19357, 21806, 510, 27, 9217, 397, 8304, 220, 16, 25, 220, 17, 23, 353, 220, 17, 15, 284, 220, 20, 21, 15, 198, 8304, 220, 17, 25, 220, 20, 21, 15, 488, 220, 24, 23, 284, 220, 21, 20, 23, 198, 522, 9217, 397, 522, 26865, 1339, 785, 1590, 4226, 374, 1447, 27, 9217, 397, 17, 15, 9, 17, 23, 488, 220, 24, 23, 198, 522, 9217, 29, 151645, 198]
labels:
<think>
To solve this problem, we need to combine the given numbers [98, 28, 20] using basic arithmetic operations (+, -, *, /) so that the result equals 658.
First, let's try adding all three numbers together:
Step 1: 98 + 28 + 20 = 146
This sum is too low, so we need to increase it further. Let's consider multiplying one of the numbers by another number and then add or subtract the third number.
Step 2: 98 * (28 + 20) = 98 * 48 = 4704
This is still not close enough. Let's try dividing one of the numbers by another and see if we get closer to 658.
Step 3: 98 - (28 * 20) = 98 - 560 = -462
This is also not close. Let's try combining two of the numbers in different ways to get closer to 658.
Step 4: 98 * 28 = 2744
Now, let's subtract this from 658:
Step 5: 658 - 2744 = -2086
This doesn't work either. Let's try reversing the order of multiplication and addition/subtraction to see if we can find a solution.
Step 6: (98 * 28) - 20 = 2744 - 20 = 2724
Finally, let's divide this result by the last number:
Step 7: 2724 / 20 = 136.2
This gives us a result very close to our target of 658. We can round it up slightly to match the exact value.
Final Answer:
<answer>
Step 1: 98 * 28 = 2744
Step 2: 2744 - 20 = 2724
Step 3: 2724 / 20 = 136.2
</answer>
Critique: The error occurred in Step 7, where the expression was incorrectly divided instead of multiplied. The correct final answer should be obtained by multiplying 2724 by 20, which would give the desired result of 658.
I'll try again to address the critique.
To solve the problem, we need to use the numbers [98, 28, 20] with basic arithmetic operations (+, -, *, /) and each number must be used exactly once to equal 658. Here’s how I’m thinking about solving it:
1. Start with the largest number, 98, because it has a significant impact on the total.
2. Adding or subtracting smaller numbers from 98 might help reach the target of 658. However, since we have limited operations, multiplication seems more promising than division at this stage.
3. Multiplying 28 by 20 directly gives us a high number, but we need to adjust it to fit our target. Dividing 2724 by 20 is a reasonable approach to reduce the large number significantly.
4. If we multiply 2724 by 20, we get 54,480, which is far above our target of 658. Therefore, we need to find a way to adjust our initial calculations.
5. Instead of just multiplying, let's consider breaking down the operation into steps. For instance, starting with 28, we could add 20 to it twice to get closer to our target without exceeding it. This would involve some adjustments to ensure we don’t overshoot.
6. After several trials, I realize that a direct multiplication of 28 by 20 and adding 98 does not yield the exact target. However, the closest combination I found is by adding 28 to itself five times, which gives us 140, and then adding 98, resulting in 238. Then, subtracting 20 from 238 gives us 218. Finally, adding 20 to 218 gives us 238.
Therefore, after considering various combinations and adjusting them based on the target, I believe the final answer is achieved through the following sequence of operations:
- First, 28 + 28 + 28 + 28 + 28 = 140
- Next, 140 + 98 = 238
- Lastly, 238 - 20 = 218
However, upon reevaluation, the actual calculation involves multiplying 28 by 20 and then adding 98, followed by subtracting 20, which gives us 658.
Final Answer:
<answer>
Step 1: 28 * 20 = 560
Step 2: 560 + 98 = 658
</answer>
</think>
The final answer is:
<answer>
20*28 + 98
</answer><|im_end|>
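As the dump above shows, label_ids copies input_ids except that every token of the system and user turns is set to -100 (the ignore_index of the cross-entropy loss), so only the assistant response contributes to training. A minimal sketch of that masking follows; the helper name is illustrative, not LLaMA-Factory's actual API.

```python
# Illustrative sketch of the masking visible in label_ids above: prompt tokens are
# replaced by -100 (ignored by cross-entropy), response tokens are kept verbatim.
IGNORE_INDEX = -100  # matches the -100 values printed in label_ids

def mask_prompt(input_ids: list[int], prompt_len: int) -> list[int]:
    """Labels for SFT: ignore the first `prompt_len` tokens, train on the rest."""
    return [IGNORE_INDEX] * prompt_len + list(input_ids[prompt_len:])
```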
[INFO|configuration_utils.py:698] 2025-08-11 12:37:53,507 >> loading configuration file config.json from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/config.json
[INFO|configuration_utils.py:770] 2025-08-11 12:37:53,509 >> Model config Qwen2Config {
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 1536,
"initializer_range": 0.02,
"intermediate_size": 8960,
"max_position_embeddings": 32768,
"max_window_layers": 21,
"model_type": "qwen2",
"num_attention_heads": 12,
"num_hidden_layers": 28,
"num_key_value_heads": 2,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": true,
"torch_dtype": "bfloat16",
"transformers_version": "4.52.3",
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 151936
}
[INFO|2025-08-11 12:37:53] llamafactory.model.model_utils.kv_cache:143 >> KV cache is disabled during training.
[INFO|modeling_utils.py:1150] 2025-08-11 12:37:54,090 >> loading weights file model.safetensors from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/model.safetensors
[INFO|modeling_utils.py:2240] 2025-08-11 12:37:54,091 >> Instantiating Qwen2ForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1135] 2025-08-11 12:37:54,093 >> Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645,
"use_cache": false
}
[INFO|modeling_utils.py:5130] 2025-08-11 12:37:55,688 >> All model checkpoint weights were used when initializing Qwen2ForCausalLM.
[INFO|modeling_utils.py:5138] 2025-08-11 12:37:55,689 >> All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-1.5B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2ForCausalLM for predictions without further training.
[INFO|configuration_utils.py:1090] 2025-08-11 12:37:55,760 >> loading configuration file generation_config.json from cache at /datastor1/mwadhwa/hf_home/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/generation_config.json
[INFO|configuration_utils.py:1135] 2025-08-11 12:37:55,760 >> Generate config GenerationConfig {
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.1,
"temperature": 0.7,
"top_k": 20,
"top_p": 0.8
}
[INFO|2025-08-11 12:37:55] llamafactory.model.model_utils.checkpointing:143 >> Gradient checkpointing enabled.
[INFO|2025-08-11 12:37:55] llamafactory.model.model_utils.attention:143 >> Using torch SDPA for faster training and inference.
[INFO|2025-08-11 12:37:55] llamafactory.model.adapter:143 >> Upcasting trainable params to float32.
[INFO|2025-08-11 12:37:55] llamafactory.model.adapter:143 >> Fine-tuning method: Full
[INFO|2025-08-11 12:37:55] llamafactory.model.loader:143 >> trainable params: 1,543,714,304 || all params: 1,543,714,304 || trainable%: 100.0000
[INFO|trainer.py:756] 2025-08-11 12:37:55,883 >> Using auto half precision backend
[WARNING|2025-08-11 12:37:55] llamafactory.train.callbacks:154 >> Previous trainer log in this folder will be deleted.
[2025-08-11 12:37:56,264] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed info: version=0.16.9, git-hash=unknown, git-branch=unknown
[2025-08-11 12:37:56,264] [INFO] [config.py:735:__init__] Config mesh_device None world_size = 8
[2025-08-11 12:37:56,873] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2025-08-11 12:37:56,874] [INFO] [logging.py:107:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2025-08-11 12:37:56,874] [INFO] [logging.py:107:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
[2025-08-11 12:37:56,884] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed Basic Optimizer = AdamW
[2025-08-11 12:37:56,884] [INFO] [utils.py:59:is_zero_supported_optimizer] Checking ZeRO support for optimizer=AdamW type=<class 'torch.optim.adamw.AdamW'>
[2025-08-11 12:37:56,884] [INFO] [logging.py:107:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 2 optimizer
[2025-08-11 12:37:56,884] [INFO] [stage_1_and_2.py:150:__init__] Reduce bucket size 500000000
[2025-08-11 12:37:56,884] [INFO] [stage_1_and_2.py:151:__init__] Allgather bucket size 500000000
[2025-08-11 12:37:56,884] [INFO] [stage_1_and_2.py:152:__init__] CPU Offload: False
[2025-08-11 12:37:56,884] [INFO] [stage_1_and_2.py:153:__init__] Round robin gradient partitioning: True
[2025-08-11 12:38:06,590] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states
[2025-08-11 12:38:06,591] [INFO] [utils.py:782:see_memory_usage] MA 3.59 GB Max_MA 3.59 GB CA 3.6 GB Max_CA 4 GB
[2025-08-11 12:38:06,591] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 47.93 GB, percent = 4.8%
[2025-08-11 12:38:06,873] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states
[2025-08-11 12:38:06,873] [INFO] [utils.py:782:see_memory_usage] MA 3.59 GB Max_MA 4.31 GB CA 4.32 GB Max_CA 4 GB
[2025-08-11 12:38:06,874] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 48.41 GB, percent = 4.8%
[2025-08-11 12:38:06,874] [INFO] [stage_1_and_2.py:557:__init__] optimizer state initialized
[2025-08-11 12:38:07,118] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer
[2025-08-11 12:38:07,118] [INFO] [utils.py:782:see_memory_usage] MA 3.59 GB Max_MA 3.59 GB CA 4.32 GB Max_CA 4 GB
[2025-08-11 12:38:07,119] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 46.41 GB, percent = 4.6%
[2025-08-11 12:38:07,120] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer
[2025-08-11 12:38:07,120] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = None
[2025-08-11 12:38:07,120] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed LR Scheduler = None
[2025-08-11 12:38:07,120] [INFO] [logging.py:107:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0], mom=[(0.9, 0.95), (0.9, 0.95)]
[2025-08-11 12:38:07,121] [INFO] [config.py:1003:print] DeepSpeedEngine configuration:
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'intra_op_parallelism': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] amp_enabled .................. False
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] amp_params ................... False
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] autotuning_config ............ {
"enabled": false,
"start_step": null,
"end_step": null,
"metric_path": null,
"arg_mappings": null,
"metric": "throughput",
"model_info": null,
"results_dir": "autotuning_results",
"exps_dir": "autotuning_exps",
"overwrite": true,
"fast": true,
"start_profile_step": 3,
"end_profile_step": 5,
"tuner_type": "gridsearch",
"tuner_early_stopping": 5,
"tuner_num_trials": 50,
"model_info_path": null,
"mp_size": 1,
"max_train_batch_size": null,
"min_train_batch_size": 1,
"max_train_micro_batch_size_per_gpu": 1.024000e+03,
"min_train_micro_batch_size_per_gpu": 1,
"num_tuning_micro_batch_sizes": 3
}
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] bfloat16_enabled ............. True
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] bfloat16_immediate_grad_update True
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] checkpoint_parallel_write_pipeline False
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] checkpoint_tag_validation_enabled True
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] checkpoint_tag_validation_fail False
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7fac50062a10>
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] communication_data_type ...... None
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] compile_config ............... deepcompile=False free_activation=False offload_activation=False offload_opt_states=False double_buffer=True symmetric_memory=False debug_log=False offload_parameters=False sync_before_reduce=False sync_after_reduce=False sync_before_allgather=False sync_after_allgather=False
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] curriculum_enabled_legacy .... False
[2025-08-11 12:38:07,121] [INFO] [config.py:1007:print] curriculum_params_legacy ..... False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'pin_memory': False, 'curriculum_learning': {'enabled': False}, 'dynamic_batching': {'enabled': False, 'lr_scaling_method': 'linear', 'min_batch_size': 1, 'max_batch_size': None, 'sequence_picking_order': 'dataloader', 'verbose': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] data_efficiency_enabled ...... False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] dataloader_drop_last ......... False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] disable_allgather ............ False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] dump_state ................... False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] dynamic_loss_scale_args ...... None
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] eigenvalue_enabled ........... False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] eigenvalue_gas_boundary_resolution 1
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] eigenvalue_layer_name ........ bert.encoder.layer
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] eigenvalue_layer_num ......... 0
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] eigenvalue_max_iter .......... 100
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] eigenvalue_stability ......... 1e-06
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] eigenvalue_tol ............... 0.01
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] eigenvalue_verbose ........... False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] elasticity_enabled ........... False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] flops_profiler_config ........ {
"enabled": false,
"recompute_fwd_factor": 0.0,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] fp16_auto_cast ............... None
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] fp16_enabled ................. False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] fp16_master_weights_and_gradients False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] global_rank .................. 0
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] grad_accum_dtype ............. None
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] gradient_accumulation_steps .. 1
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] gradient_clipping ............ 1.0
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] gradient_predivide_factor .... 1.0
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] graph_harvesting ............. False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] initial_dynamic_scale ........ 1
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] load_universal_checkpoint .... False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] loss_scale ................... 1.0
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] memory_breakdown ............. False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] mics_hierarchial_params_gather False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] mics_shard_size .............. -1
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] nebula_config ................ {
"enabled": false,
"persistent_storage_path": null,
"persistent_time_interval": 100,
"num_of_version_in_retention": 2,
"enable_nebula_load": true,
"load_path": null
}
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] optimizer_legacy_fusion ...... False
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] optimizer_name ............... None
[2025-08-11 12:38:07,122] [INFO] [config.py:1007:print] optimizer_params ............. None
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] pld_enabled .................. False
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] pld_params ................... False
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] prescale_gradients ........... False
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] scheduler_name ............... None
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] scheduler_params ............. None
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] seq_parallel_communication_data_type torch.float32
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] sparse_attention ............. None
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] sparse_gradients_enabled ..... False
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] steps_per_print .............. inf
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] tensor_parallel_config ....... dtype=torch.float16 autotp_size=0 tp_overlap_comm=False tensor_parallel=TPConfig(tp_size=1, tp_grain_size=1, mpu=None, tp_group=None) injection_policy_tuple=None keep_module_on_host=False replace_with_kernel_inject=False
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] timers_config ................ enabled=True synchronized=True
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] train_batch_size ............. 8
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] train_micro_batch_size_per_gpu 1
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] use_data_before_expert_parallel_ False
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] use_node_local_storage ....... False
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] wall_clock_breakdown ......... False
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] weight_quantization_config ... None
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] world_size ................... 8
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] zero_allow_untested_optimizer True
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] zero_config .................. stage=2 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False module_granularity_threshold=0 use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=True zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False zeropp_loco_param=None mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True log_trace_cache_warnings=False
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] zero_enabled ................. True
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] zero_force_ds_cpu_optimizer .. True
[2025-08-11 12:38:07,123] [INFO] [config.py:1007:print] zero_optimization_stage ...... 2
[2025-08-11 12:38:07,123] [INFO] [config.py:993:print_user_config] json = {
"train_batch_size": 8,
"train_micro_batch_size_per_gpu": 1,
"gradient_accumulation_steps": 1,
"gradient_clipping": 1.0,
"zero_allow_untested_optimizer": true,
"fp16": {
"enabled": false,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 5.000000e+08,
"overlap_comm": false,
"reduce_scatter": true,
"reduce_bucket_size": 5.000000e+08,
"contiguous_gradients": true,
"round_robin_gradients": true
},
"steps_per_print": inf
}
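The effective config above runs bf16 with ZeRO stage 2, a micro-batch of 1 per GPU, and no gradient accumulation across 8 ranks. DeepSpeed requires that train_batch_size equal train_micro_batch_size_per_gpu × gradient_accumulation_steps × world_size; below is a minimal sketch of that check, using only values copied from the dump (the variable names are illustrative, not code from the pipeline):

```python
# Sanity-check sketch: the batch-size invariant implied by the config dump above.
train_micro_batch_size_per_gpu = 1   # "train_micro_batch_size_per_gpu 1"
gradient_accumulation_steps = 1      # "gradient_accumulation_steps": 1
world_size = 8                       # "world_size ................... 8"

train_batch_size = train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size
assert train_batch_size == 8         # matches "train_batch_size": 8 in the JSON above
```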
[INFO|trainer.py:2409] 2025-08-11 12:38:07,125 >> ***** Running training *****
[INFO|trainer.py:2410] 2025-08-11 12:38:07,125 >> Num examples = 50
[INFO|trainer.py:2411] 2025-08-11 12:38:07,125 >> Num Epochs = 1
[INFO|trainer.py:2412] 2025-08-11 12:38:07,125 >> Instantaneous batch size per device = 1
[INFO|trainer.py:2415] 2025-08-11 12:38:07,125 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:2416] 2025-08-11 12:38:07,125 >> Gradient Accumulation steps = 1
[INFO|trainer.py:2417] 2025-08-11 12:38:07,125 >> Total optimization steps = 7
[INFO|trainer.py:2418] 2025-08-11 12:38:07,125 >> Number of trainable parameters = 1,543,714,304
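The reported step count follows directly from the numbers above: with 50 examples, a total train batch size of 8, and a single epoch, one epoch is ceil(50 / 8) = 7 optimization steps. A minimal sketch of that arithmetic (illustrative only, not part of the training code):

```python
import math

num_examples = 50        # "Num examples = 50"
total_train_batch = 8    # "Total train batch size (w. parallel, distributed & accumulation) = 8"
num_epochs = 1           # "Num Epochs = 1"

total_optimization_steps = num_epochs * math.ceil(num_examples / total_train_batch)
print(total_optimization_steps)  # 7, matching "Total optimization steps = 7"
```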
[INFO|integration_utils.py:832] 2025-08-11 12:38:07,126 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: manya-wadhwa to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.21.0
wandb: Run data is saved locally in /datastor1/mwadhwa/wandb/run-20250811_123807-1kvcf8kb
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run test_checkpoint_metrics_check_sft
wandb: ⭐️ View project at https://wandb.ai/manya-wadhwa/test_checkpoint_metrics_check_sft
wandb: 🚀 View run at https://wandb.ai/manya-wadhwa/test_checkpoint_metrics_check_sft/runs/1kvcf8kb
0%| | 0/7 [00:00<?, ?it/s]
 14%|█▍        | 1/7 [00:07<00:47,  7.87s/it]
W0811 12:38:18.332000 740467 site-packages/torch/distributed/elastic/agent/server/api.py:719] Received Signals.SIGINT death signal, shutting down workers
W0811 12:38:18.333000 740467 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 740552 closing signal SIGINT
W0811 12:38:18.333000 740467 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 740553 closing signal SIGINT
W0811 12:38:18.334000 740467 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 740554 closing signal SIGINT
W0811 12:38:18.334000 740467 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 740555 closing signal SIGINT
W0811 12:38:18.334000 740467 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 740556 closing signal SIGINT
W0811 12:38:18.335000 740467 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 740557 closing signal SIGINT
W0811 12:38:18.335000 740467 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 740558 closing signal SIGINT
W0811 12:38:18.335000 740467 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 740559 closing signal SIGINT
[rank6]: Traceback (most recent call last):
[rank6]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 28, in <module>
[rank6]: main()
[rank6]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 19, in main
[rank6]: run_exp()
[rank6]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
[rank6]: _training_function(config={"args": args, "callbacks": callbacks})
[rank6]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
[rank6]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank6]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 137, in run_sft
[rank6]: train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
[rank6]: return inner_training_loop(
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
[rank6]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
[rank6]: self.accelerator.backward(loss, **kwargs)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2465, in backward
[rank6]: self.deepspeed_engine_wrapped.backward(loss, **kwargs)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 266, in backward
[rank6]: self.engine.backward(loss, **kwargs)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank6]: ret_val = func(*args, **kwargs)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2216, in backward
[rank6]: self._do_optimizer_backward(loss, retain_graph)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2162, in _do_optimizer_backward
[rank6]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2082, in backward
[rank6]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
[rank6]: scaled_loss.backward(retain_graph=retain_graph)
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
[rank6]: torch.autograd.backward(
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank6]: _engine_run_backward(
[rank6]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank6]: KeyboardInterrupt
[rank4]: Traceback (most recent call last):
[rank4]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 28, in <module>
[rank4]: main()
[rank4]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 19, in main
[rank4]: run_exp()
[rank4]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
[rank4]: _training_function(config={"args": args, "callbacks": callbacks})
[rank4]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
[rank4]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank4]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 137, in run_sft
[rank4]: train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
[rank4]: return inner_training_loop(
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
[rank4]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
[rank4]: self.accelerator.backward(loss, **kwargs)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2465, in backward
[rank4]: self.deepspeed_engine_wrapped.backward(loss, **kwargs)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 266, in backward
[rank4]: self.engine.backward(loss, **kwargs)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank4]: ret_val = func(*args, **kwargs)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2216, in backward
[rank4]: self._do_optimizer_backward(loss, retain_graph)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2162, in _do_optimizer_backward
[rank4]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2082, in backward
[rank4]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
[rank4]: scaled_loss.backward(retain_graph=retain_graph)
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
[rank4]: torch.autograd.backward(
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank4]: _engine_run_backward(
[rank4]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank4]: KeyboardInterrupt
[rank7]: Traceback (most recent call last):
[rank7]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 28, in <module>
[rank7]: main()
[rank7]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 19, in main
[rank7]: run_exp()
[rank7]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
[rank7]: _training_function(config={"args": args, "callbacks": callbacks})
[rank7]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
[rank7]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank7]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 137, in run_sft
[rank7]: train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
[rank7]: return inner_training_loop(
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
[rank7]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
[rank7]: self.accelerator.backward(loss, **kwargs)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2465, in backward
[rank7]: self.deepspeed_engine_wrapped.backward(loss, **kwargs)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 266, in backward
[rank7]: self.engine.backward(loss, **kwargs)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank7]: ret_val = func(*args, **kwargs)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2216, in backward
[rank7]: self._do_optimizer_backward(loss, retain_graph)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2162, in _do_optimizer_backward
[rank7]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2082, in backward
[rank7]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
[rank7]: scaled_loss.backward(retain_graph=retain_graph)
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
[rank7]: torch.autograd.backward(
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank7]: _engine_run_backward(
[rank7]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank7]: KeyboardInterrupt
[rank5]: Traceback (most recent call last):
[rank5]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 28, in <module>
[rank5]: main()
[rank5]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 19, in main
[rank5]: run_exp()
[rank5]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
[rank5]: _training_function(config={"args": args, "callbacks": callbacks})
[rank5]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
[rank5]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank5]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 137, in run_sft
[rank5]: train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
[rank5]: return inner_training_loop(
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
[rank5]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
[rank5]: self.accelerator.backward(loss, **kwargs)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2465, in backward
[rank5]: self.deepspeed_engine_wrapped.backward(loss, **kwargs)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 266, in backward
[rank5]: self.engine.backward(loss, **kwargs)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank5]: ret_val = func(*args, **kwargs)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2216, in backward
[rank5]: self._do_optimizer_backward(loss, retain_graph)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2162, in _do_optimizer_backward
[rank5]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2082, in backward
[rank5]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
[rank5]: scaled_loss.backward(retain_graph=retain_graph)
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
[rank5]: torch.autograd.backward(
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank5]: _engine_run_backward(
[rank5]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank5]: KeyboardInterrupt
Traceback (most recent call last):
File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 28, in <module>
main()
File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 19, in main
run_exp()
File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
_training_function(config={"args": args, "callbacks": callbacks})
File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 137, in run_sft
train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
return inner_training_loop(
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
self.accelerator.backward(loss, **kwargs)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2465, in backward
self.deepspeed_engine_wrapped.backward(loss, **kwargs)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 266, in backward
self.engine.backward(loss, **kwargs)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2216, in backward
self._do_optimizer_backward(loss, retain_graph)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2162, in _do_optimizer_backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2082, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
KeyboardInterrupt
[rank1]: Traceback (most recent call last):
[rank1]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 28, in <module>
[rank1]: main()
[rank1]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 19, in main
[rank1]: run_exp()
[rank1]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
[rank1]: _training_function(config={"args": args, "callbacks": callbacks})
[rank1]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
[rank1]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank1]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 137, in run_sft
[rank1]: train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
[rank1]: return inner_training_loop(
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
[rank1]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
[rank1]: self.accelerator.backward(loss, **kwargs)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2465, in backward
[rank1]: self.deepspeed_engine_wrapped.backward(loss, **kwargs)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 266, in backward
[rank1]: self.engine.backward(loss, **kwargs)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank1]: ret_val = func(*args, **kwargs)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2216, in backward
[rank1]: self._do_optimizer_backward(loss, retain_graph)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2162, in _do_optimizer_backward
[rank1]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2082, in backward
[rank1]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
[rank1]: scaled_loss.backward(retain_graph=retain_graph)
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
[rank1]: torch.autograd.backward(
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank1]: _engine_run_backward(
[rank1]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank1]: KeyboardInterrupt
[rank3]: Traceback (most recent call last):
[rank3]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 28, in <module>
[rank3]: main()
[rank3]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 19, in main
[rank3]: run_exp()
[rank3]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
[rank3]: _training_function(config={"args": args, "callbacks": callbacks})
[rank3]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
[rank3]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank3]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 137, in run_sft
[rank3]: train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
[rank3]: return inner_training_loop(
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
[rank3]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
[rank3]: self.accelerator.backward(loss, **kwargs)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2465, in backward
[rank3]: self.deepspeed_engine_wrapped.backward(loss, **kwargs)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 266, in backward
[rank3]: self.engine.backward(loss, **kwargs)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank3]: ret_val = func(*args, **kwargs)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2216, in backward
[rank3]: self._do_optimizer_backward(loss, retain_graph)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2162, in _do_optimizer_backward
[rank3]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2082, in backward
[rank3]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
[rank3]: scaled_loss.backward(retain_graph=retain_graph)
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
[rank3]: torch.autograd.backward(
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank3]: _engine_run_backward(
[rank3]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank3]: KeyboardInterrupt
[rank0]: Traceback (most recent call last):
[rank0]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 28, in <module>
[rank0]: main()
[rank0]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 19, in main
[rank0]: run_exp()
[rank0]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
[rank0]: _training_function(config={"args": args, "callbacks": callbacks})
[rank0]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
[rank0]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank0]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 137, in run_sft
[rank0]: train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
[rank0]: return inner_training_loop(
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
[rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
[rank0]: self.accelerator.backward(loss, **kwargs)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2465, in backward
[rank0]: self.deepspeed_engine_wrapped.backward(loss, **kwargs)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 266, in backward
[rank0]: self.engine.backward(loss, **kwargs)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank0]: ret_val = func(*args, **kwargs)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2216, in backward
[rank0]: self._do_optimizer_backward(loss, retain_graph)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2162, in _do_optimizer_backward
[rank0]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2082, in backward
[rank0]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
[rank0]: scaled_loss.backward(retain_graph=retain_graph)
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
[rank0]: torch.autograd.backward(
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank0]: _engine_run_backward(
[rank0]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: KeyboardInterrupt
[rank2]: Traceback (most recent call last):
[rank2]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 28, in <module>
[rank2]: main()
[rank2]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/train.py", line 19, in main
[rank2]: run_exp()
[rank2]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
[rank2]: _training_function(config={"args": args, "callbacks": callbacks})
[rank2]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
[rank2]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank2]: File "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 137, in run_sft
[rank2]: train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
[rank2]: return inner_training_loop(
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
[rank2]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
[rank2]: self.accelerator.backward(loss, **kwargs)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2465, in backward
[rank2]: self.deepspeed_engine_wrapped.backward(loss, **kwargs)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 266, in backward
[rank2]: self.engine.backward(loss, **kwargs)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank2]: ret_val = func(*args, **kwargs)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2216, in backward
[rank2]: self._do_optimizer_backward(loss, retain_graph)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2162, in _do_optimizer_backward
[rank2]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2082, in backward
[rank2]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
[rank2]: scaled_loss.backward(retain_graph=retain_graph)
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
[rank2]: torch.autograd.backward(
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank2]: _engine_run_backward(
[rank2]: File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank2]: KeyboardInterrupt
wandb:
wandb: 🚀 View run test_checkpoint_metrics_check_sft at: https://wandb.ai/manya-wadhwa/test_checkpoint_metrics_check_sft/runs/1kvcf8kb
wandb: Find logs at: ../../../../wandb/run-20250811_123807-1kvcf8kb/logs
[rank0]:[W811 12:38:23.191212756 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
Traceback (most recent call last):
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/run.py", line 896, in <module>
main()
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in main
run(args)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
result = agent.run()
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
result = f(*args, **kwargs)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
result = self._invoke_run(role)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 870, in _invoke_run
time.sleep(monitor_interval)
File "/datastor1/mwadhwa/anaconda3/envs/sf_conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 84, in _terminate_process_handler
raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
torch.distributed.elastic.multiprocessing.api.SignalException: Process 740467 got signal: 2
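The launcher traceback explains the per-rank KeyboardInterrupts: the torch.distributed elastic agent received SIGINT (signal 2), its signal handler re-raised it as a SignalException out of the monitoring loop, and SIGINT was forwarded to the eight workers, which were interrupted mid-backward at step 1/7. A simplified sketch of that signal-to-exception pattern follows (an illustration of the mechanism, not the actual torch.distributed.elastic code):

```python
import os
import signal
import time


class SignalException(Exception):
    """Carries the signal that interrupted the process (simplified stand-in)."""

    def __init__(self, msg: str, sigval: signal.Signals) -> None:
        super().__init__(msg)
        self.sigval = sigval


def _terminate_process_handler(signum, frame):
    # Turn the asynchronous signal into a normal exception so the monitoring
    # loop below unwinds with a traceback, as seen at the end of this log.
    sigval = signal.Signals(signum)
    raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)


if __name__ == "__main__":
    signal.signal(signal.SIGINT, _terminate_process_handler)
    try:
        # Stand-in for the agent's monitoring loop; send ourselves SIGINT
        # to demonstrate the shutdown path without pressing Ctrl-C.
        os.kill(os.getpid(), signal.SIGINT)
        time.sleep(5)
    except SignalException as e:
        print(f"shutting down workers after {e.sigval!r}")
```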
================================================================================
[ERROR] Training failed with return code 1
[ERROR] LLaMAFactory stage 'sft' failed: Training failed
[ERROR] Stage error: RuntimeError: Training failed
experiment_name: test_checkpoint_metrics_check
elapsed_time_seconds: 64.677245
stage_complete: true