/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
  warnings.warn(
[2024-04-24 10:38:01,087] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
[2024-04-24 10:38:02,510] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
[2024-04-24 10:38:02,510] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.134.158,100.83.134.148
[2024-04-24 10:38:02,510] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.134.158,100.83.134.148 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMTM0LjE1OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XSwgIjEwMC44My4xMzQuMTQ4IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddfQ== --node_rank=%n --master_addr=100.83.134.158 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 4 --pipeline-model-parallel-size 2 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 20 --data-path /data/arxiv//tokenized_text_document --vocab-file /data/arxiv//gpt2-vocab.json --merge-file /data/arxiv//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_600M/24-04-2024-10:37:59/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_600M/24-04-2024-10:37:59/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_600M/24-04-2024-10:37:59/hf_ckpt --save-interval 20 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
100.83.134.158: Warning: Permanently added '[100.83.134.158]:3122' (ED25519) to the list of known hosts.
100.83.134.148: Warning: Permanently added '[100.83.134.148]:3122' (ED25519) to the list of known hosts.
100.83.134.158: * Starting OpenBSD Secure Shell server sshd
100.83.134.158: ...done.
100.83.134.148: * Starting OpenBSD Secure Shell server sshd
100.83.134.148: ...done.
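The `--world_info` value handed to `deepspeed.launcher.launch` in the command above is just the node-to-device map, base64-encoded. Decoding it with nothing but the standard library reproduces the WORLD INFO DICT that `launch.py` prints further down:

```python
import base64
import json

# Payload copied verbatim from the --world_info= argument in the pdsh command above.
world_info = ("eyIxMDAuODMuMTM0LjE1OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XSwgIjEwMC44"
              "My4xMzQuMTQ4IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddfQ==")
print(json.loads(base64.b64decode(world_info)))
# {'100.83.134.158': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [0, 1, 2, 3, 4, 5, 6, 7]}
```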
100.83.134.158: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
100.83.134.158:   warnings.warn(
100.83.134.158: [2024-04-24 10:38:04,207] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
100.83.134.148:   warnings.warn(
100.83.134.148: [2024-04-24 10:38:04,300] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
100.83.134.158: [2024-04-24 10:38:05,628] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.134.158': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [0, 1, 2, 3, 4, 5, 6, 7]}
100.83.134.158: [2024-04-24 10:38:05,628] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
100.83.134.158: [2024-04-24 10:38:05,629] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.134.158': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [8, 9, 10, 11, 12, 13, 14, 15]})
100.83.134.158: [2024-04-24 10:38:05,629] [INFO] [launch.py:164:main] dist_world_size=16
100.83.134.158: [2024-04-24 10:38:05,629] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
100.83.134.158: * Starting OpenBSD Secure Shell server sshd
100.83.134.158: ...done.
100.83.134.148: [2024-04-24 10:38:05,843] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.134.158': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [0, 1, 2, 3, 4, 5, 6, 7]}
100.83.134.148: [2024-04-24 10:38:05,843] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
100.83.134.148: [2024-04-24 10:38:05,843] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.134.158': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [8, 9, 10, 11, 12, 13, 14, 15]})
100.83.134.148: [2024-04-24 10:38:05,843] [INFO] [launch.py:164:main] dist_world_size=16
100.83.134.148: [2024-04-24 10:38:05,843] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
100.83.134.148: * Starting OpenBSD Secure Shell server sshd
100.83.134.148: ...done.
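The `global_rank_mapping` line shows the launcher's assignment rule: global rank = node_rank × num_local_procs + local_rank, which is why the second node's eight devices become ranks 8 through 15. A minimal reconstruction from the values printed above:

```python
# nnodes=2 and num_local_procs=8 are taken from the launch.py lines above.
nodes = ["100.83.134.158", "100.83.134.148"]
num_local_procs = 8
mapping = {node: [node_rank * num_local_procs + local_rank
                  for local_rank in range(num_local_procs)]
           for node_rank, node in enumerate(nodes)}
print(mapping)
# {'100.83.134.158': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [8, 9, 10, 11, 12, 13, 14, 15]}
```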
100.83.134.158: [2024-04-24 10:38:07,373] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
100.83.134.148: [2024-04-24 10:38:07,667] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
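Every rank emits the same `torch.hpu.setDeterministic` deprecation warning at import time. The warning text itself names the fix; a minimal sketch of the suggested replacement, assuming a script that currently toggles determinism through the deprecated HPU helper:

```python
import torch

# Replacement named by the UserWarning in this log: use the framework-level
# switch instead of the deprecated torch.hpu.setDeterministic helper.
torch.use_deterministic_algorithms(True)
```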
100.83.134.158: --------------------------------------------------
100.83.134.158: DeepSpeed C++/CUDA extension op report
100.83.134.158: --------------------------------------------------
100.83.134.158: NOTE: Ops not installed will be just-in-time (JIT) compiled at
100.83.134.158: runtime if needed. Op compatibility means that your system
100.83.134.158: meet the required dependencies to JIT install the op.
100.83.134.158: --------------------------------------------------
100.83.134.158: JIT compiled ops requires ninja
100.83.134.158: ninja .................. [OKAY]
100.83.134.158: --------------------------------------------------
100.83.134.158: op name ................ installed .. compatible
100.83.134.158: --------------------------------------------------
100.83.134.158: cpu_adam ............... [NO] ....... [OKAY]
100.83.134.158: fused_adam ............. [NO] ....... [OKAY]
100.83.134.158: deepspeed_not_implemented [NO] ....... [OKAY]
100.83.134.158: transformer_inference .. [NO] ....... [OKAY]
100.83.134.158: --------------------------------------------------
100.83.134.158: DeepSpeed general environment info:
100.83.134.158: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
100.83.134.158: torch version .................... 2.1.1a0+gitb51c9f6
100.83.134.158: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
100.83.134.158: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
100.83.134.158: deepspeed wheel compiled w. ...... torch 2.1
100.83.134.158: shared memory (/dev/shm) size .... 503.72 GB
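This op-compatibility table is DeepSpeed's standard environment report, and each of the 16 ranks prints an identical copy at startup. The same output can be regenerated on any worker with the `ds_report` command that ships with DeepSpeed, which is a quick way to sanity-check a node before launching.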
100.83.134.158: fatal: not a git repository (or any parent up to mount point /)
100.83.134.158: Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
100.83.134.158: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
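The `fatal: not a git repository` pair explains the `git_hash=unknown git_branch=unknown` banner: `/Model-References` is not a git checkout, so the startup banner cannot resolve a commit. A sketch of how such a banner typically degrades to `unknown` (an illustrative helper, not Megatron's actual code):

```python
import subprocess

def git_info(cwd="."):
    """Return (hash, branch), falling back to 'unknown' outside a git checkout."""
    def run(*args):
        return subprocess.check_output(
            ["git", *args], cwd=cwd, stderr=subprocess.DEVNULL
        ).decode().strip()
    try:
        return run("rev-parse", "--short", "HEAD"), run("rev-parse", "--abbrev-ref", "HEAD")
    except subprocess.CalledProcessError:
        return "unknown", "unknown"

print("**** Git info for Megatron: git_hash=%s git_branch=%s ****" % git_info())
```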
100.83.134.158: using world size: 16, data-parallel-size: 2, tensor-model-parallel size: 4, pipeline-model-parallel size: 2
100.83.134.158: accumulate and all-reduce gradients in fp32 for bfloat16 data type.
100.83.134.158: using torch.bfloat16 for parameters ...
100.83.134.158: ------------------------ arguments ------------------------
100.83.134.158: accumulate_allreduce_grads_in_fp32 .............. True
100.83.134.158: activation_func_type ............................ swiglu
100.83.134.158: adam_beta1 ...................................... 0.9
100.83.134.158: adam_beta2 ...................................... 0.95
100.83.134.158: adam_eps ........................................ 1e-06
100.83.134.158: adlr_autoresume ................................. False
100.83.134.158: adlr_autoresume_interval ........................ 1000
100.83.134.158: aml_data_download_path .......................... None
100.83.134.158: apply_layernorm_weight_plus_one ................. False
100.83.134.158: apply_query_key_layer_scaling ................... True
100.83.134.158: apply_residual_connection_post_layernorm ........ False
100.83.134.158: attention_dropout ............................... 0.1
100.83.134.158: attention_softmax_in_fp32 ....................... False
100.83.134.158: bert_binary_head ................................ True
100.83.134.158: bert_load ....................................... None
100.83.134.158: bf16 ............................................ True
100.83.134.158: bias_dropout_fusion ............................. False
100.83.134.158: bias_gelu_fusion ................................ False
100.83.134.158: biencoder_projection_dim ........................ 0
100.83.134.158: biencoder_shared_query_context_model ............ False
100.83.134.158: block_data_path ................................. None
100.83.134.158: cache_fp8_weight ................................ False
100.83.134.158: cache_fp8_weight_fwd ............................ True
100.83.134.158: checkpoint_activations .......................... False
100.83.134.158: checkpoint_activations_granularity .............. full
100.83.134.158: checkpoint_in_cpu ............................... False
100.83.134.158: checkpoint_num_layers ........................... 1
100.83.134.158: clearml_config_path ............................. None
100.83.134.158: clearml_continue_exp ............................ False
100.83.134.158: clearml_exp_name ................................ None
100.83.134.158: clip_grad ....................................... 1.0
100.83.134.158: compression_training ............................ False
100.83.134.158: consumed_train_samples .......................... 0
100.83.134.158: consumed_train_tokens ........................... 0
100.83.134.158: consumed_valid_samples .......................... 0
100.83.134.158: contigious_checkpointing ........................ False
100.83.134.158: cpu_optimizer ................................... False
100.83.134.158: cpu_torch_adam .................................. False
100.83.134.158: create_moe_param_group .......................... False
100.83.134.158: curriculum_learning ............................. False
100.83.134.158: data_idx_path ................................... None
100.83.134.158: data_impl ....................................... infer
100.83.134.158: data_parallel_size .............................. 2
100.83.134.158: data_path ....................................... ['/data/arxiv//tokenized_text_document']
100.83.134.158: data_sharding ................................... True
100.83.134.158: dataloader_type ................................. single
100.83.134.158: DDP_impl ........................................ local
100.83.134.158: decoder_seq_length .............................. None
100.83.134.158: deepscale ....................................... False
100.83.134.158: deepscale_config ................................ None
100.83.134.158: deepspeed ....................................... True
100.83.134.158: deepspeed_activation_checkpointing .............. False
100.83.134.158: deepspeed_config ................................ /data/output/llama13b_600M/24-04-2024-10:37:59/ds_config.json
100.83.134.158: deepspeed_mpi ................................... False
100.83.134.158: distribute_checkpointed_activations ............. False
100.83.134.158: distributed_backend ............................. hccl
100.83.134.158: do_layernorm_bias_weight_decay .................. False
100.83.134.158: do_pretrain_validation .......................... False
100.83.134.158: ds_inference .................................... False
100.83.134.158: ds_pipeline_enabled ............................. True
100.83.134.158: embed_layernorm ................................. False
100.83.134.158: embedding_path .................................. None
100.83.134.158: enable_expert_tensor_parallelism ................ False
100.83.134.158: encoder_seq_length .............................. 2048
100.83.134.158: eod_mask_loss ................................... False
100.83.134.158: eval_interval ................................... 20
100.83.134.158: eval_iters ...................................... 10
100.83.134.158: eval_loss_exit_value ............................ None
100.83.134.158: eval_micro_batch_size ........................... 1
100.83.134.158: evidence_data_path .............................. None
100.83.134.158: exit_duration_in_mins ........................... None
100.83.134.158: exit_interval ................................... 0
100.83.134.158: expert_interval ................................. 2
100.83.134.158: ffn_hidden_coeff ................................ 2.6666666666666665
100.83.134.158: ffn_hidden_size ................................. 4096
100.83.134.158: finetune ........................................ False
100.83.134.158: fix_position_emb_redundant_alloc ................ False
100.83.134.158: flatten_linear_operands ......................... False
100.83.134.158: fp16 ............................................ False
100.83.134.158: fp16_lm_cross_entropy ........................... False
100.83.134.158: fp32_residual_connection ........................ False
100.83.134.158: global_batch_size ............................... 256
100.83.134.158: hf_save ......................................... /data/output/llama13b_600M/24-04-2024-10:37:59/hf_ckpt
100.83.134.158: hidden_dropout .................................. 0.1
100.83.134.158: hidden_size ..................................... 2048
100.83.134.158: hidden_size_teacher ............................. None
100.83.134.158: hpu_deterministic ............................... True
100.83.134.158: hpu_fp8_format .................................. e5m2
100.83.134.158: hpu_fp8_measure_interval ........................ 10
100.83.134.158: hysteresis ...................................... 2
100.83.134.158: ict_head_size ................................... None
100.83.134.158: ict_load ........................................ None
100.83.134.158: img_dim ......................................... 224
100.83.134.158: indexer_batch_size .............................. 128
100.83.134.158: indexer_log_interval ............................ 1000
100.83.134.158: inference ....................................... False
100.83.134.158: init_method_std ................................. 0.02
100.83.134.158: init_method_xavier_uniform ...................... False
100.83.134.158: initial_loss_scale .............................. 4294967296
100.83.134.158: kd .............................................. False
100.83.134.158: kd_alpha_ce ..................................... 1
100.83.134.158: kd_beta_ce ...................................... 1
100.83.134.158: kd_temp ......................................... 1.0
100.83.134.158: kill_switch_path ................................ None
100.83.134.158: kv_channels ..................................... 64
100.83.134.158: layernorm_epsilon ............................... 1e-06
100.83.134.158: layernorm_type .................................. rmsnorm
100.83.134.158: lazy_mpu_init ................................... None
100.83.134.158: load ............................................ /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2
100.83.134.158: load_teacher .................................... None
100.83.134.158: local_rank ...................................... 0
100.83.134.158: log_batch_size_to_tensorboard ................... True
100.83.134.158: log_bwd_grads ................................... False
100.83.134.158: log_fwd_activations ............................. False
100.83.134.158: log_interval .................................... 10
100.83.134.158: log_learning_rate_to_tensorboard ................ True
100.83.134.158: log_loss_scale_to_tensorboard ................... True
100.83.134.158: log_model_inputs ................................ False
100.83.134.158: log_num_zeros_in_grad ........................... False
100.83.134.158: log_optimizer_states_to_tensorboard ............. False
100.83.134.158: log_params_norm ................................. False
100.83.134.158: log_timers_to_tensorboard ....................... True
100.83.134.158: log_validation_ppl_to_tensorboard ............... True
100.83.134.158: loss_scale ...................................... None
100.83.134.158: loss_scale_window ............................... 1000
100.83.134.158: lr .............................................. 0.0003
100.83.134.158: lr_decay_iters .................................. None
100.83.134.158: lr_decay_samples ................................ None
100.83.134.158: lr_decay_style .................................. cosine
100.83.134.158: lr_decay_tokens ................................. None
100.83.134.158: lr_warmup_fraction .............................. None
100.83.134.158: lr_warmup_iters ................................. 2000
100.83.134.158: lr_warmup_samples ............................... 0
100.83.134.158: lr_warmup_tokens ................................ None
100.83.134.158: make_vocab_size_divisible_by .................... 128
100.83.134.158: mask_prob ....................................... 0.15
100.83.134.158: mask_tensor_adding .............................. False
100.83.134.158: masked_softmax_fusion ........................... False
100.83.134.158: max_position_embeddings ......................... None
100.83.134.158: memory_centric_tiled_linear ..................... False
100.83.134.158: merge_file ...................................... /data/arxiv//gpt2-merges.txt
100.83.134.158: micro_batch_size ................................ 1
100.83.134.158: min_loss_scale .................................. 1.0
100.83.134.158: min_lr .......................................... 0.0
100.83.134.158: mlp_type ........................................ standard
100.83.134.158: mmap_warmup ..................................... False
100.83.134.158: moe_eval_capacity_factor ........................ 1.0
100.83.134.158: moe_expert_parallel_size ........................ 1
100.83.134.158: moe_loss_coeff .................................. 0.1
100.83.134.158: moe_min_capacity ................................ 4
100.83.134.158: moe_token_dropping .............................. True
100.83.134.158: moe_train_capacity_factor ....................... 1.0
100.83.134.158: mos ............................................. False
100.83.134.158: no_bias ......................................... True
100.83.134.158: no_cuda ......................................... False
100.83.134.158: no_load_lr_state ................................ False
100.83.134.158: no_load_optim ................................... None
100.83.134.158: no_load_rng ..................................... None
100.83.134.158: no_pipeline_parallel ............................ False
100.83.134.158: no_save_optim ................................... None
100.83.134.158: no_save_rng ..................................... None
100.83.134.158: no_scaled_init .................................. False
100.83.134.158: num_attention_heads ............................. 32
100.83.134.158: num_attention_heads_teacher ..................... None
100.83.134.158: num_channels .................................... 3
100.83.134.158: num_classes ..................................... 1000
100.83.134.158: num_experts ..................................... [1]
100.83.134.158: num_experts_teacher ............................. [1]
100.83.134.158: num_key_value_heads ............................. 32
100.83.134.158: num_layers ...................................... 24
100.83.134.158: num_layers_per_virtual_pipeline_stage ........... None
100.83.134.158: num_layers_teacher .............................. None
100.83.134.158: num_workers ..................................... 2
100.83.134.158: onnx_safe ....................................... None
100.83.134.158: openai_gelu ..................................... False
100.83.134.158: optimizer ....................................... adamw
100.83.134.158: override_lr_scheduler ........................... False
100.83.134.158: params_dtype .................................... torch.bfloat16
100.83.134.158: partition_activations ........................... False
100.83.134.158: patch_dim ....................................... 16
100.83.134.158: pipeline_model_parallel_size .................... 2
100.83.134.158: position_embedding_type ......................... PositionEmbeddingType.rotary
100.83.134.158: profile ......................................... None
100.83.134.158: profile_backward ................................ False
100.83.134.158: profile_steps ................................... 2,3
100.83.134.158: query_in_block_prob ............................. 0.1
100.83.134.158: rampup_batch_size ............................... None
100.83.134.158: rank ............................................ 0
100.83.134.158: remote_device ................................... none
100.83.134.158: reset_attention_mask ............................ False
100.83.134.158: reset_iteration ................................. False
100.83.134.158: reset_position_ids .............................. False
100.83.134.158: retriever_report_topk_accuracies ................ []
100.83.134.158: retriever_score_scaling ......................... False
100.83.134.158: retriever_seq_length ............................ 256
100.83.134.158: sample_rate ..................................... 1.0
100.83.134.158: save ............................................ /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2
100.83.134.158: save_interval ................................... 20
100.83.134.158: scatter_gather_tensors_in_pipeline .............. True
100.83.134.158: scattered_embeddings ............................ False
100.83.134.158: seed ............................................ 1234
100.83.134.158: seq_length ...................................... 2048
100.83.134.158: sequence_parallel ............................... True
100.83.134.158: sgd_momentum .................................... 0.9
100.83.134.158: short_seq_prob .................................. 0.1
100.83.134.158: skip_train ...................................... False
100.83.134.158: split ........................................... 969, 30, 1
100.83.134.158: split_transformers .............................. False
100.83.134.158: synchronize_each_layer .......................... False
100.83.134.158: tensor_logger_max_iter .......................... 0
100.83.134.158: tensor_logger_path .............................. None
100.83.134.158: tensor_model_parallel_size ...................... 4
100.83.134.158: tensorboard_dir ................................. /data/output/llama13b_600M/24-04-2024-10:37:59/tensorboard
100.83.134.158: tensorboard_log_interval ........................ 1
100.83.134.158: tensorboard_queue_size .......................... 1000
100.83.134.158: test_data_path .................................. None
100.83.134.158: tile_factor ..................................... 1
100.83.134.158: titles_data_path ................................ None
100.83.134.158: tokenizer_eod_id ................................ None
100.83.134.158: tokenizer_model_file ............................ None
100.83.134.158: tokenizer_type .................................. GPT2BPETokenizer
100.83.134.158: topk ............................................ 1
100.83.134.158: train_data_path ................................. None
100.83.134.158: train_iters ..................................... 10000
100.83.134.158: train_samples ................................... None
100.83.134.158: train_tokens .................................... None
100.83.134.158: universal_checkpoint ............................ False
100.83.134.158: use_checkpoint_lr_scheduler ..................... False
100.83.134.158: use_contiguous_buffers_in_ddp ................... True
100.83.134.158: use_cpu_initialization .......................... None
100.83.134.158: use_fused_sdpa .................................. True
100.83.134.158: use_fused_sdpa_with_recompute ................... False
100.83.134.158: use_hpu ......................................... True
100.83.134.158: use_hpu_fp8_transformer_engine .................. False
100.83.134.158: use_hpu_graphs .................................. False
100.83.134.158: use_one_sent_docs ............................... False
100.83.134.158: use_pin_memory .................................. False
100.83.134.158: use_rotary_v2 ................................... False
100.83.134.158: use_seq_len_plus_one_tokens ..................... True
100.83.134.158: use_torch_compile ............................... False
100.83.134.158: use_tutel ....................................... False
100.83.134.158: valid_data_path ................................. None
100.83.134.158: verify_checkpoint ............................... True
100.83.134.158: verify_checkpoint_model_type .................... LLAMA
100.83.134.158: verify_tp_workers ............................... False
100.83.134.158: verify_tp_workers_hash .......................... False
100.83.134.158: virtual_pipeline_model_parallel_size ............ None
100.83.134.158: vocab_extra_ids ................................. 0
100.83.134.158: vocab_file ...................................... /data/arxiv//gpt2-vocab.json
100.83.134.158: weight_decay .................................... 0.1
100.83.134.158: world_size ...................................... 16
100.83.134.158: zero_allgather_bucket_size ...................... 0.0
100.83.134.158: zero_contigious_gradients ....................... False
100.83.134.158: zero_reduce_bucket_size ......................... 0.0
100.83.134.158: zero_reduce_scatter ............................. False
100.83.134.158: zero_stage ...................................... 0
100.83.134.158: -------------------- end of arguments ---------------------
100.83.134.158: setting number of micro-batches to constant 128
100.83.134.158: > building GPT2BPETokenizer tokenizer ...
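The `setting number of micro-batches to constant 128` line follows directly from the argument dump: 16 workers split 4-way tensor-parallel and 2-way pipeline-parallel leave 2 data-parallel replicas, and at micro-batch size 1 each replica must accumulate 128 micro-batches to fill the global batch of 256. Checking the arithmetic with only values printed above:

```python
# All values below are taken from the argument dump in this log.
world_size = 16
tp, pp = 4, 2                                # tensor/pipeline model-parallel sizes
dp = world_size // (tp * pp)                 # data-parallel size
global_batch, micro_batch = 256, 1
micro_batches = global_batch // (micro_batch * dp)
print(dp, micro_batches)                     # 2 128
```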
100.83.134.158: > padded vocab (size: 50257) with 431 dummy tokens (new size: 50688)
100.83.134.158: _initialize_distributed: Initializing with below params:
100.83.134.158: args.local_rank: 0
100.83.134.158: args.world_size: 16
100.83.134.158: args.rank: 0
100.83.134.158: args.distributed_backend: hccl
100.83.134.158: _initialize_distributed: Initializing with below params:
100.83.134.158: args.local_rank: 3
100.83.134.158: args.world_size: 16
100.83.134.158: args.rank: 3
100.83.134.158: args.distributed_backend: hccl
100.83.134.158: _initialize_distributed: Initializing with below params:
100.83.134.158: args.local_rank: 5
100.83.134.158: args.world_size: 16
100.83.134.158: args.rank: 5
100.83.134.158: args.distributed_backend: hccl
100.83.134.158: _initialize_distributed: Initializing with below params:
100.83.134.158: args.local_rank: 6
100.83.134.158: args.world_size: 16
100.83.134.158: args.rank: 6
100.83.134.158: args.distributed_backend: hccl
100.83.134.158: _initialize_distributed: Initializing with below params:
100.83.134.158: args.local_rank: 7
100.83.134.158: args.world_size: 16
100.83.134.158: args.rank: 7
100.83.134.158: args.distributed_backend: hccl
100.83.134.158: _initialize_distributed: Initializing with below params:
100.83.134.158: args.local_rank: 2
100.83.134.158: args.world_size: 16
100.83.134.158: args.rank: 2
100.83.134.158: args.distributed_backend: hccl
100.83.134.158: _initialize_distributed: Initializing with below params:
100.83.134.158: args.local_rank: 1
100.83.134.158: args.world_size: 16
100.83.134.158: args.rank: 1
100.83.134.158: args.distributed_backend: hccl
100.83.134.158: _initialize_distributed: Initializing with below params:
100.83.134.158: args.local_rank: 4
100.83.134.158: args.world_size: 16
100.83.134.158: args.rank: 4
100.83.134.158: args.distributed_backend: hccl
100.83.134.158: hccl device_count: 8
100.83.134.158: [2024-04-24 10:38:09,602] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
100.83.134.158: [2024-04-24 10:38:09,602] [INFO] [comm.py:637:init_distributed] cdb=None
100.83.134.158: > initializing torch distributed ...
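The padded-vocab line above is the usual Megatron divisibility rule: the vocabulary is grown to the next multiple of `make_vocab_size_divisible_by * tensor_model_parallel_size` (128 * 4 = 512 here) so that every tensor-parallel shard of the embedding is the same size. Reproducing the numbers:

```python
# GPT-2 vocab of 50257, make_vocab_size_divisible_by=128 and TP=4 from the log.
vocab_size, multiple = 50257, 128 * 4
padded = ((vocab_size + multiple - 1) // multiple) * multiple
print(padded, padded - vocab_size)  # 50688 431
```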
100.83.134.158: [2024-04-24 10:38:09,603] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend hccl
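The comm.py lines show DeepSpeed falling back to its Torch wrapper for HCCL (a native HCCL comm backend is reported as not yet implemented), after which `torch.distributed` is initialized with the hccl backend requested by `--distributed-backend=hccl`. In user code this is normally a single call; a minimal sketch, assuming `habana_frameworks.torch` has already been imported so the hccl backend is registered:

```python
import deepspeed

# Mirrors the log above: DeepSpeed wraps torch.distributed ("TorchBackend")
# with the hccl backend rather than using a native HCCL implementation.
deepspeed.init_distributed(dist_backend="hccl")
```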
100.83.134.148: --------------------------------------------------
100.83.134.148: DeepSpeed C++/CUDA extension op report
100.83.134.148: --------------------------------------------------
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at
100.83.134.148: runtime if needed. Op compatibility means that your system
100.83.134.148: meet the required dependencies to JIT install the op.
100.83.134.148: --------------------------------------------------
100.83.134.148: JIT compiled ops requires ninja
100.83.134.148: fatal: not a git repository (or any parent up to mount point /)
100.83.134.148: Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
100.83.134.148: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
100.83.134.148: --------------------------------------------------
100.83.134.148: DeepSpeed C++/CUDA extension op report
100.83.134.148: --------------------------------------------------
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at
100.83.134.148: runtime if needed. Op compatibility means that your system
100.83.134.148: meet the required dependencies to JIT install the op.
100.83.134.148: --------------------------------------------------
100.83.134.148: JIT compiled ops requires ninja
100.83.134.148: ninja .................. [OKAY]
100.83.134.148: --------------------------------------------------
100.83.134.148: op name ................ installed .. compatible
100.83.134.148: --------------------------------------------------
100.83.134.148: cpu_adam ............... [NO] ....... [OKAY]
100.83.134.148: fused_adam ............. [NO] ....... [OKAY]
100.83.134.148: deepspeed_not_implemented [NO] ....... [OKAY]
100.83.134.148: transformer_inference .. [NO] ....... [OKAY]
100.83.134.148: --------------------------------------------------
100.83.134.148: DeepSpeed general environment info:
100.83.134.148: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
100.83.134.148: torch version .................... 2.1.1a0+gitb51c9f6
100.83.134.148: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
100.83.134.148: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
100.83.134.148: deepspeed wheel compiled w. ...... torch 2.1
100.83.134.148: shared memory (/dev/shm) size .... 503.72 GB
100.83.134.148: _initialize_distributed: Initializing with below params:
100.83.134.148: args.local_rank: 1
100.83.134.148: args.world_size: 16
100.83.134.148: args.rank: 9
100.83.134.148: args.distributed_backend: hccl
100.83.134.148: _initialize_distributed: Initializing with below params:
100.83.134.148: args.local_rank: 4
100.83.134.148: args.world_size: 16
100.83.134.148: args.rank: 12
100.83.134.148: args.distributed_backend: hccl
100.83.134.148: _initialize_distributed: Initializing with below params:
100.83.134.148: args.local_rank: 2
100.83.134.148: args.world_size: 16
100.83.134.148: args.rank: 10
100.83.134.148: args.distributed_backend: hccl
100.83.134.148: _initialize_distributed: Initializing with below params:
100.83.134.148: args.local_rank: 3
100.83.134.148: args.world_size: 16
100.83.134.148: args.rank: 11
100.83.134.148: args.distributed_backend: hccl
100.83.134.148: _initialize_distributed: Initializing with below params:
100.83.134.148: args.local_rank: 6
100.83.134.148: args.world_size: 16
100.83.134.148: args.rank: 14
100.83.134.148: args.distributed_backend: hccl
100.83.134.148: _initialize_distributed: Initializing with below params:
100.83.134.148: args.local_rank: 0
100.83.134.148: args.world_size: 16
100.83.134.148: args.rank: 8
100.83.134.148: args.distributed_backend: hccl
100.83.134.148: _initialize_distributed: Initializing with below params:
100.83.134.148: args.local_rank: 5
100.83.134.148: args.world_size: 16
100.83.134.148: args.rank: 13
100.83.134.148: args.distributed_backend: hccl
100.83.134.148: > setting tensorboard ...
100.83.134.148: _initialize_distributed: Initializing with below params:
100.83.134.148: args.local_rank: 7
100.83.134.148: args.world_size: 16
100.83.134.148: args.rank: 15
100.83.134.148: args.distributed_backend: hccl
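Each rank prints its _initialize_distributed parameters (global rank = node_rank * 8 + local_rank, world size 16) before joining the hccl process group. A minimal sketch of what this stage boils down to, assuming the RANK/WORLD_SIZE environment variables exported by the DeepSpeed launcher (the function name here is hypothetical, not the script's actual code):

    import os

    import torch.distributed as dist
    import habana_frameworks.torch.distributed.hccl  # noqa: F401, registers the "hccl" backend

    def initialize_distributed_sketch() -> None:
        # One process per HPU: on node_rank=1 (100.83.134.148, 8 procs/node),
        # local_rank 1 maps to global rank 9 = 1 * 8 + 1, as printed above.
        rank = int(os.environ["RANK"])
        world_size = int(os.environ["WORLD_SIZE"])  # 16 in this run
        dist.init_process_group(backend="hccl", rank=rank, world_size=world_size)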
100.83.134.148: hccl device_count: 8
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
100.83.134.148: warnings.warn(
100.83.134.148: [2024-04-24 10:38:09,922] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
100.83.134.148: [2024-04-24 10:38:09,922] [INFO] [comm.py:637:init_distributed] cdb=None
100.83.134.158: > initializing tensor model parallel with size 4
100.83.134.158: > initializing pipeline model parallel with size 2
100.83.134.158: > setting random seeds to 1234 ...
100.83.134.158: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
100.83.134.158: ============================= HABANA PT BRIDGE CONFIGURATION ===========================
100.83.134.158: PT_HPU_LAZY_MODE = 1
100.83.134.158: PT_RECIPE_CACHE_PATH =
100.83.134.158: PT_CACHE_FOLDER_DELETE = 0
100.83.134.158: PT_HPU_RECIPE_CACHE_CONFIG =
100.83.134.158: PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807
100.83.134.158: PT_HPU_LAZY_ACC_PAR_MODE = 0
100.83.134.158: PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES = 0
100.83.134.158: ---------------------------: System Configuration :---------------------------
100.83.134.158: Num CPU Cores : 160
100.83.134.158: CPU RAM : 1056375308 KB
100.83.134.158: ------------------------------------------------------------------------------
100.83.134.158: > compiling dataset index builder ...
100.83.134.158: make: Entering directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data'
100.83.134.158: g++ -O3 -Wall -shared -std=c++11 -fPIC -fdiagnostics-color -I/usr/include/python3.10 -I/usr/local/lib/python3.10/dist-packages/pybind11/include helpers.cpp -o helpers.cpython-310-x86_64-linux-gnu.so
100.83.134.148: > compiling dataset index builder ...
100.83.134.148: make: Entering directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data'
100.83.134.148: g++ -O3 -Wall -shared -std=c++11 -fPIC -fdiagnostics-color -I/usr/include/python3.10 -I/usr/local/lib/python3.10/dist-packages/pybind11/include helpers.cpp -o helpers.cpython-310-x86_64-linux-gnu.so
100.83.134.158: make: Leaving directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data'
100.83.134.158: >>> done with dataset index builder. Compilation time: 4.701 seconds
100.83.134.158: WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
100.83.134.158: > compiling and loading fused kernels ...
100.83.134.148: make: Leaving directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data'
100.83.134.148: >>> done with dataset index builder. Compilation time: 5.028 seconds
100.83.134.158: >>> done with compiling and loading fused kernels. Compilation time: 0.633 seconds
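With 16 ranks, the tensor-model-parallel size 4 and pipeline-model-parallel size 2 announced above leave a data-parallel degree of 2. The arithmetic, spelled out in plain Python (illustrative, not from the script):

    world_size = 16        # 2 nodes x 8 HPUs
    tensor_parallel = 4    # --tensor-model-parallel-size
    pipeline_parallel = 2  # --pipeline-model-parallel-size
    data_parallel = world_size // (tensor_parallel * pipeline_parallel)
    assert data_parallel == 2  # matches data=0/data=1 in the topology printed later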
100.83.134.148: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin
100.83.134.158: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin
100.83.134.148: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc
100.83.134.158: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc
100.83.134.158: wandb: Tracking run with wandb version 0.16.6
100.83.134.158: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-2gvu06me
100.83.134.158: wandb: Run `wandb offline` to turn off syncing.
100.83.134.158: wandb: Syncing run true-salad-2052
100.83.134.158: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs
100.83.134.158: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/2gvu06me
100.83.134.158: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-a9p61iy4
100.83.134.158: wandb: Syncing run prime-donkey-2052
100.83.134.158: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/a9p61iy4
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-hawf4qag
100.83.134.148: wandb: Syncing run lyric-totem-2052
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/hawf4qag
100.83.134.158: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-1lf6hrcg
100.83.134.158: wandb: Syncing run rose-eon-2052
100.83.134.158: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/1lf6hrcg
100.83.134.158: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-yfi6p93o
100.83.134.158: wandb: Syncing run avid-darkness-2052
100.83.134.158: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/yfi6p93o
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-k8bmvdda
100.83.134.148: wandb: Syncing run laced-bee-2052
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/k8bmvdda
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-9q3cubkd
100.83.134.148: wandb: Syncing run sweet-grass-2052
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/9q3cubkd
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-wulfqwkg
100.83.134.148: wandb: Syncing run frosty-valley-2052
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/wulfqwkg
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-ljwbq5pp
100.83.134.148: wandb: Syncing run glowing-dust-2052
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/ljwbq5pp
100.83.134.158: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-hqeqzovu
100.83.134.158: wandb: Syncing run driven-surf-2052
100.83.134.158: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/hqeqzovu
100.83.134.158: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-33j1g38a
100.83.134.158: wandb: Syncing run lilac-aardvark-2052
100.83.134.158: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/33j1g38a
100.83.134.158: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-8jj6i7a9
100.83.134.158: wandb: Syncing run logical-fire-2052
100.83.134.158: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/8jj6i7a9
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-kdru3pwo
100.83.134.148: wandb: Syncing run wobbly-fire-2052
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/kdru3pwo
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-ydlgw8i5
100.83.134.148: wandb: Syncing run bumbling-firebrand-2052
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/ydlgw8i5
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-wlyb6qye
100.83.134.148: wandb: Syncing run super-yogurt-2052
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/wlyb6qye
100.83.134.158: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240424_103821-r42kdunw
100.83.134.158: wandb: Syncing run dainty-brook-2052
100.83.134.158: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/r42kdunw
100.83.134.158: time to initialize megatron (seconds): 31.414
100.83.134.158: [after megatron is initialized] datetime: 2024-04-24 10:38:23
100.83.134.158: building LLaMA model ...
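Sixteen differently named wandb runs are created for this single job because every rank calls wandb.init itself; the auto-generated names (true-salad-2052, lyric-totem-2052, ...) all land in the same project. Roughly, per process (illustrative; the project and entity are taken from the URLs above):

    import wandb

    # One init per rank -> one run per rank; wandb picks the run name
    # (e.g. "true-salad-2052") when none is passed explicitly.
    run = wandb.init(project="llama_runs", entity="bharatgpt")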
100.83.134.148: *************** Using FusedSDPA ******************
100.83.134.158: *************** Using FusedSDPA ******************
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.)
100.83.134.148: return super().__torch_function__(func, types, new_args, kwargs)
100.83.134.158: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.)
100.83.134.158: return super().__torch_function__(func, types, new_args, kwargs)
100.83.134.158: > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 151830528
100.83.134.148: > number of parameters on (tensor, pipeline) model parallel rank (1, 1): 151832576
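The repeated "Using FusedSDPA" banner, printed as each attention layer is built on each rank, corresponds to --use-fused-sdpa true in the launch command: attention runs through Habana's fused scaled-dot-product-attention kernel instead of the unfused softmax path. A rough sketch of the call, assuming the FusedSDPA interface shipped in habana_frameworks (argument order mirrors torch.nn.functional.scaled_dot_product_attention; treat the exact signature as an assumption):

    import torch
    import habana_frameworks.torch.core  # registers the "hpu" device
    from habana_frameworks.torch.hpex.kernels import FusedSDPA

    # [batch, heads, seq, head_dim], head_dim = 2048 hidden / 32 heads = 64
    q = torch.randn(1, 32, 2048, 64, dtype=torch.bfloat16, device="hpu")
    k, v = torch.randn_like(q), torch.randn_like(q)
    # Causal fused attention; dropout 0.1 as in --attention-dropout 0.1
    out = FusedSDPA.apply(q, k, v, None, 0.1, True)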
100.83.134.148: > number of parameters on (tensor, pipeline) model parallel rank (2, 1): 151832576
100.83.134.148: > number of parameters on (tensor, pipeline) model parallel rank (3, 1): 151832576
100.83.134.158: > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 151830528
100.83.134.158: > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 151830528
100.83.134.148: > number of parameters on (tensor, pipeline) model parallel rank (0, 1): 151832576
100.83.134.158: [2024-04-24 10:38:23,523] [INFO] [utils.py:824:see_memory_usage] Before Building Model
100.83.134.158: [2024-04-24 10:38:23,527] [INFO] [utils.py:825:see_memory_usage] MA 0.01 GB Max_MA 0.01 GB CA 0.0 GB Max_CA 0 GB
100.83.134.158: [2024-04-24 10:38:23,527] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 438.52 GB, percent = 43.5%
100.83.134.158: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
100.83.134.158: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=0, model=1): 1, ProcessCoord(pipe=0, data=0, model=2): 2, ProcessCoord(pipe=0, data=0, model=3): 3, ProcessCoord(pipe=0, data=1, model=0): 4, ProcessCoord(pipe=0, data=1, model=1): 5, ProcessCoord(pipe=0, data=1, model=2): 6, ProcessCoord(pipe=0, data=1, model=3): 7, ProcessCoord(pipe=1, data=0, model=0): 8, ProcessCoord(pipe=1, data=0, model=1): 9, ProcessCoord(pipe=1, data=0, model=2): 10, ProcessCoord(pipe=1, data=0, model=3): 11, ProcessCoord(pipe=1, data=1, model=0): 12, ProcessCoord(pipe=1, data=1, model=1): 13, ProcessCoord(pipe=1, data=1, model=2): 14, ProcessCoord(pipe=1, data=1, model=3): 15}
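The per-rank parameter counts above can be reproduced from the launch arguments, a useful sanity check before trusting checkpoints. A worked recomputation (illustrative; assumes Megatron's default vocab padding of 128 x tensor-parallel size, i.e. GPT-2's 50257 tokens padded to 50688):

    h, ffn, layers, tp = 2048, 4096, 24, 4
    padded_vocab = 50688                  # assumed: 50257 rounded up to a multiple of 128 * tp

    attn = 4 * h * h                      # QKV + output projections, --no-bias
    swiglu = 3 * h * ffn                  # gate, up and down projections
    norms = 2 * h                         # two RMSNorm weight vectors, replicated across TP
    per_layer_rank = (attn + swiglu) // tp + norms

    embed_rank = padded_vocab * h // tp   # vocab-parallel embedding / output-head shard
    stage0 = 12 * per_layer_rank + embed_rank        # embedding stage
    stage1 = 12 * per_layer_rank + h + embed_rank    # final RMSNorm + LM head stage
    print(stage0, stage1)                 # 151830528 151832576, matching the log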
100.83.134.158: [2024-04-24 10:38:23,530] [INFO] [module.py:375:_partition_layers] Partitioning pipeline stages with method type:transformer
100.83.134.158: stage=0 layers=15
100.83.134.158: 0: _to_float16
100.83.134.158: 1: EmbeddingPipe
100.83.134.158: 2:
100.83.134.158: 3: ParallelTransformerLayerPipe
100.83.134.158: 4: ParallelTransformerLayerPipe
100.83.134.158: 5: ParallelTransformerLayerPipe
100.83.134.158: 6: ParallelTransformerLayerPipe
100.83.134.158: 7: ParallelTransformerLayerPipe
100.83.134.158: 8: ParallelTransformerLayerPipe
100.83.134.158: 9: ParallelTransformerLayerPipe
100.83.134.158: 10: ParallelTransformerLayerPipe
100.83.134.158: 11: ParallelTransformerLayerPipe
100.83.134.158: 12: ParallelTransformerLayerPipe
100.83.134.158: 13: ParallelTransformerLayerPipe
100.83.134.158: 14: ParallelTransformerLayerPipe
100.83.134.158: stage=1 layers=17
100.83.134.158: 15: ParallelTransformerLayerPipe
100.83.134.158: 16: ParallelTransformerLayerPipe
100.83.134.158: 17: ParallelTransformerLayerPipe
100.83.134.158: 18: ParallelTransformerLayerPipe
100.83.134.158: 19: ParallelTransformerLayerPipe
100.83.134.158: 20: ParallelTransformerLayerPipe
100.83.134.158: 21: ParallelTransformerLayerPipe
100.83.134.158: 22: ParallelTransformerLayerPipe
100.83.134.158: 23: ParallelTransformerLayerPipe
100.83.134.158: 24: ParallelTransformerLayerPipe
100.83.134.158: 25: ParallelTransformerLayerPipe
100.83.134.158: 26: ParallelTransformerLayerPipe
100.83.134.158: 27:
100.83.134.158: 28: WrapName
100.83.134.158: 29: WrapName
100.83.134.158: 30:
100.83.134.158: 31: float16_to_fp32
100.83.134.158: loss: CrossEntropy
100.83.134.158: *************** Using FusedSDPA ******************
100.83.134.158: [2024-04-24 10:38:23,665] [INFO] [utils.py:824:see_memory_usage] After Building Model
100.83.134.158: [2024-04-24 10:38:23,668] [INFO] [utils.py:825:see_memory_usage] MA 0.01 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
100.83.134.158: [2024-04-24 10:38:23,669] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 438.58 GB, percent = 43.5%
100.83.134.158: > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 151830528
100.83.134.158: > learning rate decay style: cosine
100.83.134.158: DeepSpeed is enabled.
100.83.134.158: [2024-04-24 10:38:23,672] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.12.4+hpu.synapse.v1.14.0, git-hash=fad45b2, git-branch=1.14.0
100.83.134.158: [2024-04-24 10:38:23,919] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.148: [2024-04-24 10:38:23,940] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.158: [2024-04-24 10:38:23,999] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
100.83.134.158: [2024-04-24 10:38:23,999] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
100.83.134.158: [2024-04-24 10:38:24,000] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
100.83.134.158: [2024-04-24 10:38:24,001] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = AdamW
100.83.134.158: [2024-04-24 10:38:24,001] [INFO] [logging.py:96:log_dist] [Rank 0] Creating BF16 optimizer
100.83.134.158: [2024-04-24 10:38:24,064] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.158: [2024-04-24 10:38:24,076] [INFO] [utils.py:824:see_memory_usage] begin bf16_optimizer
100.83.134.158: [2024-04-24 10:38:24,080] [INFO] [utils.py:825:see_memory_usage] MA 0.29 GB Max_MA 0.31 GB CA 0.0 GB Max_CA 0 GB
100.83.134.158: [2024-04-24 10:38:24,080] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 438.64 GB, percent = 43.5%
100.83.134.158: [2024-04-24 10:38:24,145] [INFO] [utils.py:824:see_memory_usage] before initializing group 0
100.83.134.158: [2024-04-24 10:38:24,149] [INFO] [utils.py:825:see_memory_usage] MA 0.29 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
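The type:transformer method above balances only the 24 ParallelTransformerLayerPipe modules across the 2 stages; the cast, embedding, norm and head modules ride along on the first and last stage, which is why the stages list 15 and 17 slots but carry 12 transformer layers each:

    num_transformer_layers = 24   # --num-layers
    pipeline_stages = 2           # --pipeline-model-parallel-size
    per_stage = num_transformer_layers // pipeline_stages
    assert per_stage == 12        # slots 3-14 on stage 0, slots 15-26 on stage 1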
100.83.134.158: [2024-04-24 10:38:24,149] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 438.65 GB, percent = 43.5%
100.83.134.158: [2024-04-24 10:38:24,257] [INFO] [utils.py:824:see_memory_usage] after initializing group 0
100.83.134.158: [2024-04-24 10:38:24,261] [INFO] [utils.py:825:see_memory_usage] MA 0.29 GB Max_MA 0.58 GB CA 0.0 GB Max_CA 0 GB
100.83.134.158: [2024-04-24 10:38:24,261] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 438.65 GB, percent = 43.5%
100.83.134.158: [2024-04-24 10:38:24,318] [INFO] [utils.py:824:see_memory_usage] before initializing group 1
100.83.134.158: [2024-04-24 10:38:24,321] [INFO] [utils.py:825:see_memory_usage] MA 0.29 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
100.83.134.158: [2024-04-24 10:38:24,322] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 438.65 GB, percent = 43.5%
100.83.134.148: [2024-04-24 10:38:24,338] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.158: [2024-04-24 10:38:24,364] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.158: [2024-04-24 10:38:24,366] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.158: [2024-04-24 10:38:24,369] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.158: [2024-04-24 10:38:24,371] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.148: [2024-04-24 10:38:24,391] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.148: [2024-04-24 10:38:24,392] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.158: [2024-04-24 10:38:24,410] [INFO] [utils.py:824:see_memory_usage] after initializing group 1
100.83.134.158: [2024-04-24 10:38:24,414] [INFO] [utils.py:825:see_memory_usage] MA 1.14 GB Max_MA 1.14 GB CA 0.0 GB Max_CA 0 GB
100.83.134.158: [2024-04-24 10:38:24,414] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 438.71 GB, percent = 43.5%
100.83.134.158: [2024-04-24 10:38:24,490] [INFO] [utils.py:824:see_memory_usage] before initialize_optimizer
100.83.134.158: [2024-04-24 10:38:24,493] [INFO] [utils.py:825:see_memory_usage] MA 1.14 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
100.83.134.158: [2024-04-24 10:38:24,494] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 438.72 GB, percent = 43.5%
100.83.134.158: [2024-04-24 10:38:24,554] [INFO] [utils.py:824:see_memory_usage] end initialize_optimizer
100.83.134.158: [2024-04-24 10:38:24,557] [INFO] [utils.py:825:see_memory_usage] MA 1.14 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
100.83.134.158: [2024-04-24 10:38:24,557] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 438.72 GB, percent = 43.5%
100.83.134.158: [2024-04-24 10:38:24,621] [INFO] [utils.py:824:see_memory_usage] end bf16_optimizer
100.83.134.158: [2024-04-24 10:38:24,625] [INFO] [utils.py:825:see_memory_usage] MA 1.14 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
100.83.134.158:
100.83.134.158: [2024-04-24 10:38:24,626] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = BF16_Optimizer
100.83.134.158: [2024-04-24 10:38:24,626] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using client LR scheduler
100.83.134.158: [2024-04-24 10:38:24,626] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler =
100.83.134.158: [2024-04-24 10:38:24,626] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0], mom=[(0.9, 0.95), (0.9, 0.95)]
100.83.134.158: [2024-04-24 10:38:24,627] [INFO] [config.py:992:print] DeepSpeedEngine configuration:
100.83.134.158: [2024-04-24 10:38:24,627] [INFO] [config.py:996:print] activation_checkpointing_config {
100.83.134.158: "partition_activations": false,
100.83.134.158: "contiguous_memory_optimization": false,
100.83.134.158: "cpu_checkpointing": false,
100.83.134.158: "number_checkpoints": null,
100.83.134.158: "synchronize_checkpoint_boundary": false,
100.83.134.158: "profile": false
100.83.134.158: }
100.83.134.158: [2024-04-24 10:38:24,627] [INFO] [config.py:996:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
100.83.134.158: [2024-04-24 10:38:24,627] [INFO] [config.py:996:print] amp_enabled .................. False
100.83.134.158: [2024-04-24 10:38:24,627] [INFO] [config.py:996:print] amp_params ................... False
100.83.134.158: [2024-04-24 10:38:24,627] [INFO] [config.py:996:print] autotuning_config ............ {
100.83.134.158: "enabled": false,
100.83.134.158: "start_step": null,
100.83.134.158: "end_step": null,
100.83.134.158: "metric_path": null,
100.83.134.158: "arg_mappings": null,
100.83.134.158: "metric": "throughput",
100.83.134.158: "model_info": null,
100.83.134.158: "results_dir": "autotuning_results",
100.83.134.158: "exps_dir": "autotuning_exps",
100.83.134.158: "overwrite": true,
100.83.134.158: "fast": true,
100.83.134.158: "start_profile_step": 3,
100.83.134.158: "end_profile_step": 5,
100.83.134.158: "tuner_type": "gridsearch",
100.83.134.158: "tuner_early_stopping": 5,
100.83.134.158: "tuner_num_trials": 50,
100.83.134.158: "model_info_path": null,
100.83.134.158: "mp_size": 1,
100.83.134.158: "max_train_batch_size": null,
100.83.134.158: "min_train_batch_size": 1,
100.83.134.158: "max_train_micro_batch_size_per_gpu": 1.024000e+03,
100.83.134.158: "min_train_micro_batch_size_per_gpu": 1,
100.83.134.158: "num_tuning_micro_batch_sizes": 3
100.83.134.158: }
100.83.134.158: [2024-04-24 10:38:24,627] [INFO] [config.py:996:print] bfloat16_accumulate_grads_via_hooks True
100.83.134.158: [2024-04-24 10:38:24,627] [INFO] [config.py:996:print] bfloat16_enabled ............. True
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] checkpoint_parallel_write_pipeline False
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] checkpoint_tag_validation_enabled True
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] checkpoint_tag_validation_fail False
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] comms_config .................
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] communication_data_type ...... None
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] curriculum_enabled_legacy .... False
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] curriculum_params_legacy ..... False
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] data_efficiency_enabled ...... False
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] dataloader_drop_last ......... False
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] disable_allgather ............ False
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] dump_state ................... False
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] dynamic_loss_scale_args ...... None
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] eigenvalue_enabled ........... False
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] eigenvalue_gas_boundary_resolution 1
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] eigenvalue_layer_name ........ bert.encoder.layer
100.83.134.158: [2024-04-24 10:38:24,628] [INFO] [config.py:996:print] eigenvalue_layer_num ......... 0
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] eigenvalue_max_iter .......... 100
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] eigenvalue_stability ......... 1e-06
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] eigenvalue_tol ............... 0.01
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] eigenvalue_verbose ........... False
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] elasticity_enabled ........... False
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] flops_profiler_config ........ {
100.83.134.158: "enabled": false,
100.83.134.158: "recompute_fwd_factor": 0.0,
100.83.134.158: "profile_step": 1,
100.83.134.158: "module_depth": -1,
100.83.134.158: "top_modules": 1,
100.83.134.158: "detailed": true,
100.83.134.158: "output_file": null
100.83.134.158: }
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] fp16_auto_cast ............... None
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] fp16_enabled ................. False
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] fp16_master_weights_and_gradients False
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] global_rank .................. 0
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] grad_accum_dtype ............. None
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] gradient_accumulation_steps .. 128
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] gradient_clipping ............ 1.0
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] gradient_predivide_factor .... 1.0
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] initial_dynamic_scale ........ 1
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] load_universal_checkpoint .... False
100.83.134.158: [2024-04-24 10:38:24,629] [INFO] [config.py:996:print] loss_scale ................... 1.0
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] memory_breakdown ............. False
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] mics_hierarchial_params_gather False
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] mics_shard_size .............. -1
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] nebula_config ................ {
100.83.134.158: "enabled": false,
100.83.134.158: "persistent_storage_path": null,
100.83.134.158: "persistent_time_interval": 100,
100.83.134.158: "num_of_version_in_retention": 2,
100.83.134.158: "enable_nebula_load": true,
100.83.134.158: "load_path": null
100.83.134.158: }
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] optimizer_legacy_fusion ...... False
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] optimizer_name ............... None
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] optimizer_params ............. None
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': False, 'grad_partitioned': False}
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] pld_enabled .................. False
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] pld_params ................... False
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] prescale_gradients ........... False
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] scheduler_name ............... None
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] scheduler_params ............. None
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] seq_parallel_communication_data_type torch.float32
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] sparse_attention ............. None
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] sparse_gradients_enabled ..... False
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] steps_per_print .............. 10
100.83.134.158: [2024-04-24 10:38:24,630] [INFO] [config.py:996:print] train_batch_size ............. 256
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] train_micro_batch_size_per_gpu 1
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] use_data_before_expert_parallel_ False
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] use_node_local_storage ....... False
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] wall_clock_breakdown ......... False
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] weight_quantization_config ... None
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] world_size ................... 2
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] zero_allow_comm_data_type_fp32 False
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] zero_allow_untested_optimizer False
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=False reduce_bucket_size=500,000,000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50,000,000 param_persistence_threshold=100,000 model_persistence_threshold=sys.maxsize max_live_parameters=1,000,000,000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False use_all_reduce_for_fetch_params=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] zero_enabled ................. False
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] zero_force_ds_cpu_optimizer .. True
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:996:print] zero_optimization_stage ...... 0
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [config.py:982:print_user_config] json = {
100.83.134.158: "train_batch_size": 256,
100.83.134.158: "train_micro_batch_size_per_gpu": 1,
100.83.134.158: "steps_per_print": 10,
100.83.134.158: "gradient_clipping": 1.0,
100.83.134.158: "zero_optimization": {
100.83.134.158: "stage": 0
100.83.134.158: },
100.83.134.158: "bf16": {
100.83.134.158: "enabled": true,
100.83.134.158: "accumulate_grads_via_hooks": true
100.83.134.158: },
100.83.134.158: "fp16": {
100.83.134.158: "enabled": false
100.83.134.158: },
100.83.134.158: "wall_clock_breakdown": false,
100.83.134.158: "pipeline": {
100.83.134.158: "pipe_partitioned": false,
100.83.134.158: "grad_partitioned": false
100.83.134.158: }
100.83.134.158: }
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [engine.py:99:__init__] CONFIG: micro_batches=128 micro_batch_size=1
100.83.134.158: [2024-04-24 10:38:24,631] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
100.83.134.158: [2024-04-24 10:38:25,366] [INFO] [engine.py:180:__init__] RANK=0 STAGE=0 LAYERS=15 [0, 15) STAGE_PARAMS=151830528 (151.831M) TOTAL_PARAMS=1214652416 (1214.652M) UNIQUE_PARAMS=1214652416 (1214.652M)
100.83.134.158: [2024-04-24 10:38:25,366] [INFO] [engine.py:180:__init__] RANK=3 STAGE=0 LAYERS=15 [0, 15) STAGE_PARAMS=151830528 (151.831M) TOTAL_PARAMS=1214652416 (1214.652M) UNIQUE_PARAMS=1214652416 (1214.652M)
100.83.134.158: [2024-04-24 10:38:25,366] [INFO] [engine.py:180:__init__] RANK=1 STAGE=0 LAYERS=15 [0, 15) STAGE_PARAMS=151830528 (151.831M) TOTAL_PARAMS=1214652416 (1214.652M) UNIQUE_PARAMS=1214652416 (1214.652M)
100.83.134.158: [2024-04-24 10:38:25,366] [INFO] [engine.py:180:__init__] RANK=2 STAGE=0 LAYERS=15 [0, 15) STAGE_PARAMS=151830528 (151.831M) TOTAL_PARAMS=1214652416 (1214.652M) UNIQUE_PARAMS=1214652416 (1214.652M)
100.83.134.148: [2024-04-24 10:38:25,370] [INFO] [engine.py:180:__init__] RANK=8 STAGE=1 LAYERS=17 [15, 32) STAGE_PARAMS=151832576 (151.833M) TOTAL_PARAMS=1214652416 (1214.652M) UNIQUE_PARAMS=1214652416 (1214.652M)
100.83.134.148: [2024-04-24 10:38:25,370] [INFO] [engine.py:180:__init__] RANK=10 STAGE=1 LAYERS=17 [15, 32) STAGE_PARAMS=151832576 (151.833M) TOTAL_PARAMS=1214652416 (1214.652M) UNIQUE_PARAMS=1214652416 (1214.652M)
100.83.134.148: [2024-04-24 10:38:25,370] [INFO] [engine.py:180:__init__] RANK=11 STAGE=1 LAYERS=17 [15, 32) STAGE_PARAMS=151832576 (151.833M) TOTAL_PARAMS=1214652416 (1214.652M) UNIQUE_PARAMS=1214652416 (1214.652M)
100.83.134.148: [2024-04-24 10:38:25,370] [INFO] [engine.py:180:__init__] RANK=9 STAGE=1 LAYERS=17 [15, 32) STAGE_PARAMS=151832576 (151.833M) TOTAL_PARAMS=1214652416 (1214.652M) UNIQUE_PARAMS=1214652416 (1214.652M)
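These stage and batch numbers are internally consistent; a quick arithmetic check using only values printed in this log (four tensor-parallel shards per pipeline stage, and the batch settings from the engine config):

# Per-TP-rank parameters per stage, from the RANK=... lines above.
stage0, stage1, tp = 151_830_528, 151_832_576, 4
assert tp * (stage0 + stage1) == 1_214_652_416  # TOTAL_PARAMS / UNIQUE_PARAMS

# train_batch_size = micro_batch_size * gradient_accumulation_steps * DP size,
# so the data-parallel size falls out as 2, matching world_size=2 in the
# config dump and micro_batches=128 per pipeline in the engine CONFIG line.
train_batch, micro_batch, grad_accum = 256, 1, 128
assert train_batch // (micro_batch * grad_accum) == 2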
100.83.134.148: [2024-04-24 10:38:25,373] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
100.83.134.158: [2024-04-24 10:38:25,370] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
100.83.134.158: WARNING: could not find the metadata file /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2
100.83.134.158: will not load any checkpoints and will start from random
100.83.134.148: [2024-04-24 10:38:25,374] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
100.83.134.158: [2024-04-24 10:38:25,371] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
100.83.134.158: [2024-04-24 10:38:25,371] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
100.83.134.148: time (ms) | load-checkpoint: 2.70
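These per-rank warnings are benign on a fresh run: as the message itself says, DeepSpeed resolves which checkpoint tag to load from a plain-text file named latest inside the load directory, none exists yet, so training starts from random weights. A minimal sketch of creating that tag file by hand to resume from a specific step (the path is taken from this log; the shard files for the named tag must already exist):

from pathlib import Path

ckpt_dir = Path("/data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2")
# DeepSpeed's save_checkpoint normally writes this file itself; writing it
# manually only makes sense to point a resume at an existing tag.
(ckpt_dir / "latest").write_text("global_step20")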
100.83.134.158: [after model, optimizer, and learning rate scheduler are built] datetime: 2024-04-24 10:38:25
100.83.134.158: > building train, validation, and test datasets ...
100.83.134.158: > datasets target sizes (minimum size):
100.83.134.158: train: 2560000
100.83.134.158: validation: 1282560
100.83.134.158: test: 2560
100.83.134.158: > building train, validation, and test datasets for GPT ...
100.83.134.158: Single data path provided for train, valid & test
100.83.134.158: > building dataset index ...
100.83.134.158: reading sizes...
100.83.134.158: reading pointers...
100.83.134.158: reading document index...
100.83.134.158: creating numpy buffer of mmap...
100.83.134.158: creating memory view of numpy buffer...
100.83.134.158: > finished creating indexed dataset in 0.000572 seconds
100.83.134.158: number of documents: 1558306
100.83.134.158: > dataset split:
100.83.134.158: train:
100.83.134.158: document indices in [0, 1509999) total of 1509999 documents
100.83.134.158: validation:
100.83.134.158: document indices in [1509999, 1556748) total of 46749 documents
100.83.134.158: test:
100.83.134.158: document indices in [1556748, 1558306) total of 1558 documents
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
100.83.134.158: > loaded doc-idx mapping from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
100.83.134.158: > loaded sample-idx mapping from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: > loaded shuffle-idx mapping from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: loaded indexed file in 0.002 seconds
100.83.134.158: total number of samples: 15244235
100.83.134.158: total number of epochs: 1
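The index-map filenames being loaded follow a fixed naming pattern that encodes the number of samples (ns), sequence length (sl), and shuffle seed (s). A small helper reproducing the names seen in this log (the pattern is inferred directly from the filenames themselves):

def indexmap_path(prefix, split, num_samples, seq_len, seed, kind):
    # kind is one of "doc", "sample", "shuffle"
    return f"{prefix}_{split}_indexmap_{num_samples}ns_{seq_len}sl_{seed}s_{kind}_idx.npy"

print(indexmap_path("/data/arxiv//tokenized_text_document", "train", 2560000, 2048, 1234, "doc"))
# /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy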
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_doc_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_doc_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_doc_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_sample_idx.npy
100.83.134.158: > loaded doc-idx mapping from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_doc_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_sample_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_doc_idx.npy
100.83.134.158: > loaded sample-idx mapping from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_sample_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: > loaded shuffle-idx mapping from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: loaded indexed file in 0.001 seconds
100.83.134.158: total number of samples: 1443484
100.83.134.158: total number of epochs: 3
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_sample_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_sample_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_1282560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
100.83.134.158: > loaded doc-idx mapping from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
100.83.134.158: > loaded sample-idx mapping from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: > loaded shuffle-idx mapping from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
100.83.134.158: loaded indexed file in 0.001 seconds
100.83.134.158: total number of samples: 16581
100.83.134.158: total number of epochs: 1
100.83.134.158: > finished creating GPT datasets ...
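The "total number of epochs: 3" for validation follows from the target sizes printed earlier: one pass over the 46,749 validation documents yields roughly a third of the requested 1,282,560 samples, so the builder concatenates three epochs. A check using only the logged totals (the per-epoch figure is an approximation, since the final epoch may be truncated):

target = 1_282_560            # requested validation samples
total, epochs = 1_443_484, 3  # logged totals
per_epoch = total / epochs    # ~481,161 samples per pass over the split
assert (epochs - 1) * per_epoch < target <= total  # 2 epochs too few, 3 enough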
100.83.134.148: time (ms) | model-and-optimizer-setup: 1956.40 | train/valid/test-data-iterators-setup: 1587.35
100.83.134.158: [after dataloaders are built] datetime: 2024-04-24 10:38:27
100.83.134.158: done with setup ...
100.83.134.158: training ...
100.83.134.158: [before the start of training step] datetime: 2024-04-24 10:38:27
100.83.134.158: [2024-04-24 14:03:37,049] [INFO] [logging.py:96:log_dist] [Rank 0] step=10, skipped=0, lr=[1.4999999999999998e-06, 1.4999999999999998e-06], mom=[(0.9, 0.95), (0.9, 0.95)]
100.83.134.158: [Rank 1] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
100.83.134.158: [Rank 3] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
100.83.134.148: [Rank 11] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
100.83.134.148: [Rank 9] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
100.83.134.158: steps: 10 loss: 10.9455 iter time (s): 1231.017 samples/sec: 0.208
100.83.134.148: iteration 10/ 10000 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (ms): 1230993.0 | learning rate: 1.500E-06 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 0.208 | TFLOPs: 0.24 |
100.83.134.158: [Rank 2] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
100.83.134.148: [Rank 10] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
100.83.134.158: [Rank 0] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
100.83.134.148: [Rank 8] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
100.83.134.158: [2024-04-24 17:30:06,287] [INFO] [logging.py:96:log_dist] [Rank 0] step=20, skipped=0, lr=[2.9999999999999997e-06, 2.9999999999999997e-06], mom=[(0.9, 0.95), (0.9, 0.95)]
100.83.134.158: steps: 20 loss: 9.1008 iter time (s): 1238.924 samples/sec: 0.207
100.83.134.148: iteration 20/ 10000 | consumed samples: 5120 | consumed tokens: 10485760 | elapsed time per iteration (ms): 1238923.2 | learning rate: 3.000E-06 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 0.207 | TFLOPs: 0.24 |
100.83.134.148: 2024-04-24 17:30:06 Start last rank evaluation
100.83.134.158: Evaluating iter 10/10
100.83.134.148: --------------------------------------------------------------------------------------------------------------------
100.83.134.148: 2024-04-24 17:31:05 | validation loss at iteration 20 | lm loss value: 8.887300E+00 | lm loss PPL: 7.239443E+03 |
100.83.134.148: --------------------------------------------------------------------------------------------------------------------
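The derived metrics in these iteration and validation lines cross-check against one another (perplexity is simply exp of the language-model loss):

import math

global_batch, seq_len = 256, 2048
assert 10 * global_batch == 2_560                # consumed samples at iteration 10
assert 10 * global_batch * seq_len == 5_242_880  # consumed tokens at iteration 10
print(global_batch / 1231.017)                   # ~0.208 samples/sec, as logged
print(math.exp(8.8873))                          # ~7.24e3, the logged lm loss PPL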
100.83.134.158: saving checkpoint at iteration 20 to /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2
100.83.134.158: [2024-04-24 17:31:05,586] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step20 is about to be saved!
100.83.134.158: [2024-04-24 17:31:05,616] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_01-model_03-model_states.pt...
100.83.134.148: [2024-04-24 17:31:05,620] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_15-model_03-model_states.pt...
100.83.134.148: [2024-04-24 17:31:05,621] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_15-model_02-model_states.pt...
100.83.134.148: [2024-04-24 17:31:05,622] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_15-model_00-model_states.pt...
100.83.134.148: [2024-04-24 17:31:05,623] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_15-model_01-model_states.pt...
100.83.134.158: [2024-04-24 17:31:05,626] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_01-model_02-model_states.pt...
100.83.134.158: [2024-04-24 17:31:05,639] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_01-model_01-model_states.pt...
100.83.134.158: [2024-04-24 17:31:05,643] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_01-model_00-model_states.pt...
100.83.134.148: [2024-04-24 17:31:05,662] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_15-model_00-model_states.pt.
100.83.134.158: [2024-04-24 17:31:05,677] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_01-model_03-model_states.pt.
...
100.83.134.158: [2024-04-24 17:31:06,653] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_14-model_03-model_states.pt...
100.83.134.158: [2024-04-24 17:31:06,654] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_12-model_02-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,669] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_13-model_01-model_states.pt... 100.83.134.158: [2024-04-24 17:31:06,678] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_13-model_02-model_states.pt... 100.83.134.158: [2024-04-24 17:31:06,679] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_14-model_03-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,682] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_03_model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,684] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_23-model_01-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,685] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_24-model_00-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,688] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_10-model_00-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,690] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_03_model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,690] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_13-model_01-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,693] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_03_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:06,694] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_03_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:06,699] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_22-model_02-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,703] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_22-model_03-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,706] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_11-model_00-model_states.pt... 
100.83.134.148: [2024-04-24 17:31:06,704] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_25-model_00-model_states.pt... 100.83.134.158: [2024-04-24 17:31:06,708] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_14-model_01-model_states.pt... 100.83.134.158: [2024-04-24 17:31:06,709] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_13-model_02-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,706] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_24-model_01-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,725] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_23-model_02-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,725] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_23-model_03-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,733] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_25-model_00-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,744] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_14-model_02-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,744] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_24-model_01-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,755] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_26-model_00-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,765] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_25-model_01-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,770] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_23-model_02-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,773] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_23-model_03-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,784] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_26-model_00-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,788] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_14-model_01-model_states.pt. 
100.83.134.158: [2024-04-24 17:31:06,790] [INFO] [logging.py:96:log_dist] [Rank 1] Saving model checkpoint: /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_01_model_states.pt 100.83.134.148: [2024-04-24 17:31:06,787] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_28-model_00-model_states.pt... 100.83.134.158: [2024-04-24 17:31:06,790] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_01_model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,789] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_24-model_02-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,791] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_24-model_03-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,792] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_28-model_00-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,795] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_11-model_00-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,797] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_14-model_02-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,795] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_25-model_01-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,799] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_02_model_states.pt... 100.83.134.158: [2024-04-24 17:31:06,801] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_01_model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,803] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_01_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:06,805] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_01_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:06,807] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_02_model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,809] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_02_optim_states.pt... 
100.83.134.158: [2024-04-24 17:31:06,810] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_02_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:06,819] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_12-model_00-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,818] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_26-model_01-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,820] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_24-model_02-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,831] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_29-model_00-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,831] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_24-model_03-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,839] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_25-model_02-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,847] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_26-model_01-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,850] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_28-model_01-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,854] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_25-model_03-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,858] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_28-model_01-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,877] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_25-model_03-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,880] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_25-model_02-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,894] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_29-model_00-model_states.pt. 100.83.134.158: [2024-04-24 17:31:06,897] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_12-model_00-model_states.pt. 
100.83.134.148: [2024-04-24 17:31:06,895] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_26-model_03-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,899] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_04_model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,902] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_29-model_01-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,911] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_26-model_02-model_states.pt... 100.83.134.158: [2024-04-24 17:31:06,914] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_13-model_00-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,912] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_04_model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,920] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_04_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:06,922] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_26-model_03-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,925] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_04_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:06,926] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_28-model_03-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,927] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_28-model_03-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,940] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_26-model_02-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,944] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_28-model_02-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,946] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_28-model_02-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,962] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_29-model_03-model_states.pt... 
100.83.134.148: [2024-04-24 17:31:06,970] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_29-model_01-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,975] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_05_model_states.pt... 100.83.134.158: [2024-04-24 17:31:06,987] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_13-model_00-model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,984] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_05_model_states.pt. 100.83.134.148: [2024-04-24 17:31:06,986] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_29-model_02-model_states.pt... 100.83.134.148: [2024-04-24 17:31:06,987] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_05_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:06,989] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_05_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:07,004] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_14-model_00-model_states.pt... 100.83.134.148: [2024-04-24 17:31:07,016] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_29-model_03-model_states.pt. 100.83.134.148: [2024-04-24 17:31:07,019] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_07_model_states.pt... 100.83.134.148: [2024-04-24 17:31:07,028] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_07_model_states.pt. 100.83.134.148: [2024-04-24 17:31:07,030] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_07_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:07,031] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_07_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:07,044] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_29-model_02-model_states.pt. 100.83.134.148: [2024-04-24 17:31:07,049] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_06_model_states.pt... 
100.83.134.148: [2024-04-24 17:31:07,058] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_06_model_states.pt. 100.83.134.148: [2024-04-24 17:31:07,061] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_06_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:07,063] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_06_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:07,081] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/layer_14-model_00-model_states.pt. 100.83.134.158: [2024-04-24 17:31:07,084] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_00_model_states.pt 100.83.134.158: [2024-04-24 17:31:07,084] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_00_model_states.pt... 100.83.134.158: [2024-04-24 17:31:07,093] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/mp_rank_00_model_states.pt. 100.83.134.158: [2024-04-24 17:31:07,096] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:07,097] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:08,214] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_03_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:08,214] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_03_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:08,258] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_03_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:08,258] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_03_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:08,313] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_01_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:08,314] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_01_optim_states.pt... 
100.83.134.148: [2024-04-24 17:31:08,374] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_04_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:08,374] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_04_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:08,401] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_02_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:08,402] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_02_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:08,403] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_02_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:08,404] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_02_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:08,407] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_01_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:08,407] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_01_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:08,493] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_07_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:08,493] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_07_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:08,498] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_04_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:08,499] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_04_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:08,564] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:08,565] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:08,597] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_06_optim_states.pt. 
100.83.134.148: [2024-04-24 17:31:08,597] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_06_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:08,603] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_06_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:08,603] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_06_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:08,612] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_05_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:08,612] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_05_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:08,639] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_07_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:08,640] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_07_optim_states.pt... 100.83.134.148: [2024-04-24 17:31:08,670] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_05_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:08,670] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_05_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:08,708] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:08,709] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 100.83.134.158: [2024-04-24 17:31:09,951] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_01_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:09,951] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_01_optim_states.pt 100.83.134.158: [2024-04-24 17:31:09,951] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.158: [2024-04-24 17:31:10,008] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_03_optim_states.pt. 
100.83.134.158: [2024-04-24 17:31:10,009] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_03_optim_states.pt 100.83.134.158: [2024-04-24 17:31:10,009] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.158: [2024-04-24 17:31:10,101] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_03_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:10,101] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_03_optim_states.pt 100.83.134.158: [2024-04-24 17:31:10,102] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.148: [2024-04-24 17:31:10,190] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_07_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:10,190] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_07_optim_states.pt 100.83.134.148: [2024-04-24 17:31:10,190] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.148: [2024-04-24 17:31:10,290] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_06_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:10,290] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_06_optim_states.pt 100.83.134.148: [2024-04-24 17:31:10,291] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.158: [2024-04-24 17:31:10,423] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_01_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:10,423] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_01_optim_states.pt 100.83.134.158: [2024-04-24 17:31:10,423] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.158: [2024-04-24 17:31:10,459] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:10,459] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 100.83.134.158: [2024-04-24 17:31:10,459] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 
100.83.134.158: [2024-04-24 17:31:10,590] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_02_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:10,591] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_02_optim_states.pt 100.83.134.158: [2024-04-24 17:31:10,591] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.148: [2024-04-24 17:31:10,636] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_04_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:10,637] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_04_optim_states.pt 100.83.134.148: [2024-04-24 17:31:10,637] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.158: [2024-04-24 17:31:10,673] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_02_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:10,673] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_02_optim_states.pt 100.83.134.158: [2024-04-24 17:31:10,674] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.158: [2024-04-24 17:31:11,171] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 100.83.134.158: [2024-04-24 17:31:11,173] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt 100.83.134.158: [2024-04-24 17:31:11,174] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.148: [2024-04-24 17:31:11,828] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_07_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:11,828] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_07_optim_states.pt 100.83.134.148: [2024-04-24 17:31:11,828] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.148: [2024-04-24 17:31:11,838] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_04_optim_states.pt. 
100.83.134.148: [2024-04-24 17:31:11,838] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_04_optim_states.pt 100.83.134.148: [2024-04-24 17:31:11,838] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.148: [2024-04-24 17:31:12,558] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_05_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:12,558] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_05_optim_states.pt 100.83.134.148: [2024-04-24 17:31:12,558] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.148: [2024-04-24 17:31:12,616] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_06_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:12,617] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_1_mp_rank_06_optim_states.pt 100.83.134.148: [2024-04-24 17:31:12,617] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 100.83.134.148: [2024-04-24 17:31:12,625] [INFO] [torch_checkpoint_engine.py:24:save] [Torch] Saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_05_optim_states.pt. 100.83.134.148: [2024-04-24 17:31:12,625] [INFO] [engine.py:3481:_save_zero_checkpoint] bf16_zero checkpoint saved /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20/bf16_zero_pp_rank_0_mp_rank_05_optim_states.pt 100.83.134.148: [2024-04-24 17:31:12,625] [INFO] [torch_checkpoint_engine.py:34:commit] [Torch] Checkpoint global_step20 is ready now! 
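[Annotation, not part of the run's output: the file names above imply a fixed layout per global step. With TP=4 and PP=2 there are 8 model-parallel ranks (mp_rank_00..07), each writing one mp_rank_*_model_states.pt; each mp rank additionally writes one bf16_zero_*_optim_states.pt per ZeRO/data-parallel partition, and transformer layers are written once per tensor-parallel shard as layer_NN-model_XX-model_states.pt. The sketch below only reproduces that naming scheme as inferred from the log; the helper name, defaults, and the reading of the 'pp_rank' field as the DP partition index (it only takes values 0 and 1 here, matching DP=2) are assumptions, not DeepSpeed's documented API.]

import os

# Hypothetical helper (illustrative only): enumerate the per-step file names
# implied by the log above for a given 3D parallel layout.
def expected_checkpoint_files(dp=2, tp=4, pp=2, layer_ids=()):
    files = []
    for mp in range(tp * pp):
        # One module-state shard per (tp, pp) model-parallel rank.
        files.append(f"mp_rank_{mp:02d}_model_states.pt")
        # One bf16 ZeRO optimizer shard per data-parallel partition of that
        # mp rank (assumption: the 'pp_rank' field counts ZeRO/DP partitions).
        for zero_part in range(dp):
            files.append(
                f"bf16_zero_pp_rank_{zero_part}_mp_rank_{mp:02d}_optim_states.pt")
    # Layer files are sharded across tensor-parallel ranks only; which layer
    # indices exist depends on the model depth and the pipeline split.
    for layer in layer_ids:
        for tp_rank in range(tp):
            files.append(f"layer_{layer:02d}-model_{tp_rank:02d}-model_states.pt")
    return files

ckpt = "/data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20"
# Layers 9-11 are visible in the log excerpt; pass the full set in practice.
missing = [f for f in expected_checkpoint_files(layer_ids=(9, 10, 11))
           if not os.path.isfile(os.path.join(ckpt, f))]
print("missing:", missing or "none")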
100.83.134.158: successfully saved checkpoint at iteration 20 to /data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2
100.83.134.158: 3D configuration: DP=2 TP=4 PP=2
100.83.134.158:
100.83.134.158: Verify ** layer_ ** files
100.83.134.158:
100.83.134.158: Checking pp_stage=0
100.83.134.158: 3.input_layernorm.weight: OK [n=4]
100.83.134.158: 3.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 4.input_layernorm.weight: OK [n=4]
100.83.134.158: 4.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 5.input_layernorm.weight: OK [n=4]
100.83.134.158: 5.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 6.input_layernorm.weight: OK [n=4]
100.83.134.158: 6.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 7.input_layernorm.weight: OK [n=4]
100.83.134.158: 7.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 8.input_layernorm.weight: OK [n=4]
100.83.134.158: 8.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 9.input_layernorm.weight: OK [n=4]
100.83.134.158: 9.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 10.input_layernorm.weight: OK [n=4]
100.83.134.158: 10.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 11.input_layernorm.weight: OK [n=4]
100.83.134.158: 11.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 12.input_layernorm.weight: OK [n=4]
100.83.134.158: 12.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 13.input_layernorm.weight: OK [n=4]
100.83.134.158: 13.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 14.input_layernorm.weight: OK [n=4]
100.83.134.158: 14.post_attention_layernorm.weight: OK [n=4]
100.83.134.158:
100.83.134.158: Checking pp_stage=1
100.83.134.158: 15.input_layernorm.weight: OK [n=4]
100.83.134.158: 15.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 16.input_layernorm.weight: OK [n=4]
100.83.134.158: 16.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 17.input_layernorm.weight: OK [n=4]
100.83.134.158: 17.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 18.input_layernorm.weight: OK [n=4]
100.83.134.158: 18.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 19.input_layernorm.weight: OK [n=4]
100.83.134.158: 19.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 20.input_layernorm.weight: OK [n=4]
100.83.134.158: 20.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 21.input_layernorm.weight: OK [n=4]
100.83.134.158: 21.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 22.input_layernorm.weight: OK [n=4]
100.83.134.158: 22.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 23.input_layernorm.weight: OK [n=4]
100.83.134.158: 23.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 24.input_layernorm.weight: OK [n=4]
100.83.134.158: 24.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 25.input_layernorm.weight: OK [n=4]
100.83.134.158: 25.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 26.input_layernorm.weight: OK [n=4]
100.83.134.158: 26.post_attention_layernorm.weight: OK [n=4]
100.83.134.158: 28.final_rmsnorm.weight: OK [n=4]
100.83.134.158:
100.83.134.158: Checkpoint layer files OK
100.83.134.158:
100.83.134.158: Verify ** bf16_zero_ ** files
100.83.134.158:
100.83.134.158: Checking pp_stage=0 dp_stage=0
100.83.134.158: bf16 zero files: 0%| | 0/4 [00:00
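[Annotation, not part of the run's output: the 'OK [n=4]' lines above report that each replicated parameter matches across its n=4 tensor-parallel copies; layernorm/RMSNorm weights are not sharded by TP, so all four layer_NN-model_XX files for a layer should hold identical tensors. A minimal sketch of an equivalent check follows. It is not the --verify-checkpoint code path itself, and it assumes each layer file is a flat state dict keyed by the printed parameter names.]

import torch

# Illustrative re-creation of the consistency check reported above.
def check_replicated(ckpt_dir, layer_id, tp=4,
                     keys=("input_layernorm.weight",
                           "post_attention_layernorm.weight")):
    # Load the same layer's shard from every tensor-parallel rank.
    shards = [
        torch.load(f"{ckpt_dir}/layer_{layer_id:02d}-model_{rank:02d}-model_states.pt",
                   map_location="cpu")
        for rank in range(tp)
    ]
    for key in keys:
        ref = shards[0][key]
        # Replicated parameters must be bitwise identical across TP ranks.
        ok = all(torch.equal(ref, s[key]) for s in shards[1:])
        print(f"{layer_id}.{key}: {'OK' if ok else 'MISMATCH'} [n={tp}]")

ckpt = "/data/output/llama13b_600M/24-04-2024-10:37:59/checkpoints_zero_stage_2/global_step20"
for lid in range(3, 15):   # the pp_stage=0 layers reported above
    check_replicated(ckpt, lid)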