|
/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
warnings.warn( |
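
The deprecation warning above points at the replacement API. A minimal sketch, assuming determinism is requested from the training script via the framework-level switch rather than the deprecated HPU-specific call:

    import torch

    # Replacement suggested by the warning: use the framework-level switch
    # instead of the deprecated torch.hpu.setDeterministic(True).
    torch.use_deterministic_algorithms(True)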
|
[2024-04-20 12:29:53,021] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
[2024-04-20 12:29:56,963] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env |
|
[2024-04-20 12:29:56,964] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.134.153,100.83.134.148 |
|
[2024-04-20 12:29:56,964] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.134.153,100.83.134.148 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMTM0LjE1MyI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XSwgIjEwMC44My4xMzQuMTQ4IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddfQ== --node_rank=%n --master_addr=100.83.134.153 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 2 --pipeline-model-parallel-size 2 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 16 --hidden-size 5120 --ffn-hidden-size 13824 --num-attention-heads 40 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 100 --data-path /data/arxiv//tokenized_text_document --vocab-file /data/arxiv//gpt2-vocab.json --merge-file /data/arxiv//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_poc//tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_poc//checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_poc//ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_poc//checkpoints_zero_stage_2 --hf-save /data/output/llama13b_poc//hf_ckpt --save-interval 100 --verify-checkpoint --verify-checkpoint-model-type LLAMA' |
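
In the pdsh command above, the --world_info argument is base64-encoded JSON. A small inspection sketch (not part of the run); decoding it reproduces the WORLD INFO DICT that launch.py logs further down:

    import base64, json

    world_info_b64 = "eyIxMDAuODMuMTM0LjE1MyI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XSwgIjEwMC44My4xMzQuMTQ4IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddfQ=="
    # Decodes to the per-node device map used by deepspeed.launcher.launch:
    # {'100.83.134.153': [0, ..., 7], '100.83.134.148': [0, ..., 7]}
    print(json.loads(base64.urlsafe_b64decode(world_info_b64)))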
|
100.83.134.148: Warning: Permanently added '[100.83.134.148]:3122' (ED25519) to the list of known hosts.
|
|
100.83.134.153: Warning: Permanently added '[100.83.134.153]:3122' (ED25519) to the list of known hosts.
|
|
100.83.134.148: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.148: ...done. |
|
100.83.134.153: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.153: ...done. |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:01,530] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:02,437] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.153: [2024-04-20 12:30:05,552] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.134.153': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [0, 1, 2, 3, 4, 5, 6, 7]} |
|
100.83.134.153: [2024-04-20 12:30:05,552] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0 |
|
100.83.134.153: [2024-04-20 12:30:05,552] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.134.153': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [8, 9, 10, 11, 12, 13, 14, 15]}) |
|
100.83.134.153: [2024-04-20 12:30:05,552] [INFO] [launch.py:164:main] dist_world_size=16 |
|
100.83.134.153: [2024-04-20 12:30:05,552] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 |
|
100.83.134.153: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.153: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.153: ...done. |
|
100.83.134.153: ...done. |
|
100.83.134.153: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.153: ...done. |
|
100.83.134.153: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.153: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.153: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.153: ...done. |
|
100.83.134.153: ...done. |
|
100.83.134.153: ...done. |
|
100.83.134.153: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.153: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.153: ...done. |
|
100.83.134.153: ...done. |
|
100.83.134.148: [2024-04-20 12:30:07,628] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.134.153': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [0, 1, 2, 3, 4, 5, 6, 7]} |
|
100.83.134.148: [2024-04-20 12:30:07,628] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1 |
|
100.83.134.148: [2024-04-20 12:30:07,628] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.134.153': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.134.148': [8, 9, 10, 11, 12, 13, 14, 15]}) |
|
100.83.134.148: [2024-04-20 12:30:07,629] [INFO] [launch.py:164:main] dist_world_size=16 |
|
100.83.134.148: [2024-04-20 12:30:07,629] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 |
|
100.83.134.148: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.148: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.148: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.148: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.148: ...done. |
|
100.83.134.148: ...done. |
|
100.83.134.148: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.148: ...done. |
|
100.83.134.148: ...done. |
|
100.83.134.148: ...done. |
|
100.83.134.148: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.148: ...done. |
|
100.83.134.148: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.148: ...done. |
|
100.83.134.148: * Starting OpenBSD Secure Shell server sshd |
|
100.83.134.148: ...done. |
|
100.83.134.153: [2024-04-20 12:30:10,285] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:10,292] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:10,375] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:10,535] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:10,592] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:10,698] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:10,774] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.153: [2024-04-20 12:30:10,824] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:12,375] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:12,452] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:12,457] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:12,488] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:12,489] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:12,519] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:12,528] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:12,651] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect) |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed C++/CUDA extension op report |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.148: runtime if needed. Op compatibility means that your system |
|
100.83.134.148: meet the required dependencies to JIT install the op. |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: JIT compiled ops requires ninja |
|
100.83.134.148: ninja .................. [OKAY] |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: op name ................ installed .. compatible |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: cpu_adam ............... [NO] ....... [OKAY] |

100.83.134.148: fused_adam ............. [NO] ....... [OKAY] |

100.83.134.148: deepspeed_not_implemented [NO] ....... [OKAY] |

100.83.134.148: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed general environment info: |
|
100.83.134.148: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.148: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.148: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.148: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.148: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.148: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References |
|
100.83.134.148: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed C++/CUDA extension op report |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.148: runtime if needed. Op compatibility means that your system |
|
100.83.134.148: meet the required dependencies to JIT install the op. |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: JIT compiled ops requires ninja |
|
100.83.134.148: ninja .................. [OKAY] |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: op name ................ installed .. compatible |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: cpu_adam ............... [NO] ....... [OKAY] |

100.83.134.148: fused_adam ............. [NO] ....... [OKAY] |

100.83.134.148: deepspeed_not_implemented [NO] ....... [OKAY] |

100.83.134.148: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed general environment info: |
|
100.83.134.148: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.148: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.148: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.148: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.148: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.148: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References |
|
100.83.134.148: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed C++/CUDA extension op report |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.153: runtime if needed. Op compatibility means that your system |
|
100.83.134.153: meet the required dependencies to JIT install the op. |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: JIT compiled ops requires ninja |
|
100.83.134.153: ninja .................. [OKAY] |

100.83.134.153: -------------------------------------------------- |

100.83.134.153: op name ................ installed .. compatible |

100.83.134.153: -------------------------------------------------- |

100.83.134.153: cpu_adam ............... [NO] ....... [OKAY] |

100.83.134.153: fused_adam ............. [NO] ....... [OKAY] |

100.83.134.153: deepspeed_not_implemented [NO] ....... [OKAY] |

100.83.134.153: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed general environment info: |
|
100.83.134.153: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.153: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.153: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.153: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.153: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.153: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: > setting tensorboard ... |
|
100.83.134.148: _initialize_distributed: Initializing with below params: |
|
100.83.134.148: args.local_rank: 7 |
|
100.83.134.148: args.world_size: 16 |
|
100.83.134.148: args.rank: 15 |
|
100.83.134.148: args.distributed_backend: hccl |
|
100.83.134.148: _initialize_distributed: Initializing with below params: |
|
100.83.134.148: args.local_rank: 0 |
|
100.83.134.148: args.world_size: 16 |
|
100.83.134.148: args.rank: 8 |
|
100.83.134.148: args.distributed_backend: hccl |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed C++/CUDA extension op report |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.148: runtime if needed. Op compatibility means that your system |
|
100.83.134.148: meet the required dependencies to JIT install the op. |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: JIT compiled ops requires ninja |
|
100.83.134.148: ninja .................. [OKAY] |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: op name ................ installed .. compatible |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: cpu_adam ............... [NO] ....... [OKAY] |

100.83.134.148: fused_adam ............. [NO] ....... [OKAY] |

100.83.134.148: deepspeed_not_implemented [NO] ....... [OKAY] |

100.83.134.148: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed general environment info: |
|
100.83.134.148: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.148: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.148: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.148: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.148: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.148: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References |
|
100.83.134.148: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed C++/CUDA extension op report |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.148: runtime if needed. Op compatibility means that your system |
|
100.83.134.148: meet the required dependencies to JIT install the op. |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: JIT compiled ops requires ninja |
|
100.83.134.153: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References |
|
100.83.134.148: ninja .................. [OKAY] |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: op name ................ installed .. compatible |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: cpu_adam ............... [NO] ....... [OKAY] |

100.83.134.148: fused_adam ............. [NO] ....... [OKAY] |

100.83.134.148: deepspeed_not_implemented [NO] ....... [OKAY] |

100.83.134.148: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed general environment info: |
|
100.83.134.148: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.148: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.148: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.148: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.148: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.148: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.153: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.148: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References |
|
100.83.134.148: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.148: _initialize_distributed: Initializing with below params: |
|
100.83.134.148: args.local_rank: 2 |
|
100.83.134.148: args.world_size: 16 |
|
100.83.134.148: args.rank: 10 |
|
100.83.134.148: args.distributed_backend: hccl |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed C++/CUDA extension op report |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.153: runtime if needed. Op compatibility means that your system |
|
100.83.134.153: meet the required dependencies to JIT install the op. |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: JIT compiled ops requires ninja |
|
100.83.134.153: ninja .................. [OKAY] |

100.83.134.153: -------------------------------------------------- |

100.83.134.153: op name ................ installed .. compatible |

100.83.134.153: -------------------------------------------------- |

100.83.134.153: cpu_adam ............... [NO] ....... [OKAY] |

100.83.134.153: fused_adam ............. [NO] ....... [OKAY] |

100.83.134.153: deepspeed_not_implemented [NO] ....... [OKAY] |

100.83.134.153: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed general environment info: |
|
100.83.134.153: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.153: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.153: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.153: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.153: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.153: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: hccl device_count: 8 |
|
100.83.134.148: [2024-04-20 12:30:16,354] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.148: [2024-04-20 12:30:16,354] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed C++/CUDA extension op report |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.153: runtime if needed. Op compatibility means that your system |
|
100.83.134.153: meet the required dependencies to JIT install the op. |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: JIT compiled ops requires ninja |
|
100.83.134.153: ninja .................. [OKAY] |

100.83.134.153: -------------------------------------------------- |

100.83.134.153: op name ................ installed .. compatible |

100.83.134.153: -------------------------------------------------- |

100.83.134.153: cpu_adam ............... [NO] ....... [OKAY] |

100.83.134.153: fused_adam ............. [NO] ....... [OKAY] |

100.83.134.153: deepspeed_not_implemented [NO] ....... [OKAY] |

100.83.134.153: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed general environment info: |
|
100.83.134.153: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.153: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.153: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.153: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.153: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.153: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: _initialize_distributed: Initializing with below params: |
|
100.83.134.148: args.local_rank: 5 |
|
100.83.134.148: args.world_size: 16 |
|
100.83.134.148: args.rank: 13 |
|
100.83.134.148: args.distributed_backend: hccl |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed C++/CUDA extension op report |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.153: runtime if needed. Op compatibility means that your system |
|
100.83.134.153: meet the required dependencies to JIT install the op. |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: JIT compiled ops requires ninja |
|
100.83.134.153: ninja .................. [OKAY] |

100.83.134.153: -------------------------------------------------- |

100.83.134.153: op name ................ installed .. compatible |

100.83.134.153: -------------------------------------------------- |

100.83.134.153: cpu_adam ............... [NO] ....... [OKAY] |

100.83.134.153: fused_adam ............. [NO] ....... [OKAY] |

100.83.134.153: deepspeed_not_implemented [NO] ....... [OKAY] |

100.83.134.153: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed general environment info: |
|
100.83.134.153: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.153: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.153: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.153: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.153: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.153: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.153: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References |
|
100.83.134.153: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.153: _initialize_distributed: Initializing with below params: |
|
100.83.134.153: args.local_rank: 1 |
|
100.83.134.153: args.world_size: 16 |
|
100.83.134.153: args.rank: 1 |
|
100.83.134.153: args.distributed_backend: hccl |
|
100.83.134.148: hccl device_count: 8 |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:16,452] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.148: [2024-04-20 12:30:16,452] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed C++/CUDA extension op report |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.148: runtime if needed. Op compatibility means that your system |
|
100.83.134.148: meet the required dependencies to JIT install the op. |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: JIT compiled ops requires ninja |
|
100.83.134.153: git config --global --add safe.directory /Model-References |
|
100.83.134.148: ninja .................. [OKAY] |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: op name ................ installed .. compatible |

100.83.134.148: -------------------------------------------------- |

100.83.134.148: cpu_adam ............... [NO] ....... [OKAY] |

100.83.134.148: fused_adam ............. [NO] ....... [OKAY] |

100.83.134.148: deepspeed_not_implemented [NO] ....... [OKAY] |

100.83.134.148: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed general environment info: |
|
100.83.134.148: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.148: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.148: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.148: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.148: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.148: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References |
|
100.83.134.148: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.153: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.153: using world size: 16, data-parallel-size: 4, tensor-model-parallel size: 2, pipeline-model-parallel size: 2 |
|
100.83.134.153: accumulate and all-reduce gradients in fp32 for bfloat16 data type. |
|
100.83.134.153: using torch.bfloat16 for parameters ... |
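
A quick sanity check on the parallelism layout reported above (an illustrative sketch, not part of the run): with tensor-parallel 2 and pipeline-parallel 2, the 16 ranks leave 4 data-parallel replicas, and the global batch of 256 then implies 64 gradient-accumulation micro-batches, matching the "setting number of micro-batches to constant 64" line further down.

    # Parallelism arithmetic behind the log lines above (illustrative only).
    world_size, tp, pp = 16, 2, 2
    dp = world_size // (tp * pp)                            # 16 // 4 = 4 data-parallel groups
    micro_batch, global_batch = 1, 256
    num_micro_batches = global_batch // (micro_batch * dp)  # 256 // 4 = 64
    print(dp, num_micro_batches)                            # 4 64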
|
100.83.134.153: ------------------------ arguments ------------------------ |
|
100.83.134.153: accumulate_allreduce_grads_in_fp32 .............. True |
|
100.83.134.153: activation_func_type ............................ swiglu |
|
100.83.134.153: adam_beta1 ...................................... 0.9 |
|
100.83.134.153: adam_beta2 ...................................... 0.95 |
|
100.83.134.153: adam_eps ........................................ 1e-06 |
|
100.83.134.153: adlr_autoresume ................................. False |
|
100.83.134.153: adlr_autoresume_interval ........................ 1000 |
|
100.83.134.153: aml_data_download_path .......................... None |
|
100.83.134.153: apply_layernorm_weight_plus_one ................. False |
|
100.83.134.153: apply_query_key_layer_scaling ................... True |
|
100.83.134.153: apply_residual_connection_post_layernorm ........ False |
|
100.83.134.153: attention_dropout ............................... 0.1 |
|
100.83.134.153: attention_softmax_in_fp32 ....................... False |
|
100.83.134.153: bert_binary_head ................................ True |
|
100.83.134.153: bert_load ....................................... None |
|
100.83.134.153: bf16 ............................................ True |
|
100.83.134.153: bias_dropout_fusion ............................. False |
|
100.83.134.153: bias_gelu_fusion ................................ False |
|
100.83.134.153: biencoder_projection_dim ........................ 0 |
|
100.83.134.153: biencoder_shared_query_context_model ............ False |
|
100.83.134.153: block_data_path ................................. None |
|
100.83.134.153: cache_fp8_weight ................................ False |
|
100.83.134.153: cache_fp8_weight_fwd ............................ True |
|
100.83.134.153: checkpoint_activations .......................... False |
|
100.83.134.153: checkpoint_activations_granularity .............. full |
|
100.83.134.153: checkpoint_in_cpu ............................... False |
|
100.83.134.153: checkpoint_num_layers ........................... 1 |
|
100.83.134.153: clearml_config_path ............................. None |
|
100.83.134.153: clearml_continue_exp ............................ False |
|
100.83.134.153: clearml_exp_name ................................ None |
|
100.83.134.153: clip_grad ....................................... 1.0 |
|
100.83.134.153: compression_training ............................ False |
|
100.83.134.153: consumed_train_samples .......................... 0 |
|
100.83.134.153: consumed_train_tokens ........................... 0 |
|
100.83.134.153: consumed_valid_samples .......................... 0 |
|
100.83.134.153: contigious_checkpointing ........................ False |
|
100.83.134.153: cpu_optimizer ................................... False |
|
100.83.134.153: cpu_torch_adam .................................. False |
|
100.83.134.153: create_moe_param_group .......................... False |
|
100.83.134.153: curriculum_learning ............................. False |
|
100.83.134.153: data_idx_path ................................... None |
|
100.83.134.153: data_impl ....................................... infer |
|
100.83.134.153: data_parallel_size .............................. 4 |
|
100.83.134.153: data_path ....................................... ['/data/arxiv//tokenized_text_document'] |
|
100.83.134.153: data_sharding ................................... True |
|
100.83.134.153: dataloader_type ................................. single |
|
100.83.134.153: DDP_impl ........................................ local |
|
100.83.134.153: decoder_seq_length .............................. None |
|
100.83.134.153: deepscale ....................................... False |
|
100.83.134.153: deepscale_config ................................ None |
|
100.83.134.153: deepspeed ....................................... True |
|
100.83.134.153: deepspeed_activation_checkpointing .............. False |
|
100.83.134.153: deepspeed_config ................................ /data/output/llama13b_poc//ds_config.json |
|
100.83.134.153: deepspeed_mpi ................................... False |
|
100.83.134.153: distribute_checkpointed_activations ............. False |
|
100.83.134.153: distributed_backend ............................. hccl |
|
100.83.134.153: do_layernorm_bias_weight_decay .................. False |
|
100.83.134.153: do_pretrain_validation .......................... False |
|
100.83.134.153: ds_inference .................................... False |
|
100.83.134.153: ds_pipeline_enabled ............................. True |
|
100.83.134.153: embed_layernorm ................................. False |
|
100.83.134.153: embedding_path .................................. None |
|
100.83.134.153: enable_expert_tensor_parallelism ................ False |
|
100.83.134.153: encoder_seq_length .............................. 2048 |
|
100.83.134.153: eod_mask_loss ................................... False |
|
100.83.134.153: eval_interval ................................... 100 |
|
100.83.134.153: eval_iters ...................................... 10 |
|
100.83.134.153: eval_loss_exit_value ............................ None |
|
100.83.134.153: eval_micro_batch_size ........................... 1 |
|
100.83.134.153: evidence_data_path .............................. None |
|
100.83.134.153: exit_duration_in_mins ........................... None |
|
100.83.134.153: exit_interval ................................... 0 |
|
100.83.134.153: expert_interval ................................. 2 |
|
100.83.134.153: ffn_hidden_coeff ................................ 2.6666666666666665 |
|
100.83.134.153: ffn_hidden_size ................................. 13824 |
|
100.83.134.153: finetune ........................................ False |
|
100.83.134.153: fix_position_emb_redundant_alloc ................ False |
|
100.83.134.153: flatten_linear_operands ......................... False |
|
100.83.134.153: fp16 ............................................ False |
|
100.83.134.153: fp16_lm_cross_entropy ........................... False |
|
100.83.134.153: fp32_residual_connection ........................ False |
|
100.83.134.153: global_batch_size ............................... 256 |
|
100.83.134.153: hf_save ......................................... /data/output/llama13b_poc//hf_ckpt |
|
100.83.134.153: hidden_dropout .................................. 0.1 |
|
100.83.134.153: hidden_size ..................................... 5120 |
|
100.83.134.153: hidden_size_teacher ............................. None |
|
100.83.134.153: hpu_deterministic ............................... True |
|
100.83.134.153: hpu_fp8_format .................................. e5m2 |
|
100.83.134.153: hpu_fp8_measure_interval ........................ 10 |
|
100.83.134.153: hysteresis ...................................... 2 |
|
100.83.134.153: ict_head_size ................................... None |
|
100.83.134.153: ict_load ........................................ None |
|
100.83.134.153: img_dim ......................................... 224 |
|
100.83.134.153: indexer_batch_size .............................. 128 |
|
100.83.134.153: indexer_log_interval ............................ 1000 |
|
100.83.134.153: inference ....................................... False |
|
100.83.134.153: init_method_std ................................. 0.02 |
|
100.83.134.153: init_method_xavier_uniform ...................... False |
|
100.83.134.153: initial_loss_scale .............................. 4294967296 |
|
100.83.134.153: kd .............................................. False |
|
100.83.134.153: kd_alpha_ce ..................................... 1 |
|
100.83.134.153: kd_beta_ce ...................................... 1 |
|
100.83.134.153: kd_temp ......................................... 1.0 |
|
100.83.134.153: kill_switch_path ................................ None |
|
100.83.134.153: kv_channels ..................................... 128 |
|
100.83.134.153: layernorm_epsilon ............................... 1e-06 |
|
100.83.134.153: layernorm_type .................................. rmsnorm |
|
100.83.134.153: lazy_mpu_init ................................... None |
|
100.83.134.153: load ............................................ /data/output/llama13b_poc//checkpoints_zero_stage_2 |
|
100.83.134.153: load_teacher .................................... None |
|
100.83.134.153: local_rank ...................................... 0 |
|
100.83.134.153: log_batch_size_to_tensorboard ................... True |
|
100.83.134.153: log_bwd_grads ................................... False |
|
100.83.134.153: log_fwd_activations ............................. False |
|
100.83.134.153: log_interval .................................... 10 |
|
100.83.134.153: log_learning_rate_to_tensorboard ................ True |
|
100.83.134.153: log_loss_scale_to_tensorboard ................... True |
|
100.83.134.153: log_model_inputs ................................ False |
|
100.83.134.153: log_num_zeros_in_grad ........................... False |
|
100.83.134.153: log_optimizer_states_to_tensorboard ............. False |
|
100.83.134.153: log_params_norm ................................. False |
|
100.83.134.153: log_timers_to_tensorboard ....................... True |
|
100.83.134.153: log_validation_ppl_to_tensorboard ............... True |
|
100.83.134.153: loss_scale ...................................... None |
|
100.83.134.153: loss_scale_window ............................... 1000 |
|
100.83.134.153: lr .............................................. 0.0003 |
|
100.83.134.153: lr_decay_iters .................................. None |
|
100.83.134.153: lr_decay_samples ................................ None |
|
100.83.134.153: lr_decay_style .................................. cosine |
|
100.83.134.153: lr_decay_tokens ................................. None |
|
100.83.134.153: lr_warmup_fraction .............................. None |
|
100.83.134.153: lr_warmup_iters ................................. 2000 |
|
100.83.134.153: lr_warmup_samples ............................... 0 |
|
100.83.134.153: lr_warmup_tokens ................................ None |
|
100.83.134.153: make_vocab_size_divisible_by .................... 128 |
|
100.83.134.153: mask_prob ....................................... 0.15 |
|
100.83.134.153: mask_tensor_adding .............................. False |
|
100.83.134.153: masked_softmax_fusion ........................... False |
|
100.83.134.153: max_position_embeddings ......................... None |
|
100.83.134.153: memory_centric_tiled_linear ..................... False |
|
100.83.134.153: merge_file ...................................... /data/arxiv//gpt2-merges.txt |
|
100.83.134.153: micro_batch_size ................................ 1 |
|
100.83.134.153: min_loss_scale .................................. 1.0 |
|
100.83.134.153: min_lr .......................................... 0.0 |
|
100.83.134.153: mlp_type ........................................ standard |
|
100.83.134.153: mmap_warmup ..................................... False |
|
100.83.134.153: moe_eval_capacity_factor ........................ 1.0 |
|
100.83.134.153: moe_expert_parallel_size ........................ 1 |
|
100.83.134.153: moe_loss_coeff .................................. 0.1 |
|
100.83.134.153: moe_min_capacity ................................ 4 |
|
100.83.134.153: moe_token_dropping .............................. True |
|
100.83.134.153: moe_train_capacity_factor ....................... 1.0 |
|
100.83.134.153: mos ............................................. False |
|
100.83.134.153: no_bias ......................................... True |
|
100.83.134.153: no_cuda ......................................... False |
|
100.83.134.153: no_load_lr_state ................................ False |
|
100.83.134.153: no_load_optim ................................... None |
|
100.83.134.153: no_load_rng ..................................... None |
|
100.83.134.153: no_pipeline_parallel ............................ False |
|
100.83.134.153: no_save_optim ................................... None |
|
100.83.134.153: no_save_rng ..................................... None |
|
100.83.134.153: no_scaled_init .................................. False |
|
100.83.134.153: num_attention_heads ............................. 40 |
|
100.83.134.153: num_attention_heads_teacher ..................... None |
|
100.83.134.153: num_channels .................................... 3 |
|
100.83.134.153: num_classes ..................................... 1000 |
|
100.83.134.153: num_experts ..................................... [1] |
|
100.83.134.153: num_experts_teacher ............................. [1] |
|
100.83.134.153: num_key_value_heads ............................. 40 |
|
100.83.134.153: num_layers ...................................... 16 |
|
100.83.134.153: num_layers_per_virtual_pipeline_stage ........... None |
|
100.83.134.153: num_layers_teacher .............................. None |
|
100.83.134.153: num_workers ..................................... 2 |
|
100.83.134.153: onnx_safe ....................................... None |
|
100.83.134.153: openai_gelu ..................................... False |
|
100.83.134.153: optimizer ....................................... adamw |
|
100.83.134.153: override_lr_scheduler ........................... False |
|
100.83.134.153: params_dtype .................................... torch.bfloat16 |
|
100.83.134.153: partition_activations ........................... False |
|
100.83.134.153: patch_dim ....................................... 16 |
|
100.83.134.153: pipeline_model_parallel_size .................... 2 |
|
100.83.134.153: position_embedding_type ......................... PositionEmbeddingType.rotary |
|
100.83.134.153: profile ......................................... None |
|
100.83.134.153: profile_backward ................................ False |
|
100.83.134.153: profile_steps ................................... 2,3 |
|
100.83.134.153: query_in_block_prob ............................. 0.1 |
|
100.83.134.153: rampup_batch_size ............................... None |
|
100.83.134.153: rank ............................................ 0 |
|
100.83.134.153: remote_device ................................... none |
|
100.83.134.153: reset_attention_mask ............................ False |
|
100.83.134.153: reset_iteration ................................. False |
|
100.83.134.153: reset_position_ids .............................. False |
|
100.83.134.153: retriever_report_topk_accuracies ................ [] |
|
100.83.134.153: retriever_score_scaling ......................... False |
|
100.83.134.153: retriever_seq_length ............................ 256 |
|
100.83.134.153: sample_rate ..................................... 1.0 |
|
100.83.134.153: save ............................................ /data/output/llama13b_poc//checkpoints_zero_stage_2 |
|
100.83.134.153: save_interval ................................... 100 |
|
100.83.134.153: scatter_gather_tensors_in_pipeline .............. True |
|
100.83.134.153: scattered_embeddings ............................ False |
|
100.83.134.153: seed ............................................ 1234 |
|
100.83.134.153: seq_length ...................................... 2048 |
|
100.83.134.153: sequence_parallel ............................... True |
|
100.83.134.153: sgd_momentum .................................... 0.9 |
|
100.83.134.153: short_seq_prob .................................. 0.1 |
|
100.83.134.153: skip_train ...................................... False |
|
100.83.134.153: split ........................................... 969, 30, 1 |
|
100.83.134.153: split_transformers .............................. False |
|
100.83.134.153: synchronize_each_layer .......................... False |
|
100.83.134.153: tensor_logger_max_iter .......................... 0 |
|
100.83.134.153: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References |
|
100.83.134.153: tensor_logger_path .............................. None |
|
100.83.134.153: tensor_model_parallel_size ...................... 2 |
|
100.83.134.153: tensorboard_dir ................................. /data/output/llama13b_poc//tensorboard |
|
100.83.134.153: tensorboard_log_interval ........................ 1 |
|
100.83.134.153: tensorboard_queue_size .......................... 1000 |
|
100.83.134.153: test_data_path .................................. None |
|
100.83.134.153: tile_factor ..................................... 1 |
|
100.83.134.153: titles_data_path ................................ None |
|
100.83.134.153: tokenizer_eod_id ................................ None |
|
100.83.134.153: tokenizer_model_file ............................ None |
|
100.83.134.153: tokenizer_type .................................. GPT2BPETokenizer |
|
100.83.134.153: topk ............................................ 1 |
|
100.83.134.153: train_data_path ................................. None |
|
100.83.134.153: train_iters ..................................... 10000 |
|
100.83.134.153: train_samples ................................... None |
|
100.83.134.153: train_tokens .................................... None |
|
100.83.134.153: universal_checkpoint ............................ False |
|
100.83.134.153: use_checkpoint_lr_scheduler ..................... False |
|
100.83.134.153: use_contiguous_buffers_in_ddp ................... True |
|
100.83.134.153: use_cpu_initialization .......................... None |
|
100.83.134.153: use_fused_sdpa .................................. True |
|
100.83.134.153: use_fused_sdpa_with_recompute ................... False |
|
100.83.134.153: use_hpu ......................................... True |
|
100.83.134.153: use_hpu_fp8_transformer_engine .................. False |
|
100.83.134.153: use_hpu_graphs .................................. False |
|
100.83.134.153: use_one_sent_docs ............................... False |
|
100.83.134.153: use_pin_memory .................................. False |
|
100.83.134.153: use_rotary_v2 ................................... False |
|
100.83.134.153: use_seq_len_plus_one_tokens ..................... True |
|
100.83.134.153: use_torch_compile ............................... False |
|
100.83.134.153: use_tutel ....................................... False |
|
100.83.134.153: valid_data_path ................................. None |
|
100.83.134.153: verify_checkpoint ............................... True |
|
100.83.134.153: verify_checkpoint_model_type .................... LLAMA |
|
100.83.134.153: verify_tp_workers ............................... False |
|
100.83.134.153: verify_tp_workers_hash .......................... False |
|
100.83.134.153: virtual_pipeline_model_parallel_size ............ None |
|
100.83.134.153: vocab_extra_ids ................................. 0 |
|
100.83.134.153: vocab_file ...................................... /data/arxiv//gpt2-vocab.json |
|
100.83.134.153: weight_decay .................................... 0.1 |
|
100.83.134.153: world_size ...................................... 16 |
|
100.83.134.153: zero_allgather_bucket_size ...................... 0.0 |
|
100.83.134.153: zero_contigious_gradients ....................... False |
|
100.83.134.153: zero_reduce_bucket_size ......................... 0.0 |
|
100.83.134.153: zero_reduce_scatter ............................. False |
|
100.83.134.153: zero_stage ...................................... 0 |
|
100.83.134.153: -------------------- end of arguments --------------------- |
|
100.83.134.153: setting number of micro-batches to constant 64 |
|
100.83.134.153: setting number of micro-batches to constant 64 |
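
For reference, a hypothetical ds_config.json consistent with the arguments printed above; the actual file at /data/output/llama13b_poc//ds_config.json is not shown in this log, so every key below is an assumption derived from the command-line flags (zero stage 0, bf16, micro batch 1, global batch 256, gradient clipping 1.0).

    import json

    # Hypothetical DeepSpeed config matching the printed arguments; the real
    # ds_config.json used by the run may differ.
    ds_config = {
        "train_batch_size": 256,               # --global-batch-size
        "train_micro_batch_size_per_gpu": 1,   # --micro-batch-size
        "gradient_accumulation_steps": 64,     # 256 / (1 micro-batch x 4 data-parallel)
        "bf16": {"enabled": True},             # --bf16
        "gradient_clipping": 1.0,              # --clip-grad
        "zero_optimization": {"stage": 0},     # --zero-stage=0
    }
    with open("ds_config.json", "w") as f:
        json.dump(ds_config, f, indent=2)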
|
100.83.134.153: > building GPT2BPETokenizer tokenizer ... |
|
100.83.134.153: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.148: hccl device_count: 8 |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:16,502] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.148: [2024-04-20 12:30:16,502] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: _initialize_distributed: Initializing with below params: |
|
100.83.134.153: args.local_rank: 6 |
|
100.83.134.153: args.world_size: 16 |
|
100.83.134.153: args.rank: 6 |
|
100.83.134.153: args.distributed_backend: hccl |
|
100.83.134.148: hccl device_count: 8 |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:16,556] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.148: [2024-04-20 12:30:16,556] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed C++/CUDA extension op report |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.153: runtime if needed. Op compatibility means that your system |
|
100.83.134.153: meet the required dependencies to JIT install the op. |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: JIT compiled ops requires ninja |
|
100.83.134.153: ninja .................. [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: op name ................ installed .. compatible |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: cpu_adam ............... [NO] ....... [OKAY] |
|
100.83.134.153: fused_adam ............. [NO] ....... [OKAY] |
|
100.83.134.153: deepspeed_not_implemented [NO] ....... [OKAY] |
|
100.83.134.153: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed general environment info: |
|
100.83.134.153: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.153: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.153: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.153: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.153: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.153: shared memory (/dev/shm) size .... 503.72 GB |
|
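The op report and environment info blocks printed by each rank above can be regenerated on a single worker with DeepSpeed's ds_report utility; a minimal sketch, assuming ds_report (installed with the deepspeed wheel shown above) is on PATH:

import subprocess

# Prints the same "DeepSpeed C++/CUDA extension op report" and
# "DeepSpeed general environment info" sections seen in this log.
subprocess.run(["ds_report"], check=True)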
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed C++/CUDA extension op report |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.148: runtime if needed. Op compatibility means that your system |
|
100.83.134.148: meet the required dependencies to JIT install the op. |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: JIT compiled ops requires ninja |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed C++/CUDA extension op report |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.148: runtime if needed. Op compatibility means that your system |
|
100.83.134.148: meet the required dependencies to JIT install the op. |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: JIT compiled ops requires ninja |
|
100.83.134.148: ninja .................. [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: op name ................ installed .. compatible |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: cpu_adam ............... [NO] ....... [OKAY] |
|
100.83.134.148: fused_adam ............. [NO] ....... [OKAY] |
|
100.83.134.148: deepspeed_not_implemented [NO] ....... [OKAY] |
|
100.83.134.148: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed general environment info: |
|
100.83.134.148: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.148: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.148: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.148: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.148: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.148: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: ninja .................. [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: op name ................ installed .. compatible |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: cpu_adam ............... [NO] ....... [OKAY] |
|
100.83.134.148: fused_adam ............. [NO] ....... [OKAY] |
|
100.83.134.148: deepspeed_not_implemented [NO] ....... [OKAY] |
|
100.83.134.148: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed general environment info: |
|
100.83.134.148: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.148: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.148: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.148: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.148: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.148: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: _initialize_distributed: Initializing with below params: |
|
100.83.134.148: args.local_rank: 6 |
|
100.83.134.148: args.world_size: 16 |
|
100.83.134.148: args.rank: 14 |
|
100.83.134.148: args.distributed_backend: hccl |
|
100.83.134.148: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References |
|
100.83.134.148: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References |
|
100.83.134.148: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.148: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.153: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References |
|
100.83.134.153: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed C++/CUDA extension op report |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.153: runtime if needed. Op compatibility means that your system |
|
100.83.134.153: meet the required dependencies to JIT install the op. |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: JIT compiled ops requires ninja |
|
100.83.134.153: _initialize_distributed: Initializing with below params: |
|
100.83.134.153: args.local_rank: 7 |
|
100.83.134.153: args.world_size: 16 |
|
100.83.134.153: args.rank: 7 |
|
100.83.134.153: args.distributed_backend: hccl |
|
100.83.134.153: ninja .................. [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: op name ................ installed .. compatible |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: cpu_adam ............... [NO] ....... [OKAY] |
|
100.83.134.153: fused_adam ............. [NO] ....... [OKAY] |
|
100.83.134.153: deepspeed_not_implemented [NO] ....... [OKAY] |
|
100.83.134.153: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed general environment info: |
|
100.83.134.153: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.153: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.153: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.153: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.153: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.153: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed C++/CUDA extension op report |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.153: runtime if needed. Op compatibility means that your system |
|
100.83.134.153: meet the required dependencies to JIT install the op. |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: JIT compiled ops requires ninja |
|
100.83.134.153: ninja .................. [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: op name ................ installed .. compatible |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: cpu_adam ............... [NO] ....... [OKAY] |
|
100.83.134.153: fused_adam ............. [NO] ....... [OKAY] |
|
100.83.134.153: deepspeed_not_implemented [NO] ....... [OKAY] |
|
100.83.134.153: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed general environment info: |
|
100.83.134.153: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.153: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.153: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.153: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.153: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.153: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed C++/CUDA extension op report |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.148: runtime if needed. Op compatibility means that your system |
|
100.83.134.148: meet the required dependencies to JIT install the op. |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: JIT compiled ops requires ninja |
|
100.83.134.148: ninja .................. [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: op name ................ installed .. compatible |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: cpu_adam ............... [NO] ....... [OKAY] |
|
100.83.134.148: fused_adam ............. [NO] ....... [OKAY] |
|
100.83.134.148: deepspeed_not_implemented [NO] ....... [OKAY] |
|
100.83.134.148: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.148: -------------------------------------------------- |
|
100.83.134.148: DeepSpeed general environment info: |
|
100.83.134.148: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.148: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.148: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.148: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.148: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.148: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.153: > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) |
|
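The 175 dummy tokens reported above come from padding the GPT-2 vocab so it divides evenly across tensor-parallel ranks; a small sketch of the arithmetic, assuming Megatron's default make-vocab-size-divisible-by of 128 and this run's tensor-parallel size of 2:

orig_vocab_size = 50257             # GPT2BPETokenizer vocab size, from the log
multiple = 128 * 2                  # make-vocab-size-divisible-by (assumed default 128) * TP size 2
padded_vocab_size = ((orig_vocab_size + multiple - 1) // multiple) * multiple
print(padded_vocab_size, padded_vocab_size - orig_vocab_size)   # -> 50432 175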
100.83.134.153: _initialize_distributed: Initializing with below params: |
|
100.83.134.153: args.local_rank: 0 |
|
100.83.134.153: args.world_size: 16 |
|
100.83.134.153: args.rank: 0 |
|
100.83.134.153: args.distributed_backend: hccl |
|
100.83.134.148: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References |
|
100.83.134.148: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.153: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References |
|
100.83.134.153: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.148: _initialize_distributed: Initializing with below params: |
|
100.83.134.148: args.local_rank: 1 |
|
100.83.134.148: args.world_size: 16 |
|
100.83.134.148: args.rank: 9 |
|
100.83.134.148: args.distributed_backend: hccl |
|
100.83.134.153: hccl device_count: 8 |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:16,726] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.153: [2024-04-20 12:30:16,726] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.148: _initialize_distributed: Initializing with below params: |
|
100.83.134.148: args.local_rank: 4 |
|
100.83.134.148: args.world_size: 16 |
|
100.83.134.148: args.rank: 12 |
|
100.83.134.148: args.distributed_backend: hccl |
|
100.83.134.153: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References |
|
100.83.134.153: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed C++/CUDA extension op report |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: NOTE: Ops not installed will be just-in-time (JIT) compiled at |
|
100.83.134.153: runtime if needed. Op compatibility means that your system |
|
100.83.134.153: meet the required dependencies to JIT install the op. |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: JIT compiled ops requires ninja |
|
100.83.134.153: ninja .................. [OKAY] |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: op name ................ installed .. compatible |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: cpu_adam ............... [NO] ....... [OKAY] |
|
100.83.134.153: fused_adam ............. [NO] ....... [OKAY] |
|
100.83.134.153: deepspeed_not_implemented [NO] ....... [OKAY] |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: transformer_inference .. [NO] ....... [OKAY] |
|
100.83.134.153: hccl device_count: 8 |
|
100.83.134.153: -------------------------------------------------- |
|
100.83.134.153: DeepSpeed general environment info: |
|
100.83.134.153: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] |
|
100.83.134.153: torch version .................... 2.1.1a0+gitb51c9f6 |
|
100.83.134.153: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] |
|
100.83.134.153: [2024-04-20 12:30:16,771] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.153: [2024-04-20 12:30:16,771] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0 |
|
100.83.134.153: deepspeed wheel compiled w. ...... torch 2.1 |
|
100.83.134.153: shared memory (/dev/shm) size .... 503.72 GB |
|
100.83.134.148: _initialize_distributed: Initializing with below params: |
|
100.83.134.148: args.local_rank: 3 |
|
100.83.134.148: args.world_size: 16 |
|
100.83.134.148: args.rank: 11 |
|
100.83.134.148: args.distributed_backend: hccl |
|
100.83.134.153: _initialize_distributed: Initializing with below params: |
|
100.83.134.153: args.local_rank: 5 |
|
100.83.134.153: args.world_size: 16 |
|
100.83.134.153: args.rank: 5 |
|
100.83.134.153: args.distributed_backend: hccl |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: hccl device_count: 8 |
|
100.83.134.148: [2024-04-20 12:30:16,819] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.148: [2024-04-20 12:30:16,820] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References |
|
100.83.134.153: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
100.83.134.148: hccl device_count: 8 |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:16,866] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.148: [2024-04-20 12:30:16,867] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: hccl device_count: 8 |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: > initializing torch distributed ... |
|
100.83.134.153: [2024-04-20 12:30:16,885] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.153: [2024-04-20 12:30:16,885] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: [2024-04-20 12:30:16,885] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend hccl |
|
100.83.134.153: _initialize_distributed: Initializing with below params: |
|
100.83.134.153: args.local_rank: 2 |
|
100.83.134.153: args.world_size: 16 |
|
100.83.134.153: args.rank: 2 |
|
100.83.134.153: args.distributed_backend: hccl |
|
100.83.134.153: hccl device_count: 8 |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:16,914] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.153: [2024-04-20 12:30:16,914] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: _initialize_distributed: Initializing with below params: |
|
100.83.134.153: args.local_rank: 4 |
|
100.83.134.153: args.world_size: 16 |
|
100.83.134.153: args.rank: 4 |
|
100.83.134.153: args.distributed_backend: hccl |
|
100.83.134.148: hccl device_count: 8 |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:16,957] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.148: [2024-04-20 12:30:16,957] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.148: hccl device_count: 8 |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.148: warnings.warn( |
|
100.83.134.148: [2024-04-20 12:30:16,962] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.148: [2024-04-20 12:30:16,962] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: _initialize_distributed: Initializing with below params: |
|
100.83.134.153: args.local_rank: 3 |
|
100.83.134.153: args.world_size: 16 |
|
100.83.134.153: args.rank: 3 |
|
100.83.134.153: args.distributed_backend: hccl |
|
100.83.134.153: hccl device_count: 8 |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:17,015] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.153: [2024-04-20 12:30:17,015] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: hccl device_count: 8 |
|
100.83.134.153: [2024-04-20 12:30:17,134] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.153: [2024-04-20 12:30:17,134] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: hccl device_count: 8 |
|
100.83.134.153: [2024-04-20 12:30:17,142] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.153: [2024-04-20 12:30:17,142] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: hccl device_count: 8 |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead. |
|
100.83.134.153: warnings.warn( |
|
100.83.134.153: [2024-04-20 12:30:17,229] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented |
|
100.83.134.153: [2024-04-20 12:30:17,229] [INFO] [comm.py:637:init_distributed] cdb=None |
|
100.83.134.153: > initializing tensor model parallel with size 2 |
|
100.83.134.153: > initializing pipeline model parallel with size 2 |
|
100.83.134.153: > setting random seeds to 1234 ... |
|
100.83.134.153: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234 |
|
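The seeds reported above are consistent with Megatron's model_parallel_cuda_manual_seed convention, in which the tensor-parallel seed is the base seed plus a fixed 2718 offset plus the tensor-parallel rank (an assumption about the code base, but it matches 3952 = 1234 + 2718 here):

# Reproduce the seeds logged for global rank 0 (tp rank 0, dp rank 0).
base_seed = 1234                                   # "> setting random seeds to 1234 ..."
tp_rank = 0
model_parallel_seed = base_seed + 2718 + tp_rank   # -> 3952, as logged
data_parallel_seed = base_seed                     # -> 1234, as logged
print(model_parallel_seed, data_parallel_seed)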
100.83.134.148: > compiling dataset index builder ... |
|
100.83.134.148: make: Entering directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data' |
|
100.83.134.148: make: Nothing to be done for 'default'. |
|
100.83.134.148: make: Leaving directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data' |
|
100.83.134.148: >>> done with dataset index builder. Compilation time: 0.027 seconds |
|
100.83.134.153: ============================= HABANA PT BRIDGE CONFIGURATION =========================== |
|
100.83.134.153: PT_HPU_LAZY_MODE = 1 |
|
100.83.134.153: PT_RECIPE_CACHE_PATH = |
|
100.83.134.153: PT_CACHE_FOLDER_DELETE = 0 |
|
100.83.134.153: PT_HPU_RECIPE_CACHE_CONFIG = |
|
100.83.134.153: PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807 |
|
100.83.134.153: PT_HPU_LAZY_ACC_PAR_MODE = 0 |
|
100.83.134.153: PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES = 0 |
|
100.83.134.153: ---------------------------: System Configuration :--------------------------- |
|
100.83.134.153: Num CPU Cores : 160 |
|
100.83.134.153: CPU RAM : 1056375272 KB |
|
100.83.134.153: ------------------------------------------------------------------------------ |
|
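The HABANA PT BRIDGE CONFIGURATION block above echoes environment variables that the bridge picks up when it initializes; a minimal sketch of setting the same knobs in the training process (variable names and values mirrored from the dump above):

import os

# Same settings the bridge reports above; they are assumed to be read from the
# environment before the habana_frameworks bridge initializes the device.
os.environ.setdefault("PT_HPU_LAZY_MODE", "1")
os.environ.setdefault("PT_HPU_LAZY_ACC_PAR_MODE", "0")
os.environ.setdefault("PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES", "0")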
100.83.134.153: > compiling dataset index builder ... |
|
100.83.134.153: make: Entering directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data' |
|
100.83.134.153: make: Nothing to be done for 'default'. |
|
100.83.134.153: make: Leaving directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data' |
|
100.83.134.153: >>> done with dataset index builder. Compilation time: 0.266 seconds |
|
100.83.134.153: WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. |
|
100.83.134.153: > compiling and loading fused kernels ... |
|
100.83.134.153: >>> done with compiling and loading fused kernels. Compilation time: 0.088 seconds |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.148: cmdline: git rev-parse --show-toplevel |
|
100.83.134.148: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.148: To add an exception for this directory, call: |
|
100.83.134.148: |
|
100.83.134.148: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.153: git root error: Cmd('git') failed due to: exit code(128) |
|
100.83.134.153: cmdline: git rev-parse --show-toplevel |
|
100.83.134.153: stderr: 'fatal: detected dubious ownership in repository at '/Model-References' |
|
100.83.134.153: To add an exception for this directory, call: |
|
100.83.134.153: |
|
100.83.134.153: git config --global --add safe.directory /Model-References' |
|
100.83.134.148: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.153: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.148: wandb: $ pip install wandb --upgrade |
|
100.83.134.148: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.148: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.148: wandb: $ pip install wandb --upgrade |
|
100.83.134.148: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123026-qhj8mil2 |
|
100.83.134.148: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123026-wh4qm47z |
|
100.83.134.148: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.148: wandb: Syncing run colorful-night-1527 |
|
100.83.134.148: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/qhj8mil2/workspace |
|
100.83.134.148: wandb: Syncing run silver-music-1524 |
|
100.83.134.148: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/wh4qm47z/workspace |
|
100.83.134.148: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.148: wandb: $ pip install wandb --upgrade |
|
100.83.134.148: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123026-7ve0hqbt |
|
100.83.134.148: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.148: wandb: Syncing run amber-butterfly-1524 |
|
100.83.134.148: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/7ve0hqbt/workspace |
|
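Each rank above starts its own Weights & Biases run; a minimal sketch of the call behind this output, with entity and project taken from the URLs in the log (run names such as colorful-night-1527 are generated by wandb):

import wandb

# One tracked run; in a multi-rank job this is usually guarded so that only
# selected ranks log, otherwise every rank opens its own run as seen here.
run = wandb.init(entity="bharatgpt", project="llama_runs")
print(run.name, run.url)
run.finish()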
100.83.134.153: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.153: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.148: wandb: $ pip install wandb --upgrade |
|
100.83.134.148: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123026-tcsz96ee |
|
100.83.134.148: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.148: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.148: wandb: $ pip install wandb --upgrade |
|
100.83.134.148: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123026-5xxbncif |
|
100.83.134.148: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.148: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.148: wandb: $ pip install wandb --upgrade |
|
100.83.134.148: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123026-gmp4lnk4 |
|
100.83.134.148: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.148: wandb: Syncing run drawn-firefly-1530 |
|
100.83.134.153: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/tcsz96ee/workspace |
|
100.83.134.153: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.148: wandb: Syncing run honest-water-1524 |
|
100.83.134.148: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/5xxbncif/workspace |
|
100.83.134.148: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.148: wandb: $ pip install wandb --upgrade |
|
100.83.134.148: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123026-7ictpaa1 |
|
100.83.134.148: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.148: wandb: Syncing run comfy-armadillo-1524 |
|
100.83.134.148: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/gmp4lnk4/workspace |
|
100.83.134.148: wandb: Syncing run lunar-music-1529 |
|
100.83.134.148: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/7ictpaa1/workspace |
|
100.83.134.153: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.148: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.148: wandb: $ pip install wandb --upgrade |
|
100.83.134.148: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.148: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123026-rf91tnbq |
|
100.83.134.148: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.148: wandb: Syncing run wild-aardvark-1531 |
|
100.83.134.148: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.148: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/rf91tnbq/workspace |
|
100.83.134.153: wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.153: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.153: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc |
|
100.83.134.153: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: Currently logged in as: advaiddeepak0602 (bharatgpt). Use `wandb login --relogin` to force relogin |
|
100.83.134.153: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.153: wandb: $ pip install wandb --upgrade |
|
100.83.134.153: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.153: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123027-d58fdjk7 |
|
100.83.134.153: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.153: wandb: Syncing run atomic-moon-1532 |
|
100.83.134.153: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.153: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/d58fdjk7/workspace |
|
100.83.134.153: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.153: wandb: $ pip install wandb --upgrade |
|
100.83.134.153: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.153: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123027-vhkcbyrm |
|
100.83.134.153: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.153: wandb: Syncing run faithful-thunder-1533 |
|
100.83.134.153: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.153: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/vhkcbyrm/workspace |
|
100.83.134.153: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.153: wandb: $ pip install wandb --upgrade |
|
100.83.134.153: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.153: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123027-2x5fhg9a |
|
100.83.134.153: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.153: wandb: Syncing run wild-smoke-1534 |
|
100.83.134.153: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.153: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/2x5fhg9a/workspace |
|
100.83.134.153: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.153: wandb: $ pip install wandb --upgrade |
|
100.83.134.153: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.153: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123027-6nplgqo0 |
|
100.83.134.153: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.153: wandb: Syncing run confused-wave-1536 |
|
100.83.134.153: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.153: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/6nplgqo0/workspace |
|
100.83.134.153: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.153: wandb: $ pip install wandb --upgrade |
|
100.83.134.153: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.153: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.153: wandb: $ pip install wandb --upgrade |
|
100.83.134.153: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.153: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123027-6k6n3ewu |
|
100.83.134.153: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.153: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123027-haj16k4j |
|
100.83.134.153: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.153: wandb: Syncing run firm-snowflake-1535 |
|
100.83.134.153: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.153: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/6k6n3ewu/workspace |
|
100.83.134.153: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.153: wandb: $ pip install wandb --upgrade |
|
100.83.134.153: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.153: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123028-zljf7cur |
|
100.83.134.153: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.153: wandb: Syncing run silvery-puddle-1536 |
|
100.83.134.153: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.153: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/haj16k4j/workspace |
|
100.83.134.153: wandb: wandb version 0.16.6 is available! To upgrade, please run: |
|
100.83.134.153: wandb: $ pip install wandb --upgrade |
|
100.83.134.153: wandb: Tracking run with wandb version 0.16.5 |
|
100.83.134.153: wandb: Run data is saved locally in /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/wandb/run-20240420_123027-elj7xdda |
|
100.83.134.153: wandb: Run `wandb offline` to turn off syncing. |
|
100.83.134.153: wandb: Syncing run laced-waterfall-1539 |
|
100.83.134.153: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.153: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/zljf7cur/workspace |
|
100.83.134.153: wandb: Syncing run sandy-morning-1538 |
|
100.83.134.153: wandb: ⭐️ View project at https://wandb.ai/bharatgpt/llama_runs |
|
100.83.134.153: wandb: 🚀 View run at https://wandb.ai/bharatgpt/llama_runs/runs/elj7xdda/workspace |
|
100.83.134.153: time to initialize megatron (seconds): -24.067 |
|
100.83.134.153: [after megatron is initialized] datetime: 2024-04-20 12:30:31 |
|
100.83.134.153: building LLaMA model ... |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.153: |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.153: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.148: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.153: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.153: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.153: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.148: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.153: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.153: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: > number of parameters on (tensor, pipeline) model parallel rank (0, 1): 1397969920*************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.148: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.153: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: > number of parameters on (tensor, pipeline) model parallel rank (1, 1): 1397969920 |
|
100.83.134.153: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.153: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.153: > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 1397964800 |
|
100.83.134.148: *************** Using FusedSDPA ********************************* Using FusedSDPA ****************** |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.148: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.148: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: |
|
100.83.134.148: *************** Using FusedSDPA ****************** |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.148: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.148: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.) |
|
100.83.134.148: return super().__torch_function__(func, types, new_args, kwargs) |
|
100.83.134.153: [2024-04-20 12:30:32,445] [INFO] [utils.py:824:see_memory_usage] Before Building Model |
|
100.83.134.153: [2024-04-20 12:30:32,478] [INFO] [utils.py:825:see_memory_usage] MA 0.01 GB Max_MA 0.01 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:32,485] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 373.11 GB, percent = 37.0% |
|
100.83.134.153: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None |
|
100.83.134.153: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=0, model=1): 1, ProcessCoord(pipe=0, data=1, model=0): 2, ProcessCoord(pipe=0, data=1, model=1): 3, ProcessCoord(pipe=0, data=2, model=0): 4, ProcessCoord(pipe=0, data=2, model=1): 5, ProcessCoord(pipe=0, data=3, model=0): 6, ProcessCoord(pipe=0, data=3, model=1): 7, ProcessCoord(pipe=1, data=0, model=0): 8, ProcessCoord(pipe=1, data=0, model=1): 9, ProcessCoord(pipe=1, data=1, model=0): 10, ProcessCoord(pipe=1, data=1, model=1): 11, ProcessCoord(pipe=1, data=2, model=0): 12, ProcessCoord(pipe=1, data=2, model=1): 13, ProcessCoord(pipe=1, data=3, model=0): 14, ProcessCoord(pipe=1, data=3, model=1): 15} |
|
100.83.134.153: [2024-04-20 12:30:32,532] [INFO] [module.py:375:_partition_layers] Partitioning pipeline stages with method type:transformer |
|
100.83.134.153: stage=0 layers=11 |
|
100.83.134.153: 0: _to_float16 |
|
100.83.134.153: 1: EmbeddingPipe |
|
100.83.134.153: 2: <lambda> |
|
100.83.134.153: 3: ParallelTransformerLayerPipe |
|
100.83.134.153: 4: ParallelTransformerLayerPipe |
|
100.83.134.153: 5: ParallelTransformerLayerPipe |
|
100.83.134.153: 6: ParallelTransformerLayerPipe |
|
100.83.134.153: 7: ParallelTransformerLayerPipe |
|
100.83.134.153: 8: ParallelTransformerLayerPipe |
|
100.83.134.153: 9: ParallelTransformerLayerPipe |
|
100.83.134.153: 10: ParallelTransformerLayerPipe |
|
100.83.134.153: stage=1 layers=13 |
|
100.83.134.153: 11: ParallelTransformerLayerPipe |
|
100.83.134.153: 12: ParallelTransformerLayerPipe |
|
100.83.134.153: 13: ParallelTransformerLayerPipe |
|
100.83.134.153: 14: ParallelTransformerLayerPipe |
|
100.83.134.153: 15: ParallelTransformerLayerPipe |
|
100.83.134.153: 16: ParallelTransformerLayerPipe |
|
100.83.134.153: 17: ParallelTransformerLayerPipe |
|
100.83.134.153: 18: ParallelTransformerLayerPipe |
|
100.83.134.153: 19: <lambda> |
|
100.83.134.153: 20: WrapName |
|
100.83.134.153: 21: WrapName |
|
100.83.134.153: 22: <lambda> |
|
100.83.134.153: 23: float16_to_fp32 |
|
100.83.134.153: loss: CrossEntropy |
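Editor's note: the topology dictionary printed above orders global ranks with the tensor-parallel ("model") dimension innermost, then data parallel, then pipeline stage. A minimal sketch reproducing that mapping, using only the sizes visible in this log (TP=2, PP=2, 16 workers), so DP works out to 4:

from itertools import product

# Reproduce the ProcessCoord -> global rank layout from the log, assuming
# model (tensor) parallel is the innermost dimension, then data, then pipe.
TP, PP, WORLD = 2, 2, 16
DP = WORLD // (TP * PP)  # 4 data-parallel replicas

rank = 0
for pipe, data, model in product(range(PP), range(DP), range(TP)):
    print(f"ProcessCoord(pipe={pipe}, data={data}, model={model}): {rank}")
    rank += 1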
|
100.83.134.153: *************** Using FusedSDPA ****************** |
|
100.83.134.153: [2024-04-20 12:30:32,931] [INFO] [utils.py:824:see_memory_usage] After Building Model |
|
100.83.134.153: [2024-04-20 12:30:32,940] [INFO] [utils.py:825:see_memory_usage] MA 0.01 GB Max_MA 0.01 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:32,942] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 373.15 GB, percent = 37.0% |
|
100.83.134.153: > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 1397964800 |
|
100.83.134.153: > learning rate decay style: cosine |
|
100.83.134.153: DeepSpeed is enabled. |
|
100.83.134.153: [2024-04-20 12:30:32,950] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.12.4+hpu.synapse.v1.14.0, git-hash=fad45b2, git-branch=1.14.0 |
|
100.83.134.153: [2024-04-20 12:30:34,689] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False |
|
100.83.134.153: [2024-04-20 12:30:34,691] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer |
|
100.83.134.153: [2024-04-20 12:30:34,691] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer |
|
100.83.134.153: [2024-04-20 12:30:34,693] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = AdamW |
|
100.83.134.153: [2024-04-20 12:30:34,693] [INFO] [logging.py:96:log_dist] [Rank 0] Creating BF16 optimizer |
|
100.83.134.153: [2024-04-20 12:30:34,860] [INFO] [utils.py:824:see_memory_usage] begin bf16_optimizer |
|
100.83.134.153: [2024-04-20 12:30:34,868] [INFO] [utils.py:825:see_memory_usage] MA 2.62 GB Max_MA 2.63 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:34,869] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 373.93 GB, percent = 37.1% |
|
100.83.134.153: [2024-04-20 12:30:35,038] [INFO] [utils.py:824:see_memory_usage] before initializing group 0 |
|
100.83.134.153: [2024-04-20 12:30:35,046] [INFO] [utils.py:825:see_memory_usage] MA 2.62 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:35,047] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 373.7 GB, percent = 37.1% |
|
100.83.134.153: [2024-04-20 12:30:35,139] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.153: [2024-04-20 12:30:35,140] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.153: [2024-04-20 12:30:35,140] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.153: [2024-04-20 12:30:35,145] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.148: [2024-04-20 12:30:35,286] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.153: [2024-04-20 12:30:35,305] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.148: [2024-04-20 12:30:35,316] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.148: [2024-04-20 12:30:35,316] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.153: [2024-04-20 12:30:35,335] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.153: [2024-04-20 12:30:35,351] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.148: [2024-04-20 12:30:35,406] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.148: [2024-04-20 12:30:35,515] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.148: [2024-04-20 12:30:35,524] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.148: [2024-04-20 12:30:35,642] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.148: [2024-04-20 12:30:35,663] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.153: [2024-04-20 12:30:35,852] [INFO] [utils.py:824:see_memory_usage] after initializing group 0 |
|
100.83.134.153: [2024-04-20 12:30:35,860] [INFO] [utils.py:825:see_memory_usage] MA 2.62 GB Max_MA 5.22 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:35,861] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 373.97 GB, percent = 37.1% |
|
100.83.134.153: [2024-04-20 12:30:36,031] [INFO] [utils.py:824:see_memory_usage] before initializing group 1 |
|
100.83.134.153: [2024-04-20 12:30:36,039] [INFO] [utils.py:825:see_memory_usage] MA 2.62 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:36,040] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 374.19 GB, percent = 37.1% |
|
100.83.134.153: [2024-04-20 12:30:36,307] [INFO] [utils.py:824:see_memory_usage] after initializing group 1 |
|
100.83.134.153: [2024-04-20 12:30:36,315] [INFO] [utils.py:825:see_memory_usage] MA 9.13 GB Max_MA 9.13 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:36,317] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 373.89 GB, percent = 37.1% |
|
100.83.134.153: [2024-04-20 12:30:36,552] [INFO] [utils.py:824:see_memory_usage] before initialize_optimizer |
|
100.83.134.153: [2024-04-20 12:30:36,561] [INFO] [utils.py:825:see_memory_usage] MA 9.13 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:36,562] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 374.2 GB, percent = 37.1% |
|
100.83.134.153: [2024-04-20 12:30:36,801] [INFO] [utils.py:824:see_memory_usage] end initialize_optimizer |
|
100.83.134.153: [2024-04-20 12:30:36,809] [INFO] [utils.py:825:see_memory_usage] MA 9.13 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:36,810] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 373.93 GB, percent = 37.1% |
|
100.83.134.153: [2024-04-20 12:30:37,002] [INFO] [utils.py:824:see_memory_usage] end bf16_optimizer |
|
100.83.134.153: [2024-04-20 12:30:37,010] [INFO] [utils.py:825:see_memory_usage] MA 9.13 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB |
|
100.83.134.153: [2024-04-20 12:30:37,011] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 374.06 GB, percent = 37.1% |
|
100.83.134.153: [2024-04-20 12:30:37,013] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = BF16_Optimizer |
|
100.83.134.153: [2024-04-20 12:30:37,014] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using client LR scheduler |
|
100.83.134.153: [2024-04-20 12:30:37,014] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = <megatron.learning_rates.AnnealingLR object at 0x7f1f3fc6e530> |
|
100.83.134.153: [2024-04-20 12:30:37,014] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0], mom=[(0.9, 0.95), (0.9, 0.95)] |
|
100.83.134.153: [2024-04-20 12:30:37,015] [INFO] [config.py:992:print] DeepSpeedEngine configuration: |
|
100.83.134.153: [2024-04-20 12:30:37,016] [INFO] [config.py:996:print] activation_checkpointing_config { |
|
100.83.134.153: "partition_activations": false, |
|
100.83.134.153: "contiguous_memory_optimization": false, |
|
100.83.134.153: "cpu_checkpointing": false, |
|
100.83.134.153: "number_checkpoints": null, |
|
100.83.134.153: "synchronize_checkpoint_boundary": false, |
|
100.83.134.153: "profile": false |
|
100.83.134.153: } |
|
100.83.134.153: [2024-04-20 12:30:37,016] [INFO] [config.py:996:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True} |
|
100.83.134.153: [2024-04-20 12:30:37,016] [INFO] [config.py:996:print] amp_enabled .................. False |
|
100.83.134.153: [2024-04-20 12:30:37,017] [INFO] [config.py:996:print] amp_params ................... False |
|
100.83.134.153: [2024-04-20 12:30:37,018] [INFO] [config.py:996:print] autotuning_config ............ { |
|
100.83.134.153: "enabled": false, |
|
100.83.134.153: "start_step": null, |
|
100.83.134.153: "end_step": null, |
|
100.83.134.153: "metric_path": null, |
|
100.83.134.153: "arg_mappings": null, |
|
100.83.134.153: "metric": "throughput", |
|
100.83.134.153: "model_info": null, |
|
100.83.134.153: "results_dir": "autotuning_results", |
|
100.83.134.153: "exps_dir": "autotuning_exps", |
|
100.83.134.153: "overwrite": true, |
|
100.83.134.153: "fast": true, |
|
100.83.134.153: "start_profile_step": 3, |
|
100.83.134.153: "end_profile_step": 5, |
|
100.83.134.153: "tuner_type": "gridsearch", |
|
100.83.134.153: "tuner_early_stopping": 5, |
|
100.83.134.153: "tuner_num_trials": 50, |
|
100.83.134.153: "model_info_path": null, |
|
100.83.134.153: "mp_size": 1, |
|
100.83.134.153: "max_train_batch_size": null, |
|
100.83.134.153: "min_train_batch_size": 1, |
|
100.83.134.153: "max_train_micro_batch_size_per_gpu": 1.024000e+03, |
|
100.83.134.153: "min_train_micro_batch_size_per_gpu": 1, |
|
100.83.134.153: "num_tuning_micro_batch_sizes": 3 |
|
100.83.134.153: } |
|
100.83.134.153: [2024-04-20 12:30:37,018] [INFO] [config.py:996:print] bfloat16_accumulate_grads_via_hooks True |
|
100.83.134.153: [2024-04-20 12:30:37,018] [INFO] [config.py:996:print] bfloat16_enabled ............. True |
|
100.83.134.153: [2024-04-20 12:30:37,019] [INFO] [config.py:996:print] checkpoint_parallel_write_pipeline False |
|
100.83.134.153: [2024-04-20 12:30:37,019] [INFO] [config.py:996:print] checkpoint_tag_validation_enabled True |
|
100.83.134.153: [2024-04-20 12:30:37,019] [INFO] [config.py:996:print] checkpoint_tag_validation_fail False |
|
100.83.134.153: [2024-04-20 12:30:37,020] [INFO] [config.py:996:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f1f377712d0> |
|
100.83.134.153: [2024-04-20 12:30:37,020] [INFO] [config.py:996:print] communication_data_type ...... None |
|
100.83.134.153: [2024-04-20 12:30:37,020] [INFO] [config.py:996:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}} |
|
100.83.134.153: [2024-04-20 12:30:37,021] [INFO] [config.py:996:print] curriculum_enabled_legacy .... False |
|
100.83.134.153: [2024-04-20 12:30:37,021] [INFO] [config.py:996:print] curriculum_params_legacy ..... False |
|
100.83.134.153: [2024-04-20 12:30:37,021] [INFO] [config.py:996:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}} |
|
100.83.134.153: [2024-04-20 12:30:37,021] [INFO] [config.py:996:print] data_efficiency_enabled ...... False |
|
100.83.134.153: [2024-04-20 12:30:37,021] [INFO] [config.py:996:print] dataloader_drop_last ......... False |
|
100.83.134.153: [2024-04-20 12:30:37,021] [INFO] [config.py:996:print] disable_allgather ............ False |
|
100.83.134.153: [2024-04-20 12:30:37,022] [INFO] [config.py:996:print] dump_state ................... False |
|
100.83.134.153: [2024-04-20 12:30:37,022] [INFO] [config.py:996:print] dynamic_loss_scale_args ...... None |
|
100.83.134.153: [2024-04-20 12:30:37,022] [INFO] [config.py:996:print] eigenvalue_enabled ........... False |
|
100.83.134.153: [2024-04-20 12:30:37,022] [INFO] [config.py:996:print] eigenvalue_gas_boundary_resolution 1 |
|
100.83.134.153: [2024-04-20 12:30:37,022] [INFO] [config.py:996:print] eigenvalue_layer_name ........ bert.encoder.layer |
|
100.83.134.153: [2024-04-20 12:30:37,022] [INFO] [config.py:996:print] eigenvalue_layer_num ......... 0 |
|
100.83.134.153: [2024-04-20 12:30:37,023] [INFO] [config.py:996:print] eigenvalue_max_iter .......... 100 |
|
100.83.134.153: [2024-04-20 12:30:37,023] [INFO] [config.py:996:print] eigenvalue_stability ......... 1e-06 |
|
100.83.134.153: [2024-04-20 12:30:37,023] [INFO] [config.py:996:print] eigenvalue_tol ............... 0.01 |
|
100.83.134.153: [2024-04-20 12:30:37,023] [INFO] [config.py:996:print] eigenvalue_verbose ........... False |
|
100.83.134.153: [2024-04-20 12:30:37,023] [INFO] [config.py:996:print] elasticity_enabled ........... False |
|
100.83.134.153: [2024-04-20 12:30:37,024] [INFO] [config.py:996:print] flops_profiler_config ........ { |
|
100.83.134.153: "enabled": false, |
|
100.83.134.153: "recompute_fwd_factor": 0.0, |
|
100.83.134.153: "profile_step": 1, |
|
100.83.134.153: "module_depth": -1, |
|
100.83.134.153: "top_modules": 1, |
|
100.83.134.153: "detailed": true, |
|
100.83.134.153: "output_file": null |
|
100.83.134.153: } |
|
100.83.134.153: [2024-04-20 12:30:37,024] [INFO] [config.py:996:print] fp16_auto_cast ............... None |
|
100.83.134.153: [2024-04-20 12:30:37,024] [INFO] [config.py:996:print] fp16_enabled ................. False |
|
100.83.134.153: [2024-04-20 12:30:37,024] [INFO] [config.py:996:print] fp16_master_weights_and_gradients False |
|
100.83.134.153: [2024-04-20 12:30:37,024] [INFO] [config.py:996:print] global_rank .................. 0 |
|
100.83.134.153: [2024-04-20 12:30:37,025] [INFO] [config.py:996:print] grad_accum_dtype ............. None |
|
100.83.134.153: [2024-04-20 12:30:37,025] [INFO] [config.py:996:print] gradient_accumulation_steps .. 64 |
|
100.83.134.153: [2024-04-20 12:30:37,025] [INFO] [config.py:996:print] gradient_clipping ............ 1.0 |
|
100.83.134.153: [2024-04-20 12:30:37,025] [INFO] [config.py:996:print] gradient_predivide_factor .... 1.0 |
|
100.83.134.153: [2024-04-20 12:30:37,025] [INFO] [config.py:996:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8 |
|
100.83.134.153: [2024-04-20 12:30:37,026] [INFO] [config.py:996:print] initial_dynamic_scale ........ 1 |
|
100.83.134.153: [2024-04-20 12:30:37,026] [INFO] [config.py:996:print] load_universal_checkpoint .... False |
|
100.83.134.153: [2024-04-20 12:30:37,026] [INFO] [config.py:996:print] loss_scale ................... 1.0 |
|
100.83.134.153: [2024-04-20 12:30:37,026] [INFO] [config.py:996:print] memory_breakdown ............. False |
|
100.83.134.153: [2024-04-20 12:30:37,026] [INFO] [config.py:996:print] mics_hierarchial_params_gather False |
|
100.83.134.153: [2024-04-20 12:30:37,026] [INFO] [config.py:996:print] mics_shard_size .............. -1 |
|
100.83.134.153: [2024-04-20 12:30:37,027] [INFO] [config.py:996:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False |
|
100.83.134.153: [2024-04-20 12:30:37,027] [INFO] [config.py:996:print] nebula_config ................ { |
|
100.83.134.153: "enabled": false, |
|
100.83.134.153: "persistent_storage_path": null, |
|
100.83.134.153: "persistent_time_interval": 100, |
|
100.83.134.153: "num_of_version_in_retention": 2, |
|
100.83.134.153: "enable_nebula_load": true, |
|
100.83.134.153: "load_path": null |
|
100.83.134.153: } |
|
100.83.134.153: [2024-04-20 12:30:37,028] [INFO] [config.py:996:print] optimizer_legacy_fusion ...... False |
|
100.83.134.153: [2024-04-20 12:30:37,028] [INFO] [config.py:996:print] optimizer_name ............... None |
|
100.83.134.153: [2024-04-20 12:30:37,028] [INFO] [config.py:996:print] optimizer_params ............. None |
|
100.83.134.153: [2024-04-20 12:30:37,028] [INFO] [config.py:996:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': False, 'grad_partitioned': False} |
|
100.83.134.153: [2024-04-20 12:30:37,028] [INFO] [config.py:996:print] pld_enabled .................. False |
|
100.83.134.153: [2024-04-20 12:30:37,028] [INFO] [config.py:996:print] pld_params ................... False |
|
100.83.134.153: [2024-04-20 12:30:37,029] [INFO] [config.py:996:print] prescale_gradients ........... False |
|
100.83.134.153: [2024-04-20 12:30:37,029] [INFO] [config.py:996:print] scheduler_name ............... None |
|
100.83.134.153: [2024-04-20 12:30:37,029] [INFO] [config.py:996:print] scheduler_params ............. None |
|
100.83.134.153: [2024-04-20 12:30:37,029] [INFO] [config.py:996:print] seq_parallel_communication_data_type torch.float32 |
|
100.83.134.153: [2024-04-20 12:30:37,029] [INFO] [config.py:996:print] sparse_attention ............. None |
|
100.83.134.153: [2024-04-20 12:30:37,029] [INFO] [config.py:996:print] sparse_gradients_enabled ..... False |
|
100.83.134.153: [2024-04-20 12:30:37,030] [INFO] [config.py:996:print] steps_per_print .............. 10 |
|
100.83.134.153: [2024-04-20 12:30:37,030] [INFO] [config.py:996:print] train_batch_size ............. 256 |
|
100.83.134.153: [2024-04-20 12:30:37,030] [INFO] [config.py:996:print] train_micro_batch_size_per_gpu 1 |
|
100.83.134.153: [2024-04-20 12:30:37,030] [INFO] [config.py:996:print] use_data_before_expert_parallel_ False |
|
100.83.134.153: [2024-04-20 12:30:37,030] [INFO] [config.py:996:print] use_node_local_storage ....... False |
|
100.83.134.153: [2024-04-20 12:30:37,030] [INFO] [config.py:996:print] wall_clock_breakdown ......... False |
|
100.83.134.153: [2024-04-20 12:30:37,030] [INFO] [config.py:996:print] weight_quantization_config ... None |
|
100.83.134.153: [2024-04-20 12:30:37,031] [INFO] [config.py:996:print] world_size ................... 4 |
|
100.83.134.153: [2024-04-20 12:30:37,031] [INFO] [config.py:996:print] zero_allow_comm_data_type_fp32 False |
|
100.83.134.153: [2024-04-20 12:30:37,031] [INFO] [config.py:996:print] zero_allow_untested_optimizer False |
|
100.83.134.153: [2024-04-20 12:30:37,031] [INFO] [config.py:996:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=False reduce_bucket_size=500,000,000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50,000,000 param_persistence_threshold=100,000 model_persistence_threshold=sys.maxsize max_live_parameters=1,000,000,000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False use_all_reduce_for_fetch_params=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True |
|
100.83.134.153: [2024-04-20 12:30:37,031] [INFO] [config.py:996:print] zero_enabled ................. False |
|
100.83.134.153: [2024-04-20 12:30:37,032] [INFO] [config.py:996:print] zero_force_ds_cpu_optimizer .. True |
|
100.83.134.153: [2024-04-20 12:30:37,032] [INFO] [config.py:996:print] zero_optimization_stage ...... 0 |
|
100.83.134.153: [2024-04-20 12:30:37,032] [INFO] [config.py:982:print_user_config] json = { |
|
100.83.134.153: "train_batch_size": 256, |
|
100.83.134.153: "train_micro_batch_size_per_gpu": 1, |
|
100.83.134.153: "steps_per_print": 10, |
|
100.83.134.153: "gradient_clipping": 1.0, |
|
100.83.134.153: "zero_optimization": { |
|
100.83.134.153: "stage": 0 |
|
100.83.134.153: }, |
|
100.83.134.153: "bf16": { |
|
100.83.134.153: "enabled": true, |
|
100.83.134.153: "accumulate_grads_via_hooks": true |
|
100.83.134.153: }, |
|
100.83.134.153: "fp16": { |
|
100.83.134.153: "enabled": false |
|
100.83.134.153: }, |
|
100.83.134.153: "wall_clock_breakdown": false, |
|
100.83.134.153: "pipeline": { |
|
100.83.134.153: "pipe_partitioned": false, |
|
100.83.134.153: "grad_partitioned": false |
|
100.83.134.153: } |
|
100.83.134.153: } |
|
100.83.134.153: [2024-04-20 12:30:37,032] [INFO] [engine.py:99:__init__] CONFIG: micro_batches=64 micro_batch_size=1 |
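Editor's note: the micro_batches=64 figure is just the DeepSpeed batch arithmetic restated. A quick check using only the numbers printed above (global batch 256, micro batch 1, data-parallel world size 4):

# Gradient accumulation implied by the batch sizes printed above.
global_batch_size = 256   # train_batch_size
micro_batch_size = 1      # train_micro_batch_size_per_gpu
dp_world_size = 4         # DeepSpeed "world_size" (number of data-parallel groups)

grad_accum_steps = global_batch_size // (micro_batch_size * dp_world_size)
print(grad_accum_steps)   # 64 -> matches gradient_accumulation_steps and micro_batches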
|
100.83.134.153: [2024-04-20 12:30:37,033] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False |
|
100.83.134.148: [2024-04-20 12:30:38,234] [INFO] [engine.py:180:__init__] RANK=8 STAGE=1 LAYERS=13 [11, 24) STAGE_PARAMS=1397969920 (1397.970M) TOTAL_PARAMS=5591869440 (5591.869M) UNIQUE_PARAMS=5591869440 (5591.869M) |
|
100.83.134.153: [2024-04-20 12:30:38,231] [INFO] [engine.py:180:__init__] RANK=1 STAGE=0 LAYERS=11 [0, 11) STAGE_PARAMS=1397964800 (1397.965M) TOTAL_PARAMS=5591869440 (5591.869M) UNIQUE_PARAMS=5591869440 (5591.869M) |
|
100.83.134.153: [2024-04-20 12:30:38,231] [INFO] [engine.py:180:__init__] RANK=0 STAGE=0 LAYERS=11 [0, 11) STAGE_PARAMS=1397964800 (1397.965M) TOTAL_PARAMS=5591869440 (5591.869M) UNIQUE_PARAMS=5591869440 (5591.869M) |
|
100.83.134.148: [2024-04-20 12:30:38,240] [INFO] [engine.py:180:__init__] RANK=9 STAGE=1 LAYERS=13 [11, 24) STAGE_PARAMS=1397969920 (1397.970M) TOTAL_PARAMS=5591869440 (5591.869M) UNIQUE_PARAMS=5591869440 (5591.869M) |
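Editor's note: a back-of-the-envelope check of the STAGE_PARAMS/TOTAL_PARAMS figures above. The sketch assumes details that are not printed in this log: the GPT-2 vocabulary (50,257) padded to 50,432 (a multiple of 128 x TP), an untied LM head on the last stage, two RMSNorm weights per layer, and no biases.

# Parameter-count sanity check for hidden=5120, ffn=13824, 8 transformer layers
# per pipeline stage, TP=2 (padded vocab of 50432 is an assumption).
h, ffn, vocab_padded, tp = 5120, 13824, 50432, 2

per_layer = (4 * h * h + 3 * h * ffn) // tp + 2 * h   # QKVO + SwiGLU MLP shards + 2 RMSNorms
embed = vocab_padded * h // tp                        # tensor-parallel embedding shard

stage0 = 8 * per_layer + embed          # embedding + transformer layers 0-7
stage1 = 8 * per_layer + h + embed      # layers 8-15 + final RMSNorm + LM head shard
print(stage0, stage1)                   # 1397964800 1397969920, as logged
print(2 * (stage0 + stage1))            # 5591869440 total unique parameters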
|
100.83.134.148: [2024-04-20 12:30:38,251] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/llama13b_poc//checkpoints_zero_stage_2/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
100.83.134.153: [2024-04-20 12:30:38,269] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/llama13b_poc//checkpoints_zero_stage_2/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
100.83.134.153: WARNING: could not find the metadata file /data/output/llama13b_poc//checkpoints_zero_stage_2 |
|
100.83.134.153: will not load any checkpoints and will start from random |
|
100.83.134.148: time (ms) | load-checkpoint: 44.53 |
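Editor's note: the load_checkpoint warnings above are expected on a first run. As the message itself indicates, DeepSpeed resolves the checkpoint tag from a plain-text "latest" file inside the save directory; since nothing has been saved yet, training falls back to random initialization. A minimal illustration of that lookup (the helper below is hypothetical; only the path comes from this run):

from pathlib import Path

def resolve_latest_tag(ckpt_dir: str):
    """Return the checkpoint tag recorded in <ckpt_dir>/latest, or None."""
    latest = Path(ckpt_dir) / "latest"
    if not latest.is_file():           # first run: no checkpoint saved yet
        return None                    # -> training starts from random weights
    return latest.read_text().strip()  # e.g. "global_step100" after a later save

print(resolve_latest_tag("/data/output/llama13b_poc//checkpoints_zero_stage_2"))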
|
100.83.134.153: [after model, optimizer, and learning rate scheduler are built] datetime: 2024-04-20 12:30:38 |
|
100.83.134.153: > building train, validation, and test datasets ... |
|
100.83.134.153: > datasets target sizes (minimum size): |
|
100.83.134.153: train: 2560000 |
|
100.83.134.153: validation: 258560 |
|
100.83.134.153: test: 2560 |
|
100.83.134.153: > building train, validation, and test datasets for GPT ... |
|
100.83.134.153: Single data path provided for train, valid & test |
|
100.83.134.153: > building dataset index ... |
|
100.83.134.153: reading sizes... |
|
100.83.134.153: reading pointers... |
|
100.83.134.153: reading document index... |
|
100.83.134.153: creating numpy buffer of mmap... |
|
100.83.134.153: creating memory view of numpy buffer... |
|
100.83.134.153: > finished creating indexed dataset in 0.011392 seconds |
|
100.83.134.153: number of documents: 1558306 |
|
100.83.134.153: > dataset split: |
|
100.83.134.153: train: |
|
100.83.134.153: document indices in [0, 1509999) total of 1509999 documents |
|
100.83.134.153: validation: |
|
100.83.134.153: document indices in [1509999, 1556748) total of 46749 documents |
|
100.83.134.153: test: |
|
100.83.134.153: document indices in [1556748, 1558306) total of 1558 documents |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: > loaded doc-idx mapping from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: > loaded sample-idx mapping from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.153: > loaded shuffle-idx mapping from /data/arxiv//tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.153: loaded indexed file in 0.054 seconds |
|
100.83.134.153: total number of samples: 15244235 |
|
100.83.134.153: total number of epochs: 1 |
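Editor's note: the document counts and the single training epoch can be reproduced from the numbers above, assuming Megatron's default 969,30,1 train/valid/test split (the actual --split value is not shown in this part of the log):

# Reconstruct the dataset split boundaries and check that one epoch suffices.
docs = 1_558_306                     # "number of documents" above
splits = [969, 30, 1]                # assumed default --split weights

bounds, acc = [0], 0
for s in splits:
    acc += s
    bounds.append(int(round(docs * acc / sum(splits))))
print(bounds)                        # [0, 1509999, 1556748, 1558306], as logged

train_target = 10_000 * 256          # train-iters * global batch = 2,560,000 samples
print(train_target <= 15_244_235)    # True: one pass over the train split is enough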
|
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.153: > loaded doc-idx mapping from /data/arxiv//tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: > loaded sample-idx mapping from /data/arxiv//tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.153: > loaded shuffle-idx mapping from /data/arxiv//tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.153: loaded indexed file in 0.015 seconds |
|
100.83.134.153: total number of samples: 481162 |
|
100.83.134.153: total number of epochs: 1 |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npyLoading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.153: |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npyLoading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.153: > loaded doc-idx mapping from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: > loaded sample-idx mapping from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npyLoading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.148: |
|
100.83.134.153: > loaded shuffle-idx mapping from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy |
|
100.83.134.153: loaded indexed file in 0.004 seconds |
|
100.83.134.153: total number of samples: 16581 |
|
100.83.134.153: total number of epochs: 1 |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy |
|
100.83.134.153: > finished creating GPT datasets ... |
|
100.83.134.148: Loading dataset index file from /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy |
|
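Note on the index-map files being loaded above: the file names encode the requested number of samples, the sequence length, and the shuffle seed. The sketch below shows how names of that shape can be composed and where the 258560 (valid) and 2560 (test) sample counts plausibly come from, assuming the stock Megatron-LM sizing logic of eval-iters, eval-interval, and global batch size; the helper function is illustrative only, not code from this repository.

# Illustrative reconstruction of the <prefix>_<split>_indexmap_<N>ns_<L>sl_<seed>s_* names (assumption, not repo code).
def indexmap_name(prefix, split, num_samples, seq_len, seed, kind):
    # kind is one of "doc", "sample", "shuffle"
    return f"{prefix}_{split}_indexmap_{num_samples}ns_{seq_len}sl_{seed}s_{kind}_idx.npy"

eval_iters, eval_interval, train_iters, global_batch = 10, 100, 10000, 256
print((train_iters // eval_interval + 1) * eval_iters * global_batch)  # 258560 -> matches the valid index map
print(eval_iters * global_batch)                                       # 2560   -> matches the test index map

print(indexmap_name("/data/arxiv//tokenized_text_document", "test", 2560, 2048, 1234, "shuffle"))
# -> /data/arxiv//tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy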
100.83.134.148: time (ms) | model-and-optimizer-setup: 6332.37 | train/valid/test-data-iterators-setup: 2547.30 |
|
100.83.134.153: [after dataloaders are built] datetime: 2024-04-20 12:30:41 |
|
100.83.134.153: done with setup ... |
|
100.83.134.153: training ... |
|
100.83.134.148: 2024-04-20 12:30:42 Start last rank evaluation |
|
100.83.134.153: [before the start of training step] datetime: 2024-04-20 12:30:42 |
|
100.83.134.148: ------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:32:58 | validation loss at iteration 2 | lm loss value: 1.186498E+01 | lm loss PPL: 1.421992E+05 | |
|
100.83.134.148: ------------------------------------------------------------------------------------------------------------------- |
|
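The two validation columns logged in this run are related by perplexity = exp(loss). A minimal sanity check of the iteration-2 figures above, assuming the reported lm loss is a natural-log cross-entropy (the printed numbers are consistent with that assumption):

# Sanity check (not part of the run): perplexity should be exp(lm loss).
import math
lm_loss = 11.86498            # "lm loss value" at iteration 2
print(math.exp(lm_loss))      # ~1.42199e+05, matching "lm loss PPL: 1.421992E+05"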
100.83.134.148: 2024-04-20 12:32:58 Start last rank evaluation |
|
100.83.134.148: ------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:34:43 | validation loss at iteration 4 | lm loss value: 1.186648E+01 | lm loss PPL: 1.424115E+05 | |
|
100.83.134.148: ------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:34:43 Start last rank evaluation |
|
100.83.134.148: ------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:36:28 | validation loss at iteration 6 | lm loss value: 1.187085E+01 | lm loss PPL: 1.430364E+05 | |
|
100.83.134.148: ------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:36:28 Start last rank evaluation |
|
100.83.134.148: ------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:38:13 | validation loss at iteration 8 | lm loss value: 1.186528E+01 | lm loss PPL: 1.422418E+05 | |
|
100.83.134.148: ------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 10/ 10000 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (ms): 45147.8 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.670 | TFLOPs: 24.22 | |
|
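The per-iteration counters in the line above are internally consistent: consumed tokens = consumed samples x seq-length (2048), and samples per second = global batch size / elapsed time per iteration. A small cross-check using the values from the iteration-10 line (a sketch, not output from the run; the TFLOPs column is not re-derived here):

# Cross-check of the iteration-10 throughput figures logged above.
seq_len      = 2048       # --seq-length
global_batch = 256        # --global-batch-size
iter_ms      = 45147.8    # elapsed time per iteration (ms)

print(2560 * seq_len)                  # 5242880  -> "consumed tokens"
print(global_batch / (iter_ms / 1e3))  # ~5.670   -> "samples per second"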
100.83.134.148: 2024-04-20 12:38:13 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:39:56 | validation loss at iteration 10 | lm loss value: 1.186884E+01 | lm loss PPL: 1.427490E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:39:56 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:41:39 | validation loss at iteration 12 | lm loss value: 1.187189E+01 | lm loss PPL: 1.431841E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:41:39 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:43:28 | validation loss at iteration 14 | lm loss value: 1.186675E+01 | lm loss PPL: 1.424502E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:43:28 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:45:19 | validation loss at iteration 16 | lm loss value: 1.186648E+01 | lm loss PPL: 1.424126E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:45:19 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:46:59 | validation loss at iteration 18 | lm loss value: 1.186634E+01 | lm loss PPL: 1.423924E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 20/ 10000 | consumed samples: 5120 | consumed tokens: 10485760 | elapsed time per iteration (ms): 52634.3 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.864 | TFLOPs: 20.78 | |
|
100.83.134.148: 2024-04-20 12:46:59 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:48:33 | validation loss at iteration 20 | lm loss value: 1.186984E+01 | lm loss PPL: 1.428912E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:48:33 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:50:12 | validation loss at iteration 22 | lm loss value: 1.186877E+01 | lm loss PPL: 1.427388E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:50:12 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:51:43 | validation loss at iteration 24 | lm loss value: 1.186635E+01 | lm loss PPL: 1.423932E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:51:43 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:53:26 | validation loss at iteration 26 | lm loss value: 1.186835E+01 | lm loss PPL: 1.426792E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:53:26 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:55:10 | validation loss at iteration 28 | lm loss value: 1.186899E+01 | lm loss PPL: 1.427705E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 30/ 10000 | consumed samples: 7680 | consumed tokens: 15728640 | elapsed time per iteration (ms): 49070.0 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.217 | TFLOPs: 22.29 | |
|
100.83.134.148: 2024-04-20 12:55:10 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:56:51 | validation loss at iteration 30 | lm loss value: 1.186924E+01 | lm loss PPL: 1.428051E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:56:51 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:58:31 | validation loss at iteration 32 | lm loss value: 1.186650E+01 | lm loss PPL: 1.424149E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 12:58:31 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:00:10 | validation loss at iteration 34 | lm loss value: 1.187002E+01 | lm loss PPL: 1.429175E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:00:10 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:01:46 | validation loss at iteration 36 | lm loss value: 1.186934E+01 | lm loss PPL: 1.428197E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:01:47 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:03:20 | validation loss at iteration 38 | lm loss value: 1.186555E+01 | lm loss PPL: 1.422804E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 40/ 10000 | consumed samples: 10240 | consumed tokens: 20971520 | elapsed time per iteration (ms): 48986.4 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.226 | TFLOPs: 22.32 | |
|
100.83.134.148: 2024-04-20 13:03:20 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:04:58 | validation loss at iteration 40 | lm loss value: 1.186608E+01 | lm loss PPL: 1.423557E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:04:58 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:06:38 | validation loss at iteration 42 | lm loss value: 1.187025E+01 | lm loss PPL: 1.429502E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:06:38 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:08:14 | validation loss at iteration 44 | lm loss value: 1.186828E+01 | lm loss PPL: 1.426692E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:08:14 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:09:50 | validation loss at iteration 46 | lm loss value: 1.186816E+01 | lm loss PPL: 1.426514E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:09:50 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:11:25 | validation loss at iteration 48 | lm loss value: 1.186999E+01 | lm loss PPL: 1.429122E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 50/ 10000 | consumed samples: 12800 | consumed tokens: 26214400 | elapsed time per iteration (ms): 48505.6 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.278 | TFLOPs: 22.55 | |
|
100.83.134.148: 2024-04-20 13:11:25 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:12:51 | validation loss at iteration 50 | lm loss value: 1.186793E+01 | lm loss PPL: 1.426192E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:12:51 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:14:17 | validation loss at iteration 52 | lm loss value: 1.187073E+01 | lm loss PPL: 1.430180E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:14:17 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:15:42 | validation loss at iteration 54 | lm loss value: 1.187126E+01 | lm loss PPL: 1.430942E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:15:42 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:17:14 | validation loss at iteration 56 | lm loss value: 1.186893E+01 | lm loss PPL: 1.427617E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:17:14 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:18:51 | validation loss at iteration 58 | lm loss value: 1.186986E+01 | lm loss PPL: 1.428939E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 60/ 10000 | consumed samples: 15360 | consumed tokens: 31457280 | elapsed time per iteration (ms): 44618.4 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.738 | TFLOPs: 24.51 | |
|
100.83.134.148: 2024-04-20 13:18:51 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:20:18 | validation loss at iteration 60 | lm loss value: 1.186818E+01 | lm loss PPL: 1.426545E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:20:18 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:21:56 | validation loss at iteration 62 | lm loss value: 1.186751E+01 | lm loss PPL: 1.425583E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:21:56 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:23:33 | validation loss at iteration 64 | lm loss value: 1.187158E+01 | lm loss PPL: 1.431402E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:23:33 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:25:07 | validation loss at iteration 66 | lm loss value: 1.187133E+01 | lm loss PPL: 1.431050E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:25:07 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:26:53 | validation loss at iteration 68 | lm loss value: 1.186932E+01 | lm loss PPL: 1.428174E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 70/ 10000 | consumed samples: 17920 | consumed tokens: 36700160 | elapsed time per iteration (ms): 48141.6 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.318 | TFLOPs: 22.72 | |
|
100.83.134.148: 2024-04-20 13:26:53 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:28:33 | validation loss at iteration 70 | lm loss value: 1.187103E+01 | lm loss PPL: 1.430608E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:28:33 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:30:12 | validation loss at iteration 72 | lm loss value: 1.186635E+01 | lm loss PPL: 1.423938E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:30:12 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:31:56 | validation loss at iteration 74 | lm loss value: 1.187018E+01 | lm loss PPL: 1.429396E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:31:56 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:33:36 | validation loss at iteration 76 | lm loss value: 1.186723E+01 | lm loss PPL: 1.425195E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:33:36 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:35:16 | validation loss at iteration 78 | lm loss value: 1.187104E+01 | lm loss PPL: 1.430631E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 80/ 10000 | consumed samples: 20480 | consumed tokens: 41943040 | elapsed time per iteration (ms): 50396.0 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.080 | TFLOPs: 21.70 | |
|
100.83.134.148: 2024-04-20 13:35:17 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:37:00 | validation loss at iteration 80 | lm loss value: 1.187048E+01 | lm loss PPL: 1.429827E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:37:00 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:38:47 | validation loss at iteration 82 | lm loss value: 1.187338E+01 | lm loss PPL: 1.433985E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:38:47 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:40:35 | validation loss at iteration 84 | lm loss value: 1.186994E+01 | lm loss PPL: 1.429062E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:40:35 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:42:17 | validation loss at iteration 86 | lm loss value: 1.186928E+01 | lm loss PPL: 1.428114E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:42:17 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:43:57 | validation loss at iteration 88 | lm loss value: 1.186796E+01 | lm loss PPL: 1.426230E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 90/ 10000 | consumed samples: 23040 | consumed tokens: 47185920 | elapsed time per iteration (ms): 52025.2 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.921 | TFLOPs: 21.02 | |
|
100.83.134.148: 2024-04-20 13:43:57 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:45:30 | validation loss at iteration 90 | lm loss value: 1.186771E+01 | lm loss PPL: 1.425871E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:45:30 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:47:19 | validation loss at iteration 92 | lm loss value: 1.187025E+01 | lm loss PPL: 1.429506E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:47:19 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:49:11 | validation loss at iteration 94 | lm loss value: 1.186813E+01 | lm loss PPL: 1.426469E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:49:11 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:51:05 | validation loss at iteration 96 | lm loss value: 1.186629E+01 | lm loss PPL: 1.423854E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:51:05 Start last rank evaluation |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:53:03 | validation loss at iteration 98 | lm loss value: 1.187007E+01 | lm loss PPL: 1.429238E+05 | |
|
100.83.134.148: -------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 100/ 10000 | consumed samples: 25600 | consumed tokens: 52428800 | elapsed time per iteration (ms): 54590.8 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.689 | TFLOPs: 20.03 | |
|
100.83.134.148: 2024-04-20 13:53:03 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:54:53 | validation loss at iteration 100 | lm loss value: 1.186618E+01 | lm loss PPL: 1.423689E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:54:53 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:56:36 | validation loss at iteration 102 | lm loss value: 1.186861E+01 | lm loss PPL: 1.427159E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:56:36 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:58:21 | validation loss at iteration 104 | lm loss value: 1.186992E+01 | lm loss PPL: 1.429028E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 13:58:21 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:00:06 | validation loss at iteration 106 | lm loss value: 1.186686E+01 | lm loss PPL: 1.424667E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:00:06 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:01:55 | validation loss at iteration 108 | lm loss value: 1.186692E+01 | lm loss PPL: 1.424749E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 110/ 10000 | consumed samples: 28160 | consumed tokens: 57671680 | elapsed time per iteration (ms): 53222.8 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.810 | TFLOPs: 20.55 | |
|
100.83.134.148: 2024-04-20 14:01:55 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:03:35 | validation loss at iteration 110 | lm loss value: 1.187188E+01 | lm loss PPL: 1.431832E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:03:35 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:05:15 | validation loss at iteration 112 | lm loss value: 1.187282E+01 | lm loss PPL: 1.433179E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:05:15 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:06:55 | validation loss at iteration 114 | lm loss value: 1.186600E+01 | lm loss PPL: 1.423439E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:06:55 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:08:28 | validation loss at iteration 116 | lm loss value: 1.186373E+01 | lm loss PPL: 1.420216E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:08:28 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:10:22 | validation loss at iteration 118 | lm loss value: 1.186839E+01 | lm loss PPL: 1.426845E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 120/ 10000 | consumed samples: 30720 | consumed tokens: 62914560 | elapsed time per iteration (ms): 50715.2 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.048 | TFLOPs: 21.56 | |
|
100.83.134.148: 2024-04-20 14:10:22 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:12:14 | validation loss at iteration 120 | lm loss value: 1.186989E+01 | lm loss PPL: 1.428985E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:12:14 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:14:09 | validation loss at iteration 122 | lm loss value: 1.186690E+01 | lm loss PPL: 1.424722E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:14:09 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:16:06 | validation loss at iteration 124 | lm loss value: 1.186539E+01 | lm loss PPL: 1.422576E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:16:06 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:18:04 | validation loss at iteration 126 | lm loss value: 1.186744E+01 | lm loss PPL: 1.425483E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:18:04 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:19:46 | validation loss at iteration 128 | lm loss value: 1.187071E+01 | lm loss PPL: 1.430153E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 130/ 10000 | consumed samples: 33280 | consumed tokens: 68157440 | elapsed time per iteration (ms): 56442.0 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.536 | TFLOPs: 19.38 | |
|
100.83.134.148: 2024-04-20 14:19:46 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:21:25 | validation loss at iteration 130 | lm loss value: 1.187044E+01 | lm loss PPL: 1.429771E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:21:25 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:23:08 | validation loss at iteration 132 | lm loss value: 1.186666E+01 | lm loss PPL: 1.424370E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:23:08 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:24:47 | validation loss at iteration 134 | lm loss value: 1.186635E+01 | lm loss PPL: 1.423938E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:24:47 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:26:17 | validation loss at iteration 136 | lm loss value: 1.186969E+01 | lm loss PPL: 1.428706E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:26:17 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:27:49 | validation loss at iteration 138 | lm loss value: 1.187100E+01 | lm loss PPL: 1.430566E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 140/ 10000 | consumed samples: 35840 | consumed tokens: 73400320 | elapsed time per iteration (ms): 48270.8 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.303 | TFLOPs: 22.66 | |
|
100.83.134.148: 2024-04-20 14:27:49 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:29:25 | validation loss at iteration 140 | lm loss value: 1.187148E+01 | lm loss PPL: 1.431266E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:29:25 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:31:08 | validation loss at iteration 142 | lm loss value: 1.186775E+01 | lm loss PPL: 1.425925E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:31:08 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:32:44 | validation loss at iteration 144 | lm loss value: 1.187077E+01 | lm loss PPL: 1.430237E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:32:44 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:34:23 | validation loss at iteration 146 | lm loss value: 1.186169E+01 | lm loss PPL: 1.417313E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:34:23 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:36:19 | validation loss at iteration 148 | lm loss value: 1.186890E+01 | lm loss PPL: 1.427574E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 150/ 10000 | consumed samples: 38400 | consumed tokens: 78643200 | elapsed time per iteration (ms): 50985.6 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.021 | TFLOPs: 21.45 | |
|
100.83.134.148: 2024-04-20 14:36:19 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:38:13 | validation loss at iteration 150 | lm loss value: 1.186829E+01 | lm loss PPL: 1.426706E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:38:13 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:40:08 | validation loss at iteration 152 | lm loss value: 1.186775E+01 | lm loss PPL: 1.425928E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:40:08 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:42:02 | validation loss at iteration 154 | lm loss value: 1.186833E+01 | lm loss PPL: 1.426763E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:42:02 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:43:59 | validation loss at iteration 156 | lm loss value: 1.186666E+01 | lm loss PPL: 1.424370E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:43:59 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:45:53 | validation loss at iteration 158 | lm loss value: 1.187116E+01 | lm loss PPL: 1.430807E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 160/ 10000 | consumed samples: 40960 | consumed tokens: 83886080 | elapsed time per iteration (ms): 57384.4 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.461 | TFLOPs: 19.06 | |
|
100.83.134.148: 2024-04-20 14:45:53 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:47:45 | validation loss at iteration 160 | lm loss value: 1.186766E+01 | lm loss PPL: 1.425796E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:47:45 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:49:38 | validation loss at iteration 162 | lm loss value: 1.186446E+01 | lm loss PPL: 1.421250E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:49:38 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:51:19 | validation loss at iteration 164 | lm loss value: 1.186931E+01 | lm loss PPL: 1.428152E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:51:19 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:53:04 | validation loss at iteration 166 | lm loss value: 1.186959E+01 | lm loss PPL: 1.428559E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:53:04 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:54:52 | validation loss at iteration 168 | lm loss value: 1.186724E+01 | lm loss PPL: 1.425202E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 170/ 10000 | consumed samples: 43520 | consumed tokens: 89128960 | elapsed time per iteration (ms): 53951.6 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.745 | TFLOPs: 20.27 | |
|
100.83.134.148: 2024-04-20 14:54:52 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:56:34 | validation loss at iteration 170 | lm loss value: 1.186861E+01 | lm loss PPL: 1.427159E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:56:34 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:57:24 | validation loss at iteration 172 | lm loss value: 1.186857E+01 | lm loss PPL: 1.427101E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:57:24 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:59:25 | validation loss at iteration 174 | lm loss value: 1.186522E+01 | lm loss PPL: 1.422333E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 14:59:25 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:01:10 | validation loss at iteration 176 | lm loss value: 1.187229E+01 | lm loss PPL: 1.432425E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:01:10 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:02:50 | validation loss at iteration 178 | lm loss value: 1.187032E+01 | lm loss PPL: 1.429603E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 180/ 10000 | consumed samples: 46080 | consumed tokens: 94371840 | elapsed time per iteration (ms): 47769.5 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.359 | TFLOPs: 22.89 | |
|
100.83.134.148: 2024-04-20 15:02:50 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:04:18 | validation loss at iteration 180 | lm loss value: 1.186629E+01 | lm loss PPL: 1.423849E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:04:18 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:05:48 | validation loss at iteration 182 | lm loss value: 1.187278E+01 | lm loss PPL: 1.433123E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:05:48 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:07:25 | validation loss at iteration 184 | lm loss value: 1.186896E+01 | lm loss PPL: 1.427653E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:07:25 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:09:09 | validation loss at iteration 186 | lm loss value: 1.186582E+01 | lm loss PPL: 1.423176E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:09:09 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:10:55 | validation loss at iteration 188 | lm loss value: 1.187152E+01 | lm loss PPL: 1.431316E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 190/ 10000 | consumed samples: 48640 | consumed tokens: 99614720 | elapsed time per iteration (ms): 48476.3 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.281 | TFLOPs: 22.56 | |
|
100.83.134.148: 2024-04-20 15:10:55 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:11:58 | validation loss at iteration 190 | lm loss value: 1.187099E+01 | lm loss PPL: 1.430551E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:11:58 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:13:39 | validation loss at iteration 192 | lm loss value: 1.187028E+01 | lm loss PPL: 1.429548E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:13:39 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:15:09 | validation loss at iteration 194 | lm loss value: 1.187070E+01 | lm loss PPL: 1.430143E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:15:09 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:16:47 | validation loss at iteration 196 | lm loss value: 1.186665E+01 | lm loss PPL: 1.424365E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:16:47 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:18:29 | validation loss at iteration 198 | lm loss value: 1.186438E+01 | lm loss PPL: 1.421126E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 200/ 10000 | consumed samples: 51200 | consumed tokens: 104857600 | elapsed time per iteration (ms): 45415.0 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.637 | TFLOPs: 24.08 | |
|
100.83.134.148: 2024-04-20 15:18:29 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:20:17 | validation loss at iteration 200 | lm loss value: 1.187102E+01 | lm loss PPL: 1.430596E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:20:17 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:22:07 | validation loss at iteration 202 | lm loss value: 1.187050E+01 | lm loss PPL: 1.429863E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:22:07 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:24:01 | validation loss at iteration 204 | lm loss value: 1.186951E+01 | lm loss PPL: 1.428446E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:24:01 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:25:45 | validation loss at iteration 206 | lm loss value: 1.186554E+01 | lm loss PPL: 1.422789E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:25:45 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:26:57 | validation loss at iteration 208 | lm loss value: 1.187101E+01 | lm loss PPL: 1.430586E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 210/ 10000 | consumed samples: 53760 | consumed tokens: 110100480 | elapsed time per iteration (ms): 50789.2 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.040 | TFLOPs: 21.53 | |
|
100.83.134.148: 2024-04-20 15:26:57 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:28:56 | validation loss at iteration 210 | lm loss value: 1.187140E+01 | lm loss PPL: 1.431140E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:28:56 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:30:59 | validation loss at iteration 212 | lm loss value: 1.186699E+01 | lm loss PPL: 1.424853E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:30:59 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:32:52 | validation loss at iteration 214 | lm loss value: 1.186904E+01 | lm loss PPL: 1.427770E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:32:52 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:34:47 | validation loss at iteration 216 | lm loss value: 1.187348E+01 | lm loss PPL: 1.434120E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:34:47 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:36:17 | validation loss at iteration 218 | lm loss value: 1.186895E+01 | lm loss PPL: 1.427648E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 220/ 10000 | consumed samples: 56320 | consumed tokens: 115343360 | elapsed time per iteration (ms): 56003.2 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.571 | TFLOPs: 19.53 | |
|
100.83.134.148: 2024-04-20 15:36:17 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:37:45 | validation loss at iteration 220 | lm loss value: 1.187345E+01 | lm loss PPL: 1.434086E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:37:45 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:39:14 | validation loss at iteration 222 | lm loss value: 1.187202E+01 | lm loss PPL: 1.432032E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:39:14 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:40:44 | validation loss at iteration 224 | lm loss value: 1.187682E+01 | lm loss PPL: 1.438929E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:40:44 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:42:13 | validation loss at iteration 226 | lm loss value: 1.186917E+01 | lm loss PPL: 1.427954E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:42:13 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:43:29 | validation loss at iteration 228 | lm loss value: 1.186981E+01 | lm loss PPL: 1.428864E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 230/ 10000 | consumed samples: 58880 | consumed tokens: 120586240 | elapsed time per iteration (ms): 43163.2 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.931 | TFLOPs: 25.34 | |
|
100.83.134.148: 2024-04-20 15:43:29 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:45:11 | validation loss at iteration 230 | lm loss value: 1.186887E+01 | lm loss PPL: 1.427531E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:45:11 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:46:58 | validation loss at iteration 232 | lm loss value: 1.187210E+01 | lm loss PPL: 1.432152E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:46:58 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:48:38 | validation loss at iteration 234 | lm loss value: 1.187094E+01 | lm loss PPL: 1.430485E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:48:38 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:50:23 | validation loss at iteration 236 | lm loss value: 1.186640E+01 | lm loss PPL: 1.424006E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:50:23 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:52:22 | validation loss at iteration 238 | lm loss value: 1.186799E+01 | lm loss PPL: 1.426273E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 240/ 10000 | consumed samples: 61440 | consumed tokens: 125829120 | elapsed time per iteration (ms): 53392.5 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.795 | TFLOPs: 20.48 | |
|
100.83.134.148: 2024-04-20 15:52:22 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:54:26 | validation loss at iteration 240 | lm loss value: 1.187220E+01 | lm loss PPL: 1.432287E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:54:26 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:56:26 | validation loss at iteration 242 | lm loss value: 1.186974E+01 | lm loss PPL: 1.428772E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:56:26 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:58:15 | validation loss at iteration 244 | lm loss value: 1.187326E+01 | lm loss PPL: 1.433803E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:58:15 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:59:18 | validation loss at iteration 246 | lm loss value: 1.187495E+01 | lm loss PPL: 1.436228E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 15:59:18 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:01:06 | validation loss at iteration 248 | lm loss value: 1.186860E+01 | lm loss PPL: 1.427148E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 250/ 10000 | consumed samples: 64000 | consumed tokens: 131072000 | elapsed time per iteration (ms): 52323.6 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 4.893 | TFLOPs: 20.90 | |
|
100.83.134.148: 2024-04-20 16:01:06 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:02:51 | validation loss at iteration 250 | lm loss value: 1.187388E+01 | lm loss PPL: 1.434699E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:02:51 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:04:28 | validation loss at iteration 252 | lm loss value: 1.187039E+01 | lm loss PPL: 1.429701E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:04:28 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:06:02 | validation loss at iteration 254 | lm loss value: 1.187127E+01 | lm loss PPL: 1.430952E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:06:02 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:07:36 | validation loss at iteration 256 | lm loss value: 1.186484E+01 | lm loss PPL: 1.421784E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:07:36 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:09:17 | validation loss at iteration 258 | lm loss value: 1.186984E+01 | lm loss PPL: 1.428908E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 260/ 10000 | consumed samples: 66560 | consumed tokens: 136314880 | elapsed time per iteration (ms): 49088.5 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.215 | TFLOPs: 22.28 | |
|
100.83.134.148: 2024-04-20 16:09:17 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:11:00 | validation loss at iteration 260 | lm loss value: 1.186593E+01 | lm loss PPL: 1.423340E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:11:00 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:12:38 | validation loss at iteration 262 | lm loss value: 1.186783E+01 | lm loss PPL: 1.426050E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:12:38 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:13:34 | validation loss at iteration 264 | lm loss value: 1.186687E+01 | lm loss PPL: 1.424671E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:13:34 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:15:09 | validation loss at iteration 266 | lm loss value: 1.187080E+01 | lm loss PPL: 1.430289E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:15:09 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:16:39 | validation loss at iteration 268 | lm loss value: 1.186694E+01 | lm loss PPL: 1.424781E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 270/ 10000 | consumed samples: 69120 | consumed tokens: 141557760 | elapsed time per iteration (ms): 44229.6 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.788 | TFLOPs: 24.73 | |
|
100.83.134.148: 2024-04-20 16:16:39 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:18:11 | validation loss at iteration 270 | lm loss value: 1.187060E+01 | lm loss PPL: 1.430001E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:18:11 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:19:44 | validation loss at iteration 272 | lm loss value: 1.186851E+01 | lm loss PPL: 1.427012E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:19:44 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:21:15 | validation loss at iteration 274 | lm loss value: 1.186858E+01 | lm loss PPL: 1.427117E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:21:15 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:22:55 | validation loss at iteration 276 | lm loss value: 1.186752E+01 | lm loss PPL: 1.425599E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:22:55 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:24:34 | validation loss at iteration 278 | lm loss value: 1.186512E+01 | lm loss PPL: 1.422184E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 280/ 10000 | consumed samples: 71680 | consumed tokens: 146800640 | elapsed time per iteration (ms): 47505.4 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 5.389 | TFLOPs: 23.02 | |
|
100.83.134.148: 2024-04-20 16:24:34 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:26:09 | validation loss at iteration 280 | lm loss value: 1.186636E+01 | lm loss PPL: 1.423951E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:26:09 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:26:40 | validation loss at iteration 282 | lm loss value: 1.186418E+01 | lm loss PPL: 1.420854E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:26:40 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:27:03 | validation loss at iteration 284 | lm loss value: 1.187145E+01 | lm loss PPL: 1.431222E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:27:03 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:27:27 | validation loss at iteration 286 | lm loss value: 1.186806E+01 | lm loss PPL: 1.426377E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:27:27 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:27:50 | validation loss at iteration 288 | lm loss value: 1.187208E+01 | lm loss PPL: 1.432118E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 290/ 10000 | consumed samples: 74240 | consumed tokens: 152043520 | elapsed time per iteration (ms): 19620.7 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 13.047 | TFLOPs: 55.74 | |
|
100.83.134.148: 2024-04-20 16:27:50 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:28:14 | validation loss at iteration 290 | lm loss value: 1.186481E+01 | lm loss PPL: 1.421740E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:28:14 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:28:37 | validation loss at iteration 292 | lm loss value: 1.186815E+01 | lm loss PPL: 1.426507E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:28:37 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:29:01 | validation loss at iteration 294 | lm loss value: 1.187083E+01 | lm loss PPL: 1.430327E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:29:01 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:29:24 | validation loss at iteration 296 | lm loss value: 1.186810E+01 | lm loss PPL: 1.426426E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:29:24 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:29:47 | validation loss at iteration 298 | lm loss value: 1.186642E+01 | lm loss PPL: 1.424034E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: iteration 300/ 10000 | consumed samples: 76800 | consumed tokens: 157286400 | elapsed time per iteration (ms): 11725.2 | learning rate: 0.000E+00 | global batch size: 256 | loss scale: 0.0 | grad norm: 1.000 | num zeros: 1.0 | number of skipped iterations: 10 | number of nan iterations: 0 | samples per second: 21.833 | TFLOPs: 93.27 | |
|
100.83.134.148: 2024-04-20 16:29:47 Start last rank evaluation |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:30:11 | validation loss at iteration 300 | lm loss value: 1.187100E+01 | lm loss PPL: 1.430574E+05 | |
|
100.83.134.148: --------------------------------------------------------------------------------------------------------------------- |
|
100.83.134.148: 2024-04-20 16:30:11 Start last rank evaluation |
|
100.83.134.153: [2024-04-20 16:30:29,636] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 3498 |
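The run ends here with the launcher's sigkill handler tearing down the worker subprocesses. To tabulate the lm loss / PPL trend from a saved copy of this output, a small parser along the following lines can be used; this is a sketch under the assumption that the console output was saved to a file (the path "train.log" is hypothetical), not part of the original tooling:

```python
import re

# Match the "validation loss at iteration N | lm loss value: ... | lm loss PPL: ..."
# lines in the format shown above.
pattern = re.compile(
    r"validation loss at iteration\s+(\d+)\s+\|\s+lm loss value:\s+([\d.E+-]+)"
    r"\s+\|\s+lm loss PPL:\s+([\d.E+-]+)"
)

with open("train.log") as f:          # hypothetical path to a saved copy of this log
    for line in f:
        m = pattern.search(line)
        if m:
            it = int(m.group(1))
            loss = float(m.group(2))
            ppl = float(m.group(3))
            print(f"iter {it:>5d}  lm_loss {loss:.5f}  ppl {ppl:,.0f}")
```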
|
|