applied-ai-018 committed
Commit 076fce5 · verified · 1 Parent(s): ffa40b7

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. 132node/ds_config.json +19 -0
  2. 132node/log.txt +1173 -0
  3. llama13b_5M/checkpoints_zero_stage_2/latest +1 -0
  4. llama13b_5M/checkpoints_zero_stage_2/latest_checkpointed_iteration.txt +1 -0
  5. llama13b_5M/checkpoints_zero_stage_2/zero_to_fp32.py +592 -0
  6. llama13b_5M/ds_config.json +19 -0
  7. llama13b_5M/first_run.txt +0 -0
  8. llama13b_5M/log.txt +0 -0
  9. llama13b_multiling_800M/13-05-2024-09:15:33/ds_config.json +19 -0
  10. llama13b_multiling_800M/13-05-2024-09:15:33/log.txt +192 -0
  11. llama13b_multiling_800M/13-05-2024-09:15:33/mds_to_hf_llama_custom.json +40 -0
  12. llama13b_multiling_800M/13-05-2024-09:17:37/ds_config.json +19 -0
  13. llama13b_multiling_800M/13-05-2024-09:17:37/log.txt +192 -0
  14. llama13b_multiling_800M/13-05-2024-09:17:37/mds_to_hf_llama_custom.json +40 -0
  15. llama13b_multiling_800M/13-05-2024-09:19:04/ds_config.json +19 -0
  16. llama13b_multiling_800M/13-05-2024-09:19:04/log.txt +338 -0
  17. llama13b_multiling_800M/13-05-2024-09:19:04/mds_to_hf_llama_custom.json +40 -0
  18. llama13b_multiling_800M/13-05-2024-09:21:14/ds_config.json +19 -0
  19. llama13b_multiling_800M/13-05-2024-09:21:14/log.txt +352 -0
  20. llama13b_multiling_800M/13-05-2024-09:21:14/mds_to_hf_llama_custom.json +40 -0
  21. llama13b_multiling_800M/13-05-2024-09:23:20/ds_config.json +19 -0
  22. llama13b_multiling_800M/13-05-2024-09:23:20/log.txt +352 -0
  23. llama13b_multiling_800M/13-05-2024-09:23:20/mds_to_hf_llama_custom.json +40 -0
  24. llama13b_multiling_800M/13-05-2024-09:29:05/ds_config.json +19 -0
  25. llama13b_multiling_800M/13-05-2024-09:29:05/log.txt +192 -0
  26. llama13b_multiling_800M/13-05-2024-09:29:05/mds_to_hf_llama_custom.json +40 -0
  27. llama13b_multiling_800M/13-05-2024-09:32:36/ds_config.json +19 -0
  28. llama13b_multiling_800M/13-05-2024-09:32:36/log.txt +192 -0
  29. llama13b_multiling_800M/13-05-2024-09:32:36/mds_to_hf_llama_custom.json +40 -0
  30. llama13b_multiling_800M/13-05-2024-09:34:09/ds_config.json +19 -0
  31. llama13b_multiling_800M/13-05-2024-09:34:09/log.txt +656 -0
  32. llama13b_multiling_800M/13-05-2024-09:34:09/mds_to_hf_llama_custom.json +40 -0
  33. llama13b_multiling_800M/13-05-2024-09:58:53/ds_config.json +19 -0
  34. llama13b_multiling_800M/13-05-2024-09:58:53/log.txt +128 -0
  35. llama13b_multiling_800M/13-05-2024-09:58:53/mds_to_hf_llama_custom.json +40 -0
  36. llama13b_multiling_800M/13-05-2024-09:59:29/ds_config.json +19 -0
  37. llama13b_multiling_800M/13-05-2024-09:59:29/log.txt +0 -0
  38. llama13b_multiling_800M/13-05-2024-09:59:29/mds_to_hf_llama_custom.json +40 -0
  39. llama13b_multiling_800M/13-05-2024-11:50:01/ds_config.json +19 -0
  40. llama13b_multiling_800M/13-05-2024-11:50:01/log.txt +0 -0
  41. llama13b_multiling_800M/13-05-2024-11:50:01/mds_to_hf_llama_custom.json +40 -0
  42. llama13b_multiling_800M/13-05-2024-11:52:31/ds_config.json +19 -0
  43. llama13b_multiling_800M/13-05-2024-11:52:31/log.txt +144 -0
  44. llama13b_multiling_800M/13-05-2024-11:52:31/mds_to_hf_llama_custom.json +40 -0
  45. llama13b_multiling_800M/13-05-2024-11:55:44/ds_config.json +19 -0
  46. llama13b_multiling_800M/13-05-2024-11:55:44/log.txt +0 -0
  47. llama13b_multiling_800M/13-05-2024-11:55:44/mds_to_hf_llama_custom.json +40 -0
  48. llama13b_x/ds_config.json +19 -0
  49. llama13b_x/log.txt +857 -0
  50. univ_ckpt_new/zero/10.attention.dense.weight/exp_avg.pt +3 -0
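
Note: item 5 above ships DeepSpeed's stock zero_to_fp32.py helper next to the ZeRO stage-2 checkpoint. As a hedged sketch (not part of this commit) of how such sharded states are typically consolidated, assuming the documented deepspeed.utils.zero_to_fp32 helper and an illustrative output filename:

```python
# Sketch only: consolidate a ZeRO checkpoint such as the one under
# llama13b_5M/checkpoints_zero_stage_2/ into a single fp32 state_dict.
# Assumes DeepSpeed's documented zero_to_fp32 API; the output path is hypothetical.
import torch
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

ckpt_dir = "llama13b_5M/checkpoints_zero_stage_2"  # contains 'latest' plus the shard folders
state_dict = get_fp32_state_dict_from_zero_checkpoint(ckpt_dir)  # merges shards on CPU (needs ample RAM)
torch.save(state_dict, "pytorch_model_fp32.bin")   # illustrative consolidated file name
```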
132node/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "train_batch_size": 256,
+   "train_micro_batch_size_per_gpu": 1,
+   "steps_per_print": 10,
+   "gradient_clipping": 1.0,
+   "zero_optimization": {
+     "stage": 0
+   },
+   "bf16": {
+     "enabled": true,
+     "accumulate_grads_via_hooks": true
+   },
+   "fp16": {"enabled": false},
+   "wall_clock_breakdown": false,
+   "pipeline": {
+     "pipe_partitioned": false,
+     "grad_partitioned": false
+   }
+ }
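
This is the JSON that the launcher in 132node/log.txt passes via --deepspeed_config. A minimal sketch of how a config like this is typically consumed, purely illustrative: the toy module below is a placeholder, not the Megatron-DeepSpeed pretrain_llama.py code referenced by this commit, and the bf16 "accumulate_grads_via_hooks" key is specific to the Habana DeepSpeed fork shown in the log.

```python
# Illustrative sketch: hand a ds_config.json like the one above to deepspeed.initialize.
# Assumes the matching (Habana-fork) DeepSpeed build; model and training loop are placeholders.
import json
import torch
import deepspeed

with open("132node/ds_config.json") as f:
    ds_config = json.load(f)  # train_batch_size=256, micro batch 1, bf16, ZeRO stage 0

model = torch.nn.Linear(5120, 5120)  # placeholder module, not the 13B LLaMA model

# deepspeed.initialize builds the engine (bf16, gradient clipping, batch sizing) from the
# config dict; the actual run in log.txt passes the file path on the command line instead.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```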
132node/log.txt ADDED
@@ -0,0 +1,1173 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-04-03 22:02:56,583] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-04-03 22:02:57,638] [WARNING] [runner.py:206:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
5
+ [2024-04-03 22:02:57,702] [INFO] [runner.py:585:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgMywgNCwgNSwgNiwgN119 --master_addr=127.0.0.1 --master_port=29500 --no_python --no_local_rank --enable_each_rank_log=None /usr/bin/bash -c cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 2 --pipeline-model-parallel-size 2 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 40 --hidden-size 5120 --ffn-hidden-size 13824 --num-attention-heads 40 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 100 --data-path /data/arxiv/tokenized_text_document --vocab-file /data/arxiv/gpt2-vocab.json --merge-file /data/arxiv/gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/132node/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/132node/checkpoints --deepspeed_config=/data/output/132node/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/132node/checkpoints --save-interval 2000 --verify-checkpoint --verify-checkpoint-model-type LLAMA
6
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
7
+ warnings.warn(
8
+ [2024-04-03 22:02:59,064] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
9
+ [2024-04-03 22:03:00,115] [INFO] [launch.py:146:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3, 4, 5, 6, 7]}
10
+ [2024-04-03 22:03:00,115] [INFO] [launch.py:152:main] nnodes=1, num_local_procs=8, node_rank=0
11
+ [2024-04-03 22:03:00,115] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3, 4, 5, 6, 7]})
12
+ [2024-04-03 22:03:00,115] [INFO] [launch.py:164:main] dist_world_size=8
13
+ [2024-04-03 22:03:00,115] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
14
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
15
+ warnings.warn(
16
+ [2024-04-03 22:03:01,731] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
17
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
18
+ warnings.warn(
19
+ [2024-04-03 22:03:01,736] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
20
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
21
+ warnings.warn(
22
+ [2024-04-03 22:03:01,736] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
23
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
24
+ warnings.warn(
25
+ [2024-04-03 22:03:01,738] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
26
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
27
+ warnings.warn(
28
+ [2024-04-03 22:03:01,740] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
29
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
30
+ warnings.warn(
31
+ [2024-04-03 22:03:01,749] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
32
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
33
+ warnings.warn(
34
+ [2024-04-03 22:03:01,830] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
35
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
36
+ warnings.warn(
37
+ [2024-04-03 22:03:01,845] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
38
+ --------------------------------------------------
39
+ DeepSpeed C++/CUDA extension op report
40
+ --------------------------------------------------
41
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
42
+ runtime if needed. Op compatibility means that your system
43
+ meet the required dependencies to JIT install the op.
44
+ --------------------------------------------------
45
+ JIT compiled ops requires ninja
46
+ ninja .................. [OKAY]
47
+ --------------------------------------------------
48
+ op name ................ installed .. compatible
49
+ --------------------------------------------------
50
+ cpu_adam ............... [NO] ....... [OKAY]
51
+ fused_adam ............. [NO] ....... [OKAY]
52
+ deepspeed_not_implemented [NO] ....... [OKAY]
53
+ transformer_inference .. [NO] ....... [OKAY]
54
+ --------------------------------------------------
55
+ DeepSpeed general environment info:
56
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
57
+ torch version .................... 2.1.1a0+gitb51c9f6
58
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
59
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
60
+ deepspeed wheel compiled w. ...... torch 2.1
61
+ shared memory (/dev/shm) size .... 503.72 GB
62
+ fatal: detected dubious ownership in repository at '/Model-References'
63
+ To add an exception for this directory, call:
64
+
65
+ git config --global --add safe.directory /Model-References
66
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
67
+ --------------------------------------------------
68
+ DeepSpeed C++/CUDA extension op report
69
+ --------------------------------------------------
70
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
71
+ runtime if needed. Op compatibility means that your system
72
+ meet the required dependencies to JIT install the op.
73
+ --------------------------------------------------
74
+ JIT compiled ops requires ninja
75
+ ninja .................. [OKAY]
76
+ --------------------------------------------------
77
+ op name ................ installed .. compatible
78
+ --------------------------------------------------
79
+ cpu_adam ............... [NO] ....... [OKAY]
80
+ fused_adam ............. [NO] ....... [OKAY]
81
+ deepspeed_not_implemented [NO] ....... [OKAY]
82
+ transformer_inference .. [NO] ....... [OKAY]
83
+ --------------------------------------------------
84
+ DeepSpeed general environment info:
85
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
86
+ torch version .................... 2.1.1a0+gitb51c9f6
87
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
88
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
89
+ deepspeed wheel compiled w. ...... torch 2.1
90
+ shared memory (/dev/shm) size .... 503.72 GB
91
+ fatal: detected dubious ownership in repository at '/Model-References'
92
+ To add an exception for this directory, call:
93
+
94
+ git config --global --add safe.directory /Model-References
95
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
96
+ --------------------------------------------------
97
+ DeepSpeed C++/CUDA extension op report
98
+ --------------------------------------------------
99
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
100
+ runtime if needed. Op compatibility means that your system
101
+ meet the required dependencies to JIT install the op.
102
+ --------------------------------------------------
103
+ JIT compiled ops requires ninja
104
+ ninja .................. [OKAY]
105
+ --------------------------------------------------
106
+ op name ................ installed .. compatible
107
+ --------------------------------------------------
108
+ cpu_adam ............... [NO] ....... [OKAY]
109
+ fused_adam ............. [NO] ....... [OKAY]
110
+ deepspeed_not_implemented [NO] ....... [OKAY]
111
+ transformer_inference .. [NO] ....... [OKAY]
112
+ --------------------------------------------------
113
+ DeepSpeed general environment info:
114
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
115
+ torch version .................... 2.1.1a0+gitb51c9f6
116
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
117
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
118
+ deepspeed wheel compiled w. ...... torch 2.1
119
+ shared memory (/dev/shm) size .... 503.72 GB
120
+ --------------------------------------------------
121
+ DeepSpeed C++/CUDA extension op report
122
+ --------------------------------------------------
123
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
124
+ runtime if needed. Op compatibility means that your system
125
+ meet the required dependencies to JIT install the op.
126
+ --------------------------------------------------
127
+ JIT compiled ops requires ninja
128
+ ninja .................. [OKAY]
129
+ --------------------------------------------------
130
+ op name ................ installed .. compatible
131
+ --------------------------------------------------
132
+ cpu_adam ............... [NO] ....... [OKAY]
133
+ fused_adam ............. [NO] ....... [OKAY]
134
+ deepspeed_not_implemented [NO] ....... [OKAY]
135
+ transformer_inference .. [NO] ....... [OKAY]
136
+ --------------------------------------------------
137
+ DeepSpeed general environment info:
138
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
139
+ torch version .................... 2.1.1a0+gitb51c9f6
140
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
141
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
142
+ deepspeed wheel compiled w. ...... torch 2.1
143
+ shared memory (/dev/shm) size .... 503.72 GB
144
+ --------------------------------------------------
145
+ DeepSpeed C++/CUDA extension op report
146
+ --------------------------------------------------
147
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
148
+ runtime if needed. Op compatibility means that your system
149
+ meet the required dependencies to JIT install the op.
150
+ --------------------------------------------------
151
+ JIT compiled ops requires ninja
152
+ ninja .................. [OKAY]
153
+ --------------------------------------------------
154
+ op name ................ installed .. compatible
155
+ --------------------------------------------------
156
+ cpu_adam ............... [NO] ....... [OKAY]
157
+ fused_adam ............. [NO] ....... [OKAY]
158
+ deepspeed_not_implemented [NO] ....... [OKAY]
159
+ transformer_inference .. [NO] ....... [OKAY]
160
+ --------------------------------------------------
161
+ DeepSpeed general environment info:
162
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
163
+ torch version .................... 2.1.1a0+gitb51c9f6
164
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
165
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
166
+ deepspeed wheel compiled w. ...... torch 2.1
167
+ shared memory (/dev/shm) size .... 503.72 GB
168
+ fatal: detected dubious ownership in repository at '/Model-References'
169
+ To add an exception for this directory, call:
170
+
171
+ git config --global --add safe.directory /Model-References
172
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
173
+ fatal: detected dubious ownership in repository at '/Model-References'
174
+ To add an exception for this directory, call:
175
+
176
+ git config --global --add safe.directory /Model-References
177
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
178
+ fatal: detected dubious ownership in repository at '/Model-References'
179
+ To add an exception for this directory, call:
180
+
181
+ git config --global --add safe.directory /Model-References
182
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
183
+ --------------------------------------------------
184
+ DeepSpeed C++/CUDA extension op report
185
+ --------------------------------------------------
186
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
187
+ runtime if needed. Op compatibility means that your system
188
+ meet the required dependencies to JIT install the op.
189
+ --------------------------------------------------
190
+ JIT compiled ops requires ninja
191
+ ninja .................. [OKAY]
192
+ --------------------------------------------------
193
+ op name ................ installed .. compatible
194
+ --------------------------------------------------
195
+ cpu_adam ............... [NO] ....... [OKAY]
196
+ fused_adam ............. [NO] ....... [OKAY]
197
+ deepspeed_not_implemented [NO] ....... [OKAY]
198
+ transformer_inference .. [NO] ....... [OKAY]
199
+ --------------------------------------------------
200
+ DeepSpeed general environment info:
201
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
202
+ torch version .................... 2.1.1a0+gitb51c9f6
203
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
204
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
205
+ deepspeed wheel compiled w. ...... torch 2.1
206
+ shared memory (/dev/shm) size .... 503.72 GB
207
+ --------------------------------------------------
208
+ DeepSpeed C++/CUDA extension op report
209
+ --------------------------------------------------
210
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
211
+ runtime if needed. Op compatibility means that your system
212
+ meet the required dependencies to JIT install the op.
213
+ --------------------------------------------------
214
+ JIT compiled ops requires ninja
215
+ ninja .................. [OKAY]
216
+ --------------------------------------------------
217
+ op name ................ installed .. compatible
218
+ --------------------------------------------------
219
+ cpu_adam ............... [NO] ....... [OKAY]
220
+ fused_adam ............. [NO] ....... [OKAY]
221
+ deepspeed_not_implemented [NO] ....... [OKAY]
222
+ transformer_inference .. [NO] ....... [OKAY]
223
+ --------------------------------------------------
224
+ DeepSpeed general environment info:
225
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
226
+ torch version .................... 2.1.1a0+gitb51c9f6
227
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
228
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
229
+ deepspeed wheel compiled w. ...... torch 2.1
230
+ shared memory (/dev/shm) size .... 503.72 GB
231
+ fatal: detected dubious ownership in repository at '/Model-References'
232
+ To add an exception for this directory, call:
233
+
234
+ git config --global --add safe.directory /Model-References
235
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
236
+ using world size: 8, data-parallel-size: 2, tensor-model-parallel size: 2, pipeline-model-parallel size: 2
237
+ accumulate and all-reduce gradients in fp32 for bfloat16 data type.
238
+ using torch.bfloat16 for parameters ...
239
+ ------------------------ arguments ------------------------
240
+ accumulate_allreduce_grads_in_fp32 .............. True
241
+ activation_func_type ............................ swiglu
242
+ adam_beta1 ...................................... 0.9
243
+ adam_beta2 ...................................... 0.95
244
+ adam_eps ........................................ 1e-06
245
+ adlr_autoresume ................................. False
246
+ adlr_autoresume_interval ........................ 1000
247
+ aml_data_download_path .......................... None
248
+ apply_layernorm_weight_plus_one ................. False
249
+ apply_query_key_layer_scaling ................... True
250
+ apply_residual_connection_post_layernorm ........ False
251
+ attention_dropout ............................... 0.1
252
+ attention_softmax_in_fp32 ....................... False
253
+ bert_binary_head ................................ True
254
+ bert_load ....................................... None
255
+ bf16 ............................................ True
256
+ bias_dropout_fusion ............................. False
257
+ bias_gelu_fusion ................................ False
258
+ biencoder_projection_dim ........................ 0
259
+ biencoder_shared_query_context_model ............ False
260
+ block_data_path ................................. None
261
+ cache_fp8_weight ................................ False
262
+ cache_fp8_weight_fwd ............................ True
263
+ checkpoint_activations .......................... False
264
+ checkpoint_activations_granularity .............. full
265
+ checkpoint_in_cpu ............................... False
266
+ checkpoint_num_layers ........................... 1
267
+ clearml_config_path ............................. None
268
+ clearml_continue_exp ............................ False
269
+ clearml_exp_name ................................ None
270
+ clip_grad ....................................... 1.0
271
+ compression_training ............................ False
272
+ consumed_train_samples .......................... 0
273
+ consumed_train_tokens ........................... 0
274
+ consumed_valid_samples .......................... 0
275
+ contigious_checkpointing ........................ False
276
+ cpu_optimizer ................................... False
277
+ cpu_torch_adam .................................. False
278
+ create_moe_param_group .......................... False
279
+ curriculum_learning ............................. False
280
+ data_idx_path ................................... None
281
+ data_impl ....................................... infer
282
+ data_parallel_size .............................. 2
283
+ data_path ....................................... ['/data/arxiv/tokenized_text_document']
284
+ data_sharding ................................... True
285
+ dataloader_type ................................. single
286
+ DDP_impl ........................................ local
287
+ decoder_seq_length .............................. None
288
+ deepscale ....................................... False
289
+ deepscale_config ................................ None
290
+ deepspeed ....................................... True
291
+ deepspeed_activation_checkpointing .............. False
292
+ deepspeed_config ................................ /data/output/132node/ds_config.json
293
+ deepspeed_mpi ................................... False
294
+ distribute_checkpointed_activations ............. False
295
+ distributed_backend ............................. hccl
296
+ do_layernorm_bias_weight_decay .................. False
297
+ do_pretrain_validation .......................... False
298
+ ds_inference .................................... False
299
+ ds_pipeline_enabled ............................. True
300
+ embed_layernorm ................................. False
301
+ embedding_path .................................. None
302
+ enable_expert_tensor_parallelism ................ False
303
+ encoder_seq_length .............................. 2048
304
+ eod_mask_loss ................................... False
305
+ eval_interval ................................... 100
306
+ eval_iters ...................................... 10
307
+ eval_loss_exit_value ............................ None
308
+ eval_micro_batch_size ........................... 1
309
+ evidence_data_path .............................. None
310
+ exit_duration_in_mins ........................... None
311
+ exit_interval ................................... 0
312
+ expert_interval ................................. 2
313
+ ffn_hidden_coeff ................................ 2.6666666666666665
314
+ ffn_hidden_size ................................. 13824
315
+ finetune ........................................ False
316
+ fix_position_emb_redundant_alloc ................ False
317
+ flatten_linear_operands ......................... False
318
+ fp16 ............................................ False
319
+ fp16_lm_cross_entropy ........................... False
320
+ fp32_residual_connection ........................ False
321
+ global_batch_size ............................... 256
322
+ hidden_dropout .................................. 0.1
323
+ hidden_size ..................................... 5120
324
+ hidden_size_teacher ............................. None
325
+ hpu_deterministic ............................... True
326
+ hpu_fp8_format .................................. e5m2
327
+ hpu_fp8_measure_interval ........................ 10
328
+ hysteresis ...................................... 2
329
+ ict_head_size ................................... None
330
+ ict_load ........................................ None
331
+ img_dim ......................................... 224
332
+ indexer_batch_size .............................. 128
333
+ indexer_log_interval ............................ 1000
334
+ inference ....................................... False
335
+ init_method_std ................................. 0.02
336
+ init_method_xavier_uniform ...................... False
337
+ initial_loss_scale .............................. 4294967296
338
+ kd .............................................. False
339
+ kd_alpha_ce ..................................... 1
340
+ kd_beta_ce ...................................... 1
341
+ kd_temp ......................................... 1.0
342
+ kill_switch_path ................................ None
343
+ kv_channels ..................................... 128
344
+ layernorm_epsilon ............................... 1e-06
345
+ layernorm_type .................................. rmsnorm
346
+ lazy_mpu_init ................................... None
347
+ load ............................................ /data/output/132node/checkpoints
348
+ load_teacher .................................... None
349
+ local_rank ...................................... 0
350
+ log_batch_size_to_tensorboard ................... True
351
+ log_bwd_grads ................................... False
352
+ log_fwd_activations ............................. False
353
+ log_interval .................................... 10
354
+ log_learning_rate_to_tensorboard ................ True
355
+ log_loss_scale_to_tensorboard ................... True
356
+ log_model_inputs ................................ False
357
+ log_num_zeros_in_grad ........................... False
358
+ log_optimizer_states_to_tensorboard ............. False
359
+ log_params_norm ................................. False
360
+ log_timers_to_tensorboard ....................... True
361
+ log_validation_ppl_to_tensorboard ............... True
362
+ loss_scale ...................................... None
363
+ loss_scale_window ............................... 1000
364
+ lr .............................................. 0.0003
365
+ lr_decay_iters .................................. None
366
+ lr_decay_samples ................................ None
367
+ lr_decay_style .................................. cosine
368
+ lr_decay_tokens ................................. None
369
+ lr_warmup_fraction .............................. None
370
+ lr_warmup_iters ................................. 2000
371
+ lr_warmup_samples ............................... 0
372
+ lr_warmup_tokens ................................ None
373
+ make_vocab_size_divisible_by .................... 128
374
+ mask_prob ....................................... 0.15
375
+ mask_tensor_adding .............................. False
376
+ masked_softmax_fusion ........................... False
377
+ max_position_embeddings ......................... None
378
+ memory_centric_tiled_linear ..................... False
379
+ merge_file ...................................... /data/arxiv/gpt2-merges.txt
380
+ micro_batch_size ................................ 1
381
+ min_loss_scale .................................. 1.0
382
+ min_lr .......................................... 0.0
383
+ mlp_type ........................................ standard
384
+ mmap_warmup ..................................... False
385
+ moe_eval_capacity_factor ........................ 1.0
386
+ moe_expert_parallel_size ........................ 1
387
+ moe_loss_coeff .................................. 0.1
388
+ moe_min_capacity ................................ 4
389
+ moe_token_dropping .............................. True
390
+ moe_train_capacity_factor ....................... 1.0
391
+ mos ............................................. False
392
+ no_bias ......................................... True
393
+ no_cuda ......................................... False
394
+ no_load_lr_state ................................ False
395
+ no_load_optim ................................... None
396
+ no_load_rng ..................................... None
397
+ no_pipeline_parallel ............................ False
398
+ no_save_optim ................................... None
399
+ no_save_rng ..................................... None
400
+ no_scaled_init .................................. False
401
+ num_attention_heads ............................. 40
402
+ num_attention_heads_teacher ..................... None
403
+ num_channels .................................... 3
404
+ num_classes ..................................... 1000
405
+ num_experts ..................................... [1]
406
+ num_experts_teacher ............................. [1]
407
+ num_key_value_heads ............................. 40
408
+ num_layers ...................................... 40
409
+ num_layers_per_virtual_pipeline_stage ........... None
410
+ num_layers_teacher .............................. None
411
+ num_workers ..................................... 2
412
+ onnx_safe ....................................... None
413
+ openai_gelu ..................................... False
414
+ optimizer ....................................... adamw
415
+ override_lr_scheduler ........................... False
416
+ params_dtype .................................... torch.bfloat16
417
+ partition_activations ........................... False
418
+ patch_dim ....................................... 16
419
+ pipeline_model_parallel_size .................... 2
420
+ position_embedding_type ......................... PositionEmbeddingType.rotary
421
+ profile ......................................... None
422
+ profile_backward ................................ False
423
+ profile_steps ................................... 2,3
424
+ query_in_block_prob ............................. 0.1
425
+ rampup_batch_size ............................... None
426
+ rank ............................................ 0
427
+ remote_device ................................... none
428
+ reset_attention_mask ............................ False
429
+ reset_iteration ................................. False
430
+ reset_position_ids .............................. False
431
+ retriever_report_topk_accuracies ................ []
432
+ retriever_score_scaling ......................... False
433
+ retriever_seq_length ............................ 256
434
+ sample_rate ..................................... 1.0
435
+ save ............................................ /data/output/132node/checkpoints
436
+ save_interval ................................... 2000
437
+ scatter_gather_tensors_in_pipeline .............. True
438
+ scattered_embeddings ............................ False
439
+ seed ............................................ 1234
440
+ seq_length ...................................... 2048
441
+ sequence_parallel ............................... True
442
+ sgd_momentum .................................... 0.9
443
+ short_seq_prob .................................. 0.1
444
+ skip_train ...................................... False
445
+ split ........................................... 969, 30, 1
446
+ split_transformers .............................. False
447
+ synchronize_each_layer .......................... False
448
+ tensor_logger_max_iter .......................... 0
449
+ tensor_logger_path .............................. None
450
+ tensor_model_parallel_size ...................... 2
451
+ tensorboard_dir ................................. /data/output/132node/tensorboard
452
+ tensorboard_log_interval ........................ 1
453
+ tensorboard_queue_size .......................... 1000
454
+ test_data_path .................................. None
455
+ tile_factor ..................................... 1
456
+ titles_data_path ................................ None
457
+ tokenizer_eod_id ................................ None
458
+ tokenizer_model_file ............................ None
459
+ tokenizer_type .................................. GPT2BPETokenizer
460
+ topk ............................................ 1
461
+ train_data_path ................................. None
462
+ train_iters ..................................... 10000
463
+ train_samples ................................... None
464
+ train_tokens .................................... None
465
+ universal_checkpoint ............................ False
466
+ use_checkpoint_lr_scheduler ..................... False
467
+ use_contiguous_buffers_in_ddp ................... True
468
+ use_cpu_initialization .......................... None
469
+ use_fused_sdpa .................................. True
470
+ use_fused_sdpa_with_recompute ................... False
471
+ use_hpu ......................................... True
472
+ use_hpu_fp8_transformer_engine .................. False
473
+ use_hpu_graphs .................................. False
474
+ use_one_sent_docs ............................... False
475
+ use_pin_memory .................................. False
476
+ use_rotary_v2 ................................... False
477
+ use_seq_len_plus_one_tokens ..................... True
478
+ use_torch_compile ............................... False
479
+ use_tutel ....................................... False
480
+ valid_data_path ................................. None
481
+ verify_checkpoint ............................... True
482
+ verify_checkpoint_model_type .................... LLAMA
483
+ verify_tp_workers ............................... False
484
+ verify_tp_workers_hash .......................... False
485
+ virtual_pipeline_model_parallel_size ............ None
486
+ vocab_extra_ids ................................. 0
487
+ vocab_file ...................................... /data/arxiv/gpt2-vocab.json
488
+ weight_decay .................................... 0.1
489
+ world_size ...................................... 8
490
+ zero_allgather_bucket_size ...................... 0.0
491
+ zero_contigious_gradients ....................... False
492
+ zero_reduce_bucket_size ......................... 0.0
493
+ zero_reduce_scatter ............................. False
494
+ zero_stage ...................................... 0
495
+ -------------------- end of arguments ---------------------
496
+ setting number of micro-batches to constant 128
497
+ setting number of micro-batches to constant 128
498
+ > building GPT2BPETokenizer tokenizer ...
499
+ fatal: detected dubious ownership in repository at '/Model-References'
500
+ To add an exception for this directory, call:
501
+
502
+ git config --global --add safe.directory /Model-References
503
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
504
+ _initialize_distributed: Initializing with below params:
505
+ args.local_rank: 6
506
+ args.world_size: 8
507
+ args.rank: 6
508
+ args.distributed_backend: hccl
509
+ _initialize_distributed: Initializing with below params:
510
+ args.local_rank: 1
511
+ args.world_size: 8
512
+ args.rank: 1
513
+ args.distributed_backend: hccl
514
+ _initialize_distributed: Initializing with below params:
515
+ args.local_rank: 3
516
+ args.world_size: 8
517
+ args.rank: 3
518
+ args.distributed_backend: hccl
519
+ _initialize_distributed: Initializing with below params:
520
+ args.local_rank: 2
521
+ args.world_size: 8
522
+ args.rank: 2
523
+ args.distributed_backend: hccl
524
+ --------------------------------------------------
525
+ DeepSpeed C++/CUDA extension op report
526
+ --------------------------------------------------
527
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
528
+ runtime if needed. Op compatibility means that your system
529
+ meet the required dependencies to JIT install the op.
530
+ --------------------------------------------------
531
+ JIT compiled ops requires ninja
532
+ ninja .................. [OKAY]
533
+ --------------------------------------------------
534
+ op name ................ installed .. compatible
535
+ --------------------------------------------------
536
+ cpu_adam ............... [NO] ....... [OKAY]
537
+ fused_adam ............. [NO] ....... [OKAY]
538
+ deepspeed_not_implemented [NO] ....... [OKAY]
539
+ transformer_inference .. [NO] ....... [OKAY]
540
+ --------------------------------------------------
541
+ DeepSpeed general environment info:
542
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
543
+ torch version .................... 2.1.1a0+gitb51c9f6
544
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
545
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
546
+ deepspeed wheel compiled w. ...... torch 2.1
547
+ shared memory (/dev/shm) size .... 503.72 GB
548
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
549
+ warnings.warn(
550
+ hccl device_count: 8
551
+ [2024-04-03 22:03:03,241] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
552
+ [2024-04-03 22:03:03,241] [INFO] [comm.py:637:init_distributed] cdb=None
553
+ fatal: detected dubious ownership in repository at '/Model-References'
554
+ To add an exception for this directory, call:
555
+
556
+ git config --global --add safe.directory /Model-References
557
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
558
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
559
+ warnings.warn(
560
+ hccl device_count: 8
561
+ [2024-04-03 22:03:03,252] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
562
+ [2024-04-03 22:03:03,252] [INFO] [comm.py:637:init_distributed] cdb=None
563
+ _initialize_distributed: Initializing with below params:
564
+ args.local_rank: 4
565
+ args.world_size: 8
566
+ args.rank: 4
567
+ args.distributed_backend: hccl
568
+ > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
569
+ _initialize_distributed: Initializing with below params:
570
+ args.local_rank: 0
571
+ args.world_size: 8
572
+ args.rank: 0
573
+ args.distributed_backend: hccl
574
+ > setting tensorboard ...
575
+ _initialize_distributed: Initializing with below params:
576
+ args.local_rank: 7
577
+ args.world_size: 8
578
+ args.rank: 7
579
+ args.distributed_backend: hccl
580
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
581
+ warnings.warn(
582
+ hccl device_count: 8
583
+ [2024-04-03 22:03:03,289] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
584
+ [2024-04-03 22:03:03,289] [INFO] [comm.py:637:init_distributed] cdb=None
585
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
586
+ warnings.warn(
587
+ hccl device_count: 8
588
+ [2024-04-03 22:03:03,296] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
589
+ [2024-04-03 22:03:03,296] [INFO] [comm.py:637:init_distributed] cdb=None
590
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
591
+ warnings.warn(
592
+ hccl device_count: 8
593
+ [2024-04-03 22:03:03,326] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
594
+ [2024-04-03 22:03:03,327] [INFO] [comm.py:637:init_distributed] cdb=None
595
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
596
+ warnings.warn(
597
+ hccl device_count: 8
598
+ > initializing torch distributed ...
599
+ [2024-04-03 22:03:03,343] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
600
+ [2024-04-03 22:03:03,343] [INFO] [comm.py:637:init_distributed] cdb=None
601
+ [2024-04-03 22:03:03,343] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend hccl
602
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
603
+ warnings.warn(
604
+ hccl device_count: 8
605
+ [2024-04-03 22:03:03,345] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
606
+ [2024-04-03 22:03:03,345] [INFO] [comm.py:637:init_distributed] cdb=None
607
+ _initialize_distributed: Initializing with below params:
608
+ args.local_rank: 5
609
+ args.world_size: 8
610
+ args.rank: 5
611
+ args.distributed_backend: hccl
612
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
613
+ warnings.warn(
614
+ hccl device_count: 8
615
+ [2024-04-03 22:03:03,441] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
616
+ [2024-04-03 22:03:03,441] [INFO] [comm.py:637:init_distributed] cdb=None
617
+ > initializing tensor model parallel with size 2
618
+ > initializing pipeline model parallel with size 2
619
+ > setting random seeds to 1234 ...
620
+ > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
621
+ ============================= HABANA PT BRIDGE CONFIGURATION ===========================
622
+ PT_HPU_LAZY_MODE = 1
623
+ PT_RECIPE_CACHE_PATH =
624
+ PT_CACHE_FOLDER_DELETE = 0
625
+ PT_HPU_RECIPE_CACHE_CONFIG =
626
+ PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807
627
+ PT_HPU_LAZY_ACC_PAR_MODE = 0
628
+ PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES = 0
629
+ ---------------------------: System Configuration :---------------------------
630
+ Num CPU Cores : 160
631
+ CPU RAM : 1056375244 KB
632
+ ------------------------------------------------------------------------------
633
+ > compiling dataset index builder ...
634
+ make: Entering directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data'
635
+ make: Nothing to be done for 'default'.
636
+ make: Leaving directory '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data'
637
+ >>> done with dataset index builder. Compilation time: 0.072 seconds
638
+ WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
639
+ > compiling and loading fused kernels ...
640
+ >>> done with compiling and loading fused kernels. Compilation time: 0.005 seconds
641
+ time to initialize megatron (seconds): 29.272
642
+ [after megatron is initialized] datetime: 2024-04-03 22:03:09
643
+ building LLaMA model ...
644
+ *************** Using FusedSDPA ******************
645
+ *************** Using FusedSDPA ******************
646
+ *************** Using FusedSDPA ******************
647
+ *************** Using FusedSDPA ******************
648
+ *************** Using FusedSDPA ******************
649
+ *************** Using FusedSDPA ******************
650
+ *************** Using FusedSDPA ******************
651
+ *************** Using FusedSDPA ******************
652
+ *************** Using FusedSDPA ******************
653
+ *************** Using FusedSDPA ******************
654
+ *************** Using FusedSDPA ******************
655
+ *************** Using FusedSDPA ******************
656
+ *************** Using FusedSDPA ******************
657
+ *************** Using FusedSDPA ******************
658
+ *************** Using FusedSDPA ******************
659
+ *************** Using FusedSDPA ******************
660
+ *************** Using FusedSDPA ******************
661
+ *************** Using FusedSDPA ******************
662
+ *************** Using FusedSDPA ******************
663
+ *************** Using FusedSDPA ******************
664
+ *************** Using FusedSDPA ******************
665
+ *************** Using FusedSDPA ******************
666
+ *************** Using FusedSDPA ******************
667
+ *************** Using FusedSDPA ******************
668
+ *************** Using FusedSDPA ******************
669
+ *************** Using FusedSDPA ******************
670
+ *************** Using FusedSDPA ******************
671
+ *************** Using FusedSDPA ******************
672
+ *************** Using FusedSDPA ******************
673
+ *************** Using FusedSDPA ******************
674
+ *************** Using FusedSDPA ******************
675
+ *************** Using FusedSDPA ******************
676
+ *************** Using FusedSDPA ******************
677
+ *************** Using FusedSDPA ******************
678
+ *************** Using FusedSDPA ******************
679
+ *************** Using FusedSDPA ******************
680
+ *************** Using FusedSDPA ******************
681
+ *************** Using FusedSDPA ******************
682
+ *************** Using FusedSDPA ******************
683
+ *************** Using FusedSDPA ******************
684
+ *************** Using FusedSDPA ******************
685
+ *************** Using FusedSDPA ******************
686
+ *************** Using FusedSDPA ******************
687
+ *************** Using FusedSDPA ******************
688
+ *************** Using FusedSDPA ******************
689
+ *************** Using FusedSDPA ******************
690
+ *************** Using FusedSDPA ******************
691
+ *************** Using FusedSDPA ******************
692
+ *************** Using FusedSDPA ******************
693
+ *************** Using FusedSDPA ******************
694
+ *************** Using FusedSDPA ******************
695
+ *************** Using FusedSDPA ******************
696
+ *************** Using FusedSDPA ******************
697
+ *************** Using FusedSDPA ******************
698
+ *************** Using FusedSDPA ******************
699
+ *************** Using FusedSDPA ******************
700
+ *************** Using FusedSDPA ******************
701
+ *************** Using FusedSDPA ******************
702
+ *************** Using FusedSDPA ******************
703
+ *************** Using FusedSDPA ******************
704
+ *************** Using FusedSDPA ******************
705
+ *************** Using FusedSDPA ******************
706
+ *************** Using FusedSDPA ******************
707
+ *************** Using FusedSDPA ******************
708
+ *************** Using FusedSDPA ******************
709
+ *************** Using FusedSDPA ******************
710
+ *************** Using FusedSDPA ******************
711
+ *************** Using FusedSDPA ******************
712
+ *************** Using FusedSDPA ******************
713
+ *************** Using FusedSDPA ******************
714
+ *************** Using FusedSDPA ******************
715
+ *************** Using FusedSDPA ******************
716
+ *************** Using FusedSDPA ******************
717
+ *************** Using FusedSDPA ******************
718
+ *************** Using FusedSDPA ******************
719
+ *************** Using FusedSDPA ******************
720
+ *************** Using FusedSDPA ******************
721
+ *************** Using FusedSDPA ******************
722
+ *************** Using FusedSDPA ******************
723
+ *************** Using FusedSDPA ******************
724
+ *************** Using FusedSDPA ******************
725
+ *************** Using FusedSDPA ******************
726
+ *************** Using FusedSDPA ******************
727
+ *************** Using FusedSDPA ******************
728
+ *************** Using FusedSDPA ******************
729
+ *************** Using FusedSDPA ******************
730
+ *************** Using FusedSDPA ********************************* Using FusedSDPA ******************
731
+
732
+ *************** Using FusedSDPA ******************
733
+ *************** Using FusedSDPA ******************
734
+ *************** Using FusedSDPA ******************
735
+ *************** Using FusedSDPA ******************
736
+ *************** Using FusedSDPA ******************
737
+ *************** Using FusedSDPA ******************
738
+ *************** Using FusedSDPA ******************
739
+ *************** Using FusedSDPA ******************
740
+ *************** Using FusedSDPA ******************
741
+ *************** Using FusedSDPA ******************
742
+ *************** Using FusedSDPA ******************
743
+ *************** Using FusedSDPA ********************************* Using FusedSDPA ******************
744
+
745
+ *************** Using FusedSDPA ******************
746
+ *************** Using FusedSDPA ******************
747
+ *************** Using FusedSDPA ******************
748
+ *************** Using FusedSDPA ******************
749
+ *************** Using FusedSDPA ******************
750
+ *************** Using FusedSDPA ******************
751
+ *************** Using FusedSDPA ******************
752
+ *************** Using FusedSDPA ******************
753
+ *************** Using FusedSDPA ******************
754
+ *************** Using FusedSDPA ******************
755
+ *************** Using FusedSDPA ******************
756
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.)
757
+ return super().__torch_function__(func, types, new_args, kwargs)
758
+ *************** Using FusedSDPA ******************
759
+ *************** Using FusedSDPA ******************
760
+ *************** Using FusedSDPA ******************
761
+ *************** Using FusedSDPA ******************
762
+ *************** Using FusedSDPA ******************
763
+ *************** Using FusedSDPA ******************
764
+ *************** Using FusedSDPA ******************
765
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.)
766
+ return super().__torch_function__(func, types, new_args, kwargs)
767
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.)
768
+ return super().__torch_function__(func, types, new_args, kwargs)
769
+ *************** Using FusedSDPA ******************
770
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.)
771
+ return super().__torch_function__(func, types, new_args, kwargs)
772
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.)
773
+ return super().__torch_function__(func, types, new_args, kwargs)
774
+ *************** Using FusedSDPA ******************
775
+ *************** Using FusedSDPA ******************
776
+ *************** Using FusedSDPA ******************
777
+ *************** Using FusedSDPA ******************
778
+ *************** Using FusedSDPA ******************
779
+ *************** Using FusedSDPA ******************
780
+ *************** Using FusedSDPA ******************
781
+ *************** Using FusedSDPA ******************
782
+ *************** Using FusedSDPA ******************
783
+ *************** Using FusedSDPA ******************
784
+ *************** Using FusedSDPA ******************
785
+ *************** Using FusedSDPA ******************
786
+ *************** Using FusedSDPA ******************
787
+ *************** Using FusedSDPA ******************
788
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 3301253120
789
+ *************** Using FusedSDPA ******************
790
+ *************** Using FusedSDPA ******************
791
+ *************** Using FusedSDPA ******************
792
+ *************** Using FusedSDPA ******************
793
+ *************** Using FusedSDPA ******************
794
+ *************** Using FusedSDPA ******************
795
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.)
796
+ return super().__torch_function__(func, types, new_args, kwargs)
797
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/weight_sharing.py:53: UserWarning: "hpu:X" notation is not supported by Gaudi PyTorch intergration bridge. Please change to "hpu" without index (Triggered internally at /npu-stack/pytorch-integration/pytorch_helpers/lazy_to_backend.cpp:53.)
798
+ return super().__torch_function__(func, types, new_args, kwargs)
799
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 1): 3301258240
800
+ [2024-04-03 22:03:09,344] [INFO] [utils.py:824:see_memory_usage] Before Building Model
801
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 1): 3301258240
802
+ [2024-04-03 22:03:09,347] [INFO] [utils.py:825:see_memory_usage] MA 0.01 GB Max_MA 0.01 GB CA 0.0 GB Max_CA 0 GB
803
+ [2024-04-03 22:03:09,348] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 345.07 GB, percent = 34.3%
804
+ SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
805
+ Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=0, model=1): 1, ProcessCoord(pipe=0, data=1, model=0): 2, ProcessCoord(pipe=0, data=1, model=1): 3, ProcessCoord(pipe=1, data=0, model=0): 4, ProcessCoord(pipe=1, data=0, model=1): 5, ProcessCoord(pipe=1, data=1, model=0): 6, ProcessCoord(pipe=1, data=1, model=1): 7}
806
+ [2024-04-03 22:03:09,350] [INFO] [module.py:375:_partition_layers] Partitioning pipeline stages with method type:transformer
807
+ stage=0 layers=23
808
+ 0: _to_float16
809
+ 1: EmbeddingPipe
810
+ 2: <lambda>
811
+ 3: ParallelTransformerLayerPipe
812
+ 4: ParallelTransformerLayerPipe
813
+ 5: ParallelTransformerLayerPipe
814
+ 6: ParallelTransformerLayerPipe
815
+ 7: ParallelTransformerLayerPipe
816
+ 8: ParallelTransformerLayerPipe
817
+ 9: ParallelTransformerLayerPipe
818
+ 10: ParallelTransformerLayerPipe
819
+ 11: ParallelTransformerLayerPipe
820
+ 12: ParallelTransformerLayerPipe
821
+ 13: ParallelTransformerLayerPipe
822
+ 14: ParallelTransformerLayerPipe
823
+ 15: ParallelTransformerLayerPipe
824
+ 16: ParallelTransformerLayerPipe
825
+ 17: ParallelTransformerLayerPipe
826
+ 18: ParallelTransformerLayerPipe
827
+ 19: ParallelTransformerLayerPipe
828
+ 20: ParallelTransformerLayerPipe
829
+ 21: ParallelTransformerLayerPipe
830
+ 22: ParallelTransformerLayerPipe
831
+ stage=1 layers=25
832
+ 23: ParallelTransformerLayerPipe
833
+ 24: ParallelTransformerLayerPipe
834
+ 25: ParallelTransformerLayerPipe
835
+ 26: ParallelTransformerLayerPipe
836
+ 27: ParallelTransformerLayerPipe
837
+ 28: ParallelTransformerLayerPipe
838
+ 29: ParallelTransformerLayerPipe
839
+ 30: ParallelTransformerLayerPipe
840
+ 31: ParallelTransformerLayerPipe
841
+ 32: ParallelTransformerLayerPipe
842
+ 33: ParallelTransformerLayerPipe
843
+ 34: ParallelTransformerLayerPipe
844
+ 35: ParallelTransformerLayerPipe
845
+ 36: ParallelTransformerLayerPipe
846
+ 37: ParallelTransformerLayerPipe
847
+ 38: ParallelTransformerLayerPipe
848
+ 39: ParallelTransformerLayerPipe
849
+ 40: ParallelTransformerLayerPipe
850
+ 41: ParallelTransformerLayerPipe
851
+ 42: ParallelTransformerLayerPipe
852
+ 43: <lambda>
853
+ 44: WrapName
854
+ 45: WrapName
855
+ 46: <lambda>
856
+ 47: float16_to_fp32
857
+ loss: CrossEntropy
858
+ *************** Using FusedSDPA ******************
859
+ *************** Using FusedSDPA ******************
860
+ *************** Using FusedSDPA ******************
861
+ *************** Using FusedSDPA ******************
862
+ *************** Using FusedSDPA ******************
863
+ *************** Using FusedSDPA ******************
864
+ *************** Using FusedSDPA ******************
865
+ *************** Using FusedSDPA ******************
866
+ *************** Using FusedSDPA ******************
867
+ *************** Using FusedSDPA ******************
868
+ *************** Using FusedSDPA ******************
869
+ *************** Using FusedSDPA ******************
870
+ *************** Using FusedSDPA ******************
871
+ *************** Using FusedSDPA ******************
872
+ *************** Using FusedSDPA ******************
873
+ *************** Using FusedSDPA ******************
874
+ *************** Using FusedSDPA ******************
875
+ *************** Using FusedSDPA ******************
876
+ *************** Using FusedSDPA ******************
877
+ *************** Using FusedSDPA ******************
878
+ [2024-04-03 22:03:09,470] [INFO] [utils.py:824:see_memory_usage] After Building Model
879
+ [2024-04-03 22:03:09,474] [INFO] [utils.py:825:see_memory_usage] MA 0.01 GB Max_MA 0.01 GB CA 0.0 GB Max_CA 0 GB
880
+ [2024-04-03 22:03:09,474] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 345.56 GB, percent = 34.3%
881
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 3301253120
882
+ > learning rate decay style: cosine
883
+ DeepSpeed is enabled.
884
+ [2024-04-03 22:03:09,479] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.12.4+hpu.synapse.v1.14.0, git-hash=fad45b2, git-branch=1.14.0
885
+ [2024-04-03 22:03:10,368] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
886
+ [2024-04-03 22:03:10,369] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
887
+ [2024-04-03 22:03:10,369] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
888
+ [2024-04-03 22:03:10,371] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = AdamW
889
+ [2024-04-03 22:03:10,371] [INFO] [logging.py:96:log_dist] [Rank 0] Creating BF16 optimizer
890
+ [2024-04-03 22:03:10,445] [INFO] [utils.py:824:see_memory_usage] begin bf16_optimizer
891
+ [2024-04-03 22:03:10,448] [INFO] [utils.py:825:see_memory_usage] MA 6.16 GB Max_MA 6.18 GB CA 0.0 GB Max_CA 0 GB
892
+ [2024-04-03 22:03:10,448] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 348.0 GB, percent = 34.5%
893
+ [2024-04-03 22:03:10,513] [INFO] [utils.py:824:see_memory_usage] before initializing group 0
894
+ [2024-04-03 22:03:10,516] [INFO] [utils.py:825:see_memory_usage] MA 6.16 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
895
+ [2024-04-03 22:03:10,516] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 348.08 GB, percent = 34.6%
896
+ [2024-04-03 22:03:10,777] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
897
+ [2024-04-03 22:03:10,789] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
898
+ [2024-04-03 22:03:10,837] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
899
+ [2024-04-03 22:03:10,838] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
900
+ [2024-04-03 22:03:10,861] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
901
+ [2024-04-03 22:03:11,089] [INFO] [utils.py:824:see_memory_usage] after initializing group 0
902
+ [2024-04-03 22:03:11,092] [INFO] [utils.py:825:see_memory_usage] MA 6.16 GB Max_MA 12.31 GB CA 0.0 GB Max_CA 0 GB
903
+ [2024-04-03 22:03:11,092] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 348.52 GB, percent = 34.6%
904
+ [2024-04-03 22:03:11,147] [INFO] [utils.py:824:see_memory_usage] before initializing group 1
905
+ [2024-04-03 22:03:11,150] [INFO] [utils.py:825:see_memory_usage] MA 6.16 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
906
+ [2024-04-03 22:03:11,150] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 348.52 GB, percent = 34.6%
907
+ [2024-04-03 22:03:11,231] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
908
+ [2024-04-03 22:03:11,236] [INFO] [utils.py:824:see_memory_usage] after initializing group 1
909
+ [2024-04-03 22:03:11,239] [INFO] [utils.py:825:see_memory_usage] MA 24.61 GB Max_MA 24.61 GB CA 0.0 GB Max_CA 0 GB
910
+ [2024-04-03 22:03:11,239] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 348.52 GB, percent = 34.6%
911
+ [2024-04-03 22:03:11,240] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
912
+ [2024-04-03 22:03:11,294] [INFO] [utils.py:824:see_memory_usage] before initialize_optimizer
913
+ [2024-04-03 22:03:11,297] [INFO] [utils.py:825:see_memory_usage] MA 24.61 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
914
+ [2024-04-03 22:03:11,297] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 348.55 GB, percent = 34.6%
915
+ [2024-04-03 22:03:11,348] [INFO] [utils.py:824:see_memory_usage] end initialize_optimizer
916
+ [2024-04-03 22:03:11,352] [INFO] [utils.py:825:see_memory_usage] MA 24.61 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
917
+ [2024-04-03 22:03:11,352] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 348.52 GB, percent = 34.6%
918
+ [2024-04-03 22:03:11,403] [INFO] [utils.py:824:see_memory_usage] end bf16_optimizer
919
+ [2024-04-03 22:03:11,406] [INFO] [utils.py:825:see_memory_usage] MA 24.61 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
920
+ [2024-04-03 22:03:11,406] [INFO] [utils.py:832:see_memory_usage] CPU Virtual Memory: used = 348.52 GB, percent = 34.6%
921
+ [2024-04-03 22:03:11,407] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = BF16_Optimizer
922
+ [2024-04-03 22:03:11,407] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using client LR scheduler
923
+ [2024-04-03 22:03:11,407] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = <megatron.learning_rates.AnnealingLR object at 0x7fe01c1a9db0>
924
+ [2024-04-03 22:03:11,407] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0], mom=[(0.9, 0.95), (0.9, 0.95)]
925
+ [2024-04-03 22:03:11,408] [INFO] [config.py:992:print] DeepSpeedEngine configuration:
926
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] activation_checkpointing_config {
927
+ "partition_activations": false,
928
+ "contiguous_memory_optimization": false,
929
+ "cpu_checkpointing": false,
930
+ "number_checkpoints": null,
931
+ "synchronize_checkpoint_boundary": false,
932
+ "profile": false
933
+ }
934
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
935
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] amp_enabled .................. False
936
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] amp_params ................... False
937
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] autotuning_config ............ {
938
+ "enabled": false,
939
+ "start_step": null,
940
+ "end_step": null,
941
+ "metric_path": null,
942
+ "arg_mappings": null,
943
+ "metric": "throughput",
944
+ "model_info": null,
945
+ "results_dir": "autotuning_results",
946
+ "exps_dir": "autotuning_exps",
947
+ "overwrite": true,
948
+ "fast": true,
949
+ "start_profile_step": 3,
950
+ "end_profile_step": 5,
951
+ "tuner_type": "gridsearch",
952
+ "tuner_early_stopping": 5,
953
+ "tuner_num_trials": 50,
954
+ "model_info_path": null,
955
+ "mp_size": 1,
956
+ "max_train_batch_size": null,
957
+ "min_train_batch_size": 1,
958
+ "max_train_micro_batch_size_per_gpu": 1.024000e+03,
959
+ "min_train_micro_batch_size_per_gpu": 1,
960
+ "num_tuning_micro_batch_sizes": 3
961
+ }
962
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] bfloat16_accumulate_grads_via_hooks True
963
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] bfloat16_enabled ............. True
964
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] checkpoint_parallel_write_pipeline False
965
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] checkpoint_tag_validation_enabled True
966
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] checkpoint_tag_validation_fail False
967
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7fe01c1a9810>
968
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] communication_data_type ...... None
969
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
970
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] curriculum_enabled_legacy .... False
971
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] curriculum_params_legacy ..... False
972
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
973
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] data_efficiency_enabled ...... False
974
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] dataloader_drop_last ......... False
975
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] disable_allgather ............ False
976
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] dump_state ................... False
977
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] dynamic_loss_scale_args ...... None
978
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] eigenvalue_enabled ........... False
979
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] eigenvalue_gas_boundary_resolution 1
980
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] eigenvalue_layer_name ........ bert.encoder.layer
981
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] eigenvalue_layer_num ......... 0
982
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] eigenvalue_max_iter .......... 100
983
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] eigenvalue_stability ......... 1e-06
984
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] eigenvalue_tol ............... 0.01
985
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] eigenvalue_verbose ........... False
986
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] elasticity_enabled ........... False
987
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] flops_profiler_config ........ {
988
+ "enabled": false,
989
+ "recompute_fwd_factor": 0.0,
990
+ "profile_step": 1,
991
+ "module_depth": -1,
992
+ "top_modules": 1,
993
+ "detailed": true,
994
+ "output_file": null
995
+ }
996
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] fp16_auto_cast ............... None
997
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] fp16_enabled ................. False
998
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] fp16_master_weights_and_gradients False
999
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] global_rank .................. 0
1000
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] grad_accum_dtype ............. None
1001
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] gradient_accumulation_steps .. 128
1002
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] gradient_clipping ............ 1.0
1003
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] gradient_predivide_factor .... 1.0
1004
+ [2024-04-03 22:03:11,408] [INFO] [config.py:996:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
1005
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] initial_dynamic_scale ........ 1
1006
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] load_universal_checkpoint .... False
1007
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] loss_scale ................... 1.0
1008
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] memory_breakdown ............. False
1009
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] mics_hierarchial_params_gather False
1010
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] mics_shard_size .............. -1
1011
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False
1012
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] nebula_config ................ {
1013
+ "enabled": false,
1014
+ "persistent_storage_path": null,
1015
+ "persistent_time_interval": 100,
1016
+ "num_of_version_in_retention": 2,
1017
+ "enable_nebula_load": true,
1018
+ "load_path": null
1019
+ }
1020
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] optimizer_legacy_fusion ...... False
1021
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] optimizer_name ............... None
1022
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] optimizer_params ............. None
1023
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': False, 'grad_partitioned': False}
1024
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] pld_enabled .................. False
1025
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] pld_params ................... False
1026
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] prescale_gradients ........... False
1027
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] scheduler_name ............... None
1028
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] scheduler_params ............. None
1029
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] seq_parallel_communication_data_type torch.float32
1030
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] sparse_attention ............. None
1031
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] sparse_gradients_enabled ..... False
1032
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] steps_per_print .............. 10
1033
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] train_batch_size ............. 256
1034
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] train_micro_batch_size_per_gpu 1
1035
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] use_data_before_expert_parallel_ False
1036
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] use_node_local_storage ....... False
1037
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] wall_clock_breakdown ......... False
1038
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] weight_quantization_config ... None
1039
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] world_size ................... 2
1040
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] zero_allow_comm_data_type_fp32 False
1041
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] zero_allow_untested_optimizer False
1042
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=False reduce_bucket_size=500,000,000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50,000,000 param_persistence_threshold=100,000 model_persistence_threshold=sys.maxsize max_live_parameters=1,000,000,000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False use_all_reduce_for_fetch_params=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
1043
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] zero_enabled ................. False
1044
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] zero_force_ds_cpu_optimizer .. True
1045
+ [2024-04-03 22:03:11,409] [INFO] [config.py:996:print] zero_optimization_stage ...... 0
1046
+ [2024-04-03 22:03:11,409] [INFO] [config.py:982:print_user_config] json = {
1047
+ "train_batch_size": 256,
1048
+ "train_micro_batch_size_per_gpu": 1,
1049
+ "steps_per_print": 10,
1050
+ "gradient_clipping": 1.0,
1051
+ "zero_optimization": {
1052
+ "stage": 0
1053
+ },
1054
+ "bf16": {
1055
+ "enabled": true,
1056
+ "accumulate_grads_via_hooks": true
1057
+ },
1058
+ "fp16": {
1059
+ "enabled": false
1060
+ },
1061
+ "wall_clock_breakdown": false,
1062
+ "pipeline": {
1063
+ "pipe_partitioned": false,
1064
+ "grad_partitioned": false
1065
+ }
1066
+ }
1067
+ [2024-04-03 22:03:11,409] [INFO] [engine.py:99:__init__] CONFIG: micro_batches=128 micro_batch_size=1
1068
+ [2024-04-03 22:03:11,409] [INFO] [engine.py:139:__init__] is_pipe_partitioned= False is_grad_partitioned= False
1069
+ [2024-04-03 22:03:12,129] [INFO] [engine.py:180:__init__] RANK=4 STAGE=1 LAYERS=25 [23, 48) STAGE_PARAMS=3301258240 (3301.258M) TOTAL_PARAMS=13205022720 (13205.023M) UNIQUE_PARAMS=13205022720 (13205.023M)
1070
+ [2024-04-03 22:03:12,129] [INFO] [engine.py:180:__init__] RANK=1 STAGE=0 LAYERS=23 [0, 23) STAGE_PARAMS=3301253120 (3301.253M) TOTAL_PARAMS=13205022720 (13205.023M) UNIQUE_PARAMS=13205022720 (13205.023M)
1071
+ [2024-04-03 22:03:12,129] [INFO] [engine.py:180:__init__] RANK=5 STAGE=1 LAYERS=25 [23, 48) STAGE_PARAMS=3301258240 (3301.258M) TOTAL_PARAMS=13205022720 (13205.023M) UNIQUE_PARAMS=13205022720 (13205.023M)
1072
+ [2024-04-03 22:03:12,130] [INFO] [engine.py:180:__init__] RANK=0 STAGE=0 LAYERS=23 [0, 23) STAGE_PARAMS=3301253120 (3301.253M) TOTAL_PARAMS=13205022720 (13205.023M) UNIQUE_PARAMS=13205022720 (13205.023M)
1073
+ [2024-04-03 22:03:12,132] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/132node/checkpoints/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
1074
+ WARNING: could not find the metadata file /data/output/132node/checkpoints
1075
+ will not load any checkpoints and will start from random
1076
+ [2024-04-03 22:03:12,132] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/132node/checkpoints/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
1077
+ [2024-04-03 22:03:12,133] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/132node/checkpoints/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
1078
+ [2024-04-03 22:03:12,133] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/132node/checkpoints/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
1079
+ [2024-04-03 22:03:12,133] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/132node/checkpoints/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
1080
+ [2024-04-03 22:03:12,133] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/132node/checkpoints/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
1081
+ [2024-04-03 22:03:12,133] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/132node/checkpoints/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
1082
+ [2024-04-03 22:03:12,133] [WARNING] [engine.py:2763:load_checkpoint] Unable to find latest file at /data/output/132node/checkpoints/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
1083
+ time (ms) | load-checkpoint: 2.42
1084
+ [after model, optimizer, and learning rate scheduler are built] datetime: 2024-04-03 22:03:12
1085
+ > building train, validation, and test datasets ...
1086
+ > datasets target sizes (minimum size):
1087
+ train: 2560000
1088
+ validation: 258560
1089
+ test: 2560
1090
+ > building train, validation, and test datasets for GPT ...
1091
+ Single data path provided for train, valid & test
1092
+ > building dataset index ...
1093
+ reading sizes...
1094
+ reading pointers...
1095
+ reading document index...
1096
+ creating numpy buffer of mmap...
1097
+ creating memory view of numpy buffer...
1098
+ > finished creating indexed dataset in 0.001090 seconds
1099
+ number of documents: 1558306
1100
+ > dataset split:
1101
+ train:
1102
+ document indices in [0, 1509999) total of 1509999 documents
1103
+ validation:
1104
+ document indices in [1509999, 1556748) total of 46749 documents
1105
+ test:
1106
+ document indices in [1556748, 1558306) total of 1558 documents
1107
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
1108
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
1109
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
1110
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
1111
+ > loaded doc-idx mapping from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_doc_idx.npy
1112
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
1113
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
1114
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
1115
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
1116
+ > loaded sample-idx mapping from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_sample_idx.npy
1117
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
1118
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
1119
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
1120
+ Loading dataset index file from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
1121
+ > loaded shuffle-idx mapping from /data/arxiv/tokenized_text_document_train_indexmap_2560000ns_2048sl_1234s_shuffle_idx.npy
1122
+ loaded indexed file in 0.002 seconds
1123
+ total number of samples: 15244235
1124
+ total number of epochs: 1
1125
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_doc_idx.npy
1126
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_doc_idx.npy
1127
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_doc_idx.npy
1128
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_doc_idx.npy
1129
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_sample_idx.npy
1130
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_sample_idx.npy
1131
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_sample_idx.npy
1132
+ > loaded doc-idx mapping from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_doc_idx.npy
1133
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_sample_idx.npy
1134
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_shuffle_idx.npy
1135
+ > loaded sample-idx mapping from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_sample_idx.npy
1136
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_shuffle_idx.npy
1137
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_shuffle_idx.npy
1138
+ Loading dataset index file from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_shuffle_idx.npy
1139
+ > loaded shuffle-idx mapping from /data/arxiv/tokenized_text_document_valid_indexmap_258560ns_2048sl_1234s_shuffle_idx.npy
1140
+ loaded indexed file in 0.002 seconds
1141
+ total number of samples: 481162
1142
+ total number of epochs: 1
1143
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
1144
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
1145
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
1146
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
1147
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
1148
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
1149
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
1150
+ > loaded doc-idx mapping from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_doc_idx.npy
1151
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
1152
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
1153
+ > loaded sample-idx mapping from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_sample_idx.npy
1154
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
1155
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
1156
+ Loading dataset index file from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
1157
+ > loaded shuffle-idx mapping from /data/arxiv/tokenized_text_document_test_indexmap_2560ns_2048sl_1234s_shuffle_idx.npy
1158
+ loaded indexed file in 0.001 seconds
1159
+ total number of samples: 16581
1160
+ total number of epochs: 1
1161
+ > finished creating GPT datasets ...
1162
+ time (ms) | model-and-optimizer-setup: 2860.75 | train/valid/test-data-iterators-setup: 1113.55
1163
+ [after dataloaders are built] datetime: 2024-04-03 22:03:13
1164
+ done with setup ...
1165
+ training ...
1166
+ [before the start of training step] datetime: 2024-04-03 22:03:13
1167
+ [2024-04-03 22:08:15,101] [INFO] [logging.py:96:log_dist] [Rank 0] step=10, skipped=0, lr=[1.4999999999999998e-06, 1.4999999999999998e-06], mom=[(0.9, 0.95), (0.9, 0.95)]
1168
+ steps: 10 loss: 11.4461 iter time (s): 30.238 samples/sec: 8.466
1169
+ [Rank 0] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
1170
+ [Rank 1] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
1171
+ [Rank 4] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
1172
+ [Rank 5] (after 10 iterations) memory (MB) | allocated: 0.0 | max allocated: 0.0 | reserved: 0.0 | max reserved: 0.0
1173
+ iteration 10/ 10000 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (ms): 30185.1 | learning rate: 1.500E-06 | global batch size: 256 | lm loss: 1.177418E+01 | loss scale: 0.0 | grad norm: 13.481 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 8.481 | TFLOPs: 177.37 |
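As a quick cross-check, the figures in the iteration-10 summary line above follow directly from the DeepSpeed configuration printed earlier in this log; the minimal sketch below (not part of the run itself, with the 2048 sequence length inferred from the *_2048sl_* index-map filenames) re-derives them:

# Sanity check re-deriving the iteration-10 numbers from the config printed above.
micro_batch_size = 1      # train_micro_batch_size_per_gpu
grad_accum_steps = 128    # gradient_accumulation_steps
dp_world_size = 2         # data-parallel replicas (config print: world_size 2)
seq_len = 2048            # assumed from the *_2048sl_* index-map filenames

global_batch = micro_batch_size * grad_accum_steps * dp_world_size
assert global_batch == 256                    # train_batch_size in the config

consumed_samples = global_batch * 10          # 2560, as logged at iteration 10
consumed_tokens = consumed_samples * seq_len  # 5242880, as logged

elapsed_s_per_iter = 30.1851                  # elapsed time per iteration (ms) / 1000
print(round(global_batch / elapsed_s_per_iter, 3))   # ~8.481 samples per second, as logged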
llama13b_5M/checkpoints_zero_stage_2/latest ADDED
@@ -0,0 +1 @@
 
 
1
+ global_step0
llama13b_5M/checkpoints_zero_stage_2/latest_checkpointed_iteration.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ 40
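The `latest` file above stores the checkpoint tag (`global_step0`) that the `zero_to_fp32.py` script added below resolves whenever no explicit tag is passed. A hypothetical offline conversion of this checkpoint directory (not something executed in these logs; the path simply mirrors this repository's layout, and the script's folder is assumed to be on the Python path) would look like:

# Hypothetical offline use of the bundled zero_to_fp32.py (see the file below);
# with tag=None the script reads the 'latest' file to locate the checkpoint folder.
from zero_to_fp32 import convert_zero_checkpoint_to_fp32_state_dict

convert_zero_checkpoint_to_fp32_state_dict(
    checkpoint_dir="llama13b_5M/checkpoints_zero_stage_2",  # directory holding 'latest'
    output_file="pytorch_model.bin",                        # consolidated fp32 weights
)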
llama13b_5M/checkpoints_zero_stage_2/zero_to_fp32.py ADDED
@@ -0,0 +1,592 @@
1
+ #!/usr/bin/env python
2
+
3
+ # Copyright (c) Microsoft Corporation.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+
6
+ # DeepSpeed Team
7
+
8
+ # This script extracts fp32 consolidated weights from a zero 1, 2 and 3 DeepSpeed checkpoints. It gets
9
+ # copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
10
+ # the future. Once extracted, the weights don't require DeepSpeed and can be used in any
11
+ # application.
12
+ #
13
+ # example: python zero_to_fp32.py . pytorch_model.bin
14
+
15
+ import argparse
16
+ import torch
17
+ import glob
18
+ import math
19
+ import os
20
+ import re
21
+ from collections import OrderedDict
22
+ from dataclasses import dataclass
23
+
24
+ # while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
25
+ # DeepSpeed data structures it has to be available in the current python environment.
26
+ from deepspeed.utils import logger
27
+ from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
28
+ FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
29
+ FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
30
+
31
+
32
+ @dataclass
33
+ class zero_model_state:
34
+ buffers: dict()
35
+ param_shapes: dict()
36
+ shared_params: list
37
+ ds_version: int
38
+ frozen_param_shapes: dict()
39
+ frozen_param_fragments: dict()
40
+
41
+
42
+ debug = 0
43
+
44
+ # load to cpu
45
+ device = torch.device('cpu')
46
+
47
+
48
+ def atoi(text):
49
+ return int(text) if text.isdigit() else text
50
+
51
+
52
+ def natural_keys(text):
53
+ '''
54
+ alist.sort(key=natural_keys) sorts in human order
55
+ http://nedbatchelder.com/blog/200712/human_sorting.html
56
+ (See Toothy's implementation in the comments)
57
+ '''
58
+ return [atoi(c) for c in re.split(r'(\d+)', text)]
59
+
60
+
61
+ def get_model_state_file(checkpoint_dir, zero_stage):
62
+ if not os.path.isdir(checkpoint_dir):
63
+ raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
64
+
65
+ # there should be only one file
66
+ if zero_stage <= 2:
67
+ file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
68
+ elif zero_stage == 3:
69
+ file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
70
+
71
+ if not os.path.exists(file):
72
+ raise FileNotFoundError(f"can't find model states file at '{file}'")
73
+
74
+ return file
75
+
76
+
77
+ def get_checkpoint_files(checkpoint_dir, glob_pattern):
78
+ # XXX: need to test that this simple glob rule works for multi-node setup too
79
+ ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
80
+
81
+ if len(ckpt_files) == 0:
82
+ raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
83
+
84
+ return ckpt_files
85
+
86
+
87
+ def get_optim_files(checkpoint_dir):
88
+ return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
89
+
90
+
91
+ def get_model_state_files(checkpoint_dir):
92
+ return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
93
+
94
+
95
+ def parse_model_states(files):
96
+ zero_model_states = []
97
+ for file in files:
98
+ state_dict = torch.load(file, map_location=device)
99
+
100
+ if BUFFER_NAMES not in state_dict:
101
+ raise ValueError(f"{file} is not a model state checkpoint")
102
+ buffer_names = state_dict[BUFFER_NAMES]
103
+ if debug:
104
+ print("Found buffers:", buffer_names)
105
+
106
+ # recover just the buffers while restoring them to fp32 if they were saved in fp16
107
+ buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
108
+ param_shapes = state_dict[PARAM_SHAPES]
109
+
110
+ # collect parameters that are included in param_shapes
111
+ param_names = []
112
+ for s in param_shapes:
113
+ for name in s.keys():
114
+ param_names.append(name)
115
+
116
+ # update with frozen parameters
117
+ frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
118
+ if frozen_param_shapes is not None:
119
+ if debug:
120
+ print(f"Found frozen_param_shapes: {frozen_param_shapes}")
121
+ param_names += list(frozen_param_shapes.keys())
122
+
123
+ # handle shared params
124
+ shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
125
+
126
+ ds_version = state_dict.get(DS_VERSION, None)
127
+
128
+ frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
129
+
130
+ z_model_state = zero_model_state(buffers=buffers,
131
+ param_shapes=param_shapes,
132
+ shared_params=shared_params,
133
+ ds_version=ds_version,
134
+ frozen_param_shapes=frozen_param_shapes,
135
+ frozen_param_fragments=frozen_param_fragments)
136
+ zero_model_states.append(z_model_state)
137
+
138
+ return zero_model_states
139
+
140
+
141
+ def parse_optim_states(files, ds_checkpoint_dir):
142
+
143
+ total_files = len(files)
144
+ state_dicts = []
145
+ for f in files:
146
+ state_dict = torch.load(f, map_location=device)
147
+ # immediately discard the potentially huge 2 optimizer states as we only care for fp32 master weights
148
+ # and also handle the case where it was already removed by another helper script
149
+ state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
150
+ state_dicts.append(state_dict)
151
+
152
+ if not ZERO_STAGE in state_dicts[0][OPTIMIZER_STATE_DICT]:
153
+ raise ValueError(f"{files[0]} is not a zero checkpoint")
154
+ zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
155
+ world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
156
+
157
+ # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
158
+ # parameters can be different from data parallelism for non-expert parameters. So we can just
159
+ # use the max of the partition_count to get the dp world_size.
160
+
161
+ if type(world_size) is list:
162
+ world_size = max(world_size)
163
+
164
+ if world_size != total_files:
165
+ raise ValueError(
166
+ f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
167
+ "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
168
+ )
169
+
170
+ # the groups are named differently in each stage
171
+ if zero_stage <= 2:
172
+ fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
173
+ elif zero_stage == 3:
174
+ fp32_groups_key = FP32_FLAT_GROUPS
175
+ else:
176
+ raise ValueError(f"unknown zero stage {zero_stage}")
177
+
178
+ if zero_stage <= 2:
179
+ fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
180
+ elif zero_stage == 3:
181
+ # if there is more than one param group, there will be multiple flattened tensors - one
182
+ # flattened tensor per group - for simplicity merge them into a single tensor
183
+ #
184
+ # XXX: could make the script more memory efficient for when there are multiple groups - it
185
+ # will require matching the sub-lists of param_shapes for each param group flattened tensor
186
+
187
+ fp32_flat_groups = [
188
+ torch.cat(state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key], 0) for i in range(len(state_dicts))
189
+ ]
190
+
191
+ return zero_stage, world_size, fp32_flat_groups
192
+
193
+
194
+ def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir):
195
+ """
196
+ Returns fp32 state_dict reconstructed from ds checkpoint
197
+
198
+ Args:
199
+ - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
200
+
201
+ """
202
+ print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
203
+
204
+ optim_files = get_optim_files(ds_checkpoint_dir)
205
+ zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
206
+ print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
207
+
208
+ model_files = get_model_state_files(ds_checkpoint_dir)
209
+
210
+ zero_model_states = parse_model_states(model_files)
211
+ print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
212
+
213
+ if zero_stage <= 2:
214
+ return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states)
215
+ elif zero_stage == 3:
216
+ return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states)
217
+
218
+
219
+ def _zero2_merge_frozen_params(state_dict, zero_model_states):
220
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
221
+ return
222
+
223
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
224
+ frozen_param_fragments = zero_model_states[0].frozen_param_fragments
225
+
226
+ if debug:
227
+ num_elem = sum(s.numel() for s in frozen_param_shapes.values())
228
+ print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
229
+
230
+ wanted_params = len(frozen_param_shapes)
231
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
232
+ avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
233
+ print(f'Frozen params: Have {avail_numel} numels to process.')
234
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
235
+
236
+ total_params = 0
237
+ total_numel = 0
238
+ for name, shape in frozen_param_shapes.items():
239
+ total_params += 1
240
+ unpartitioned_numel = shape.numel()
241
+ total_numel += unpartitioned_numel
242
+
243
+ state_dict[name] = frozen_param_fragments[name]
244
+
245
+ if debug:
246
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
247
+
248
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
249
+
250
+
251
+ def _has_callable(obj, fn):
252
+ attr = getattr(obj, fn, None)
253
+ return callable(attr)
254
+
255
+
256
+ def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
257
+ param_shapes = zero_model_states[0].param_shapes
258
+
259
+ # Reconstruction protocol:
260
+ #
261
+ # XXX: document this
262
+
263
+ if debug:
264
+ for i in range(world_size):
265
+ for j in range(len(fp32_flat_groups[0])):
266
+ print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
267
+
268
+ # XXX: memory usage doubles here (zero2)
269
+ num_param_groups = len(fp32_flat_groups[0])
270
+ merged_single_partition_of_fp32_groups = []
271
+ for i in range(num_param_groups):
272
+ merged_partitions = [sd[i] for sd in fp32_flat_groups]
273
+ full_single_fp32_vector = torch.cat(merged_partitions, 0)
274
+ merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
275
+ avail_numel = sum(
276
+ [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
277
+
278
+ if debug:
279
+ wanted_params = sum([len(shapes) for shapes in param_shapes])
280
+ wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
281
+ # not asserting if there is a mismatch due to possible padding
282
+ print(f"Have {avail_numel} numels to process.")
283
+ print(f"Need {wanted_numel} numels in {wanted_params} params.")
284
+
285
+ # params
286
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
287
+ # out-of-core computing solution
288
+ total_numel = 0
289
+ total_params = 0
290
+ for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
291
+ offset = 0
292
+ avail_numel = full_single_fp32_vector.numel()
293
+ for name, shape in shapes.items():
294
+
295
+ unpartitioned_numel = shape.numel() if _has_callable(shape, 'numel') else math.prod(shape)
296
+ total_numel += unpartitioned_numel
297
+ total_params += 1
298
+
299
+ if debug:
300
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
301
+ state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
302
+ offset += unpartitioned_numel
303
+
304
+ # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
305
+ # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
306
+ # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
307
+ # live optimizer object, so we are checking that the numbers are within the right range
308
+ align_to = 2 * world_size
309
+
310
+ def zero2_align(x):
311
+ return align_to * math.ceil(x / align_to)
312
+
313
+ if debug:
314
+ print(f"original offset={offset}, avail_numel={avail_numel}")
315
+
316
+ offset = zero2_align(offset)
317
+ avail_numel = zero2_align(avail_numel)
318
+
319
+ if debug:
320
+ print(f"aligned offset={offset}, avail_numel={avail_numel}")
321
+
322
+ # Sanity check
323
+ if offset != avail_numel:
324
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
325
+
326
+ print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
327
+
328
+
329
+ def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states):
330
+ state_dict = OrderedDict()
331
+
332
+ # buffers
333
+ buffers = zero_model_states[0].buffers
334
+ state_dict.update(buffers)
335
+ if debug:
336
+ print(f"added {len(buffers)} buffers")
337
+
338
+ _zero2_merge_frozen_params(state_dict, zero_model_states)
339
+
340
+ _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
341
+
342
+ # recover shared parameters
343
+ for pair in zero_model_states[0].shared_params:
344
+ if pair[1] in state_dict:
345
+ state_dict[pair[0]] = state_dict[pair[1]]
346
+
347
+ return state_dict
348
+
349
+
350
+ def zero3_partitioned_param_info(unpartitioned_numel, world_size):
351
+ remainder = unpartitioned_numel % world_size
352
+ padding_numel = (world_size - remainder) if remainder else 0
353
+ partitioned_numel = math.ceil(unpartitioned_numel / world_size)
354
+ return partitioned_numel, padding_numel
355
+
356
+
357
+ def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
358
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
359
+ return
360
+
361
+ if debug:
362
+ for i in range(world_size):
363
+ num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
364
+ print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
365
+
366
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
367
+ wanted_params = len(frozen_param_shapes)
368
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
369
+ avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
370
+ print(f'Frozen params: Have {avail_numel} numels to process.')
371
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
372
+
373
+ total_params = 0
374
+ total_numel = 0
375
+ for name, shape in zero_model_states[0].frozen_param_shapes.items():
376
+ total_params += 1
377
+ unpartitioned_numel = shape.numel()
378
+ total_numel += unpartitioned_numel
379
+
380
+ param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
381
+ state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
382
+
383
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
384
+
385
+ if debug:
386
+ print(
387
+ f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
388
+ )
389
+
390
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
391
+
392
+
393
+ def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
394
+ param_shapes = zero_model_states[0].param_shapes
395
+ avail_numel = fp32_flat_groups[0].numel() * world_size
396
+ # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
397
+ # param, re-consolidating each param, while dealing with padding if any
398
+
399
+ # merge list of dicts, preserving order
400
+ param_shapes = {k: v for d in param_shapes for k, v in d.items()}
401
+
402
+ if debug:
403
+ for i in range(world_size):
404
+ print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")
405
+
406
+ wanted_params = len(param_shapes)
407
+ wanted_numel = sum(shape.numel() for shape in param_shapes.values())
408
+ # not asserting if there is a mismatch due to possible padding
409
+ avail_numel = fp32_flat_groups[0].numel() * world_size
410
+ print(f"Trainable params: Have {avail_numel} numels to process.")
411
+ print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
412
+
413
+ # params
414
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
415
+ # out-of-core computing solution
416
+ offset = 0
417
+ total_numel = 0
418
+ total_params = 0
419
+ for name, shape in param_shapes.items():
420
+
421
+ unpartitioned_numel = shape.numel()
422
+ total_numel += unpartitioned_numel
423
+ total_params += 1
424
+
425
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
426
+
427
+ if debug:
428
+ print(
429
+ f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
430
+ )
431
+
432
+ # XXX: memory usage doubles here
433
+ state_dict[name] = torch.cat(
434
+ tuple(fp32_flat_groups[i].narrow(0, offset, partitioned_numel) for i in range(world_size)),
435
+ 0).narrow(0, 0, unpartitioned_numel).view(shape)
436
+ offset += partitioned_numel
437
+
438
+ offset *= world_size
439
+
440
+ # Sanity check
441
+ if offset != avail_numel:
442
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
443
+
444
+ print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
445
+
446
+
447
+ def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states):
448
+ state_dict = OrderedDict()
449
+
450
+ # buffers
451
+ buffers = zero_model_states[0].buffers
452
+ state_dict.update(buffers)
453
+ if debug:
454
+ print(f"added {len(buffers)} buffers")
455
+
456
+ _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
457
+
458
+ _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
459
+
460
+ # recover shared parameters
461
+ for pair in zero_model_states[0].shared_params:
462
+ if pair[1] in state_dict:
463
+ state_dict[pair[0]] = state_dict[pair[1]]
464
+
465
+ return state_dict
466
+
467
+
468
+ def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag=None):
469
+ """
470
+ Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
471
+ ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
472
+ via a model hub.
473
+
474
+ Args:
475
+ - ``checkpoint_dir``: path to the desired checkpoint folder
476
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``
477
+
478
+ Returns:
479
+ - pytorch ``state_dict``
480
+
481
+ Note: this approach may not work if your application doesn't have sufficient free CPU memory and
482
+ you may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
483
+ the checkpoint.
484
+
485
+ A typical usage might be ::
486
+
487
+ from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
488
+ # do the training and checkpoint saving
489
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
490
+ model = model.cpu() # move to cpu
491
+ model.load_state_dict(state_dict)
492
+ # submit to model hub or save the model to share with others
493
+
494
+ In this example the ``model`` will no longer be usable in the deepspeed context of the same
495
+ application. i.e. you will need to re-initialize the deepspeed engine, since
496
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
497
+
498
+ If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
499
+
500
+ """
501
+ if tag is None:
502
+ latest_path = os.path.join(checkpoint_dir, 'latest')
503
+ if os.path.isfile(latest_path):
504
+ with open(latest_path, 'r') as fd:
505
+ tag = fd.read().strip()
506
+ else:
507
+ raise ValueError(f"Unable to find 'latest' file at {latest_path}")
508
+
509
+ ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
510
+
511
+ if not os.path.isdir(ds_checkpoint_dir):
512
+ raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
513
+
514
+ return _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir)
515
+
516
+
517
+ def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir, output_file, tag=None):
518
+ """
519
+ Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
520
+ loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
521
+
522
+ Args:
523
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
524
+ - ``output_file``: path to the pytorch fp32 state_dict output file (e.g. path/pytorch_model.bin)
525
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
526
+ """
527
+
528
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
529
+ print(f"Saving fp32 state dict to {output_file}")
530
+ torch.save(state_dict, output_file)
531
+
532
+
533
+ def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
534
+ """
535
+ 1. Put the provided model to cpu
536
+ 2. Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
537
+ 3. Load it into the provided model
538
+
539
+ Args:
540
+ - ``model``: the model object to update
541
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
542
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
543
+
544
+ Returns:
545
+ - ``model``: modified model
546
+
547
+ Make sure you have plenty of CPU memory available before you call this function. If you don't
548
+ have enough use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
549
+ conveniently placed for you in the checkpoint folder.
550
+
551
+ A typical usage might be ::
552
+
553
+ from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
554
+ model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
555
+ # submit to model hub or save the model to share with others
556
+
557
+ Note that once this has been run, the ``model`` will no longer be usable in the deepspeed context
558
+ of the same application. i.e. you will need to re-initialize the deepspeed engine, since
559
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
560
+
561
+ """
562
+ logger.info(f"Extracting fp32 weights")
563
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
564
+
565
+ logger.info(f"Overwriting model with fp32 weights")
566
+ model = model.cpu()
567
+ model.load_state_dict(state_dict, strict=False)
568
+
569
+ return model
570
+
571
+
572
+ if __name__ == "__main__":
573
+
574
+ parser = argparse.ArgumentParser()
575
+ parser.add_argument("checkpoint_dir",
576
+ type=str,
577
+ help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
578
+ parser.add_argument(
579
+ "output_file",
580
+ type=str,
581
+ help="path to the pytorch fp32 state_dict output file (e.g. path/checkpoint-12/pytorch_model.bin)")
582
+ parser.add_argument("-t",
583
+ "--tag",
584
+ type=str,
585
+ default=None,
586
+ help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
587
+ parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
588
+ args = parser.parse_args()
589
+
590
+ debug = args.debug
591
+
592
+ convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir, args.output_file, tag=args.tag)
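The script can be run offline from the checkpoint folder (positional checkpoint_dir and output_file, optional -t/--tag and -d/--debug), or its helpers can be imported as the docstrings above show. A minimal sketch, assuming a checkpoint directory that contains a 'latest' tag file and enough host RAM to hold the consolidated fp32 weights (paths are placeholders, not taken from this commit):

    # hedged sketch; the directory and output paths below are placeholders
    from deepspeed.utils.zero_to_fp32 import (
        get_fp32_state_dict_from_zero_checkpoint,
        convert_zero_checkpoint_to_fp32_state_dict,
    )

    ckpt_dir = "checkpoints_zero_stage_2"  # folder holding the tag sub-dir, e.g. global_step500
    # consolidate in memory (result is already on CPU) ...
    state_dict = get_fp32_state_dict_from_zero_checkpoint(ckpt_dir)
    # ... or write a single torch.load()-able file in one call
    convert_zero_checkpoint_to_fp32_state_dict(ckpt_dir, "pytorch_model.bin")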
llama13b_5M/ds_config.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "train_batch_size" : 256,
3
+ "train_micro_batch_size_per_gpu": 1,
4
+ "steps_per_print": 10,
5
+ "gradient_clipping": 1.0,
6
+ "zero_optimization": {
7
+ "stage": 0
8
+ },
9
+ "bf16": {
10
+ "enabled": true,
11
+ "accumulate_grads_via_hooks": true
12
+ },
13
+ "fp16": {"enabled": false},
14
+ "wall_clock_breakdown": false,
15
+ "pipeline": {
16
+ "pipe_partitioned": false,
17
+ "grad_partitioned": false
18
+ }
19
+ }
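For reference, this config requests bf16 with ZeRO stage 0, a micro-batch of 1 per device and a global batch of 256; in these runs it is consumed through Megatron-DeepSpeed's pretrain_llama.py via --deepspeed_config. A minimal, generic sketch of feeding the same JSON to deepspeed.initialize directly (placeholder model and optimizer, not the launch path used here):

    # hedged sketch: toy module and optimizer only; the real runs launch pretrain_llama.py
    import torch
    import deepspeed

    model = torch.nn.Linear(2048, 2048)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        optimizer=optimizer,
        config="ds_config.json",  # the file shown above
    )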
llama13b_5M/first_run.txt ADDED
The diff for this file is too large to render. See raw diff
 
llama13b_5M/log.txt ADDED
The diff for this file is too large to render. See raw diff
 
llama13b_multiling_800M/13-05-2024-09:15:33/ds_config.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "train_batch_size" : 256,
3
+ "train_micro_batch_size_per_gpu": 1,
4
+ "steps_per_print": 10,
5
+ "gradient_clipping": 1.0,
6
+ "zero_optimization": {
7
+ "stage": 0
8
+ },
9
+ "bf16": {
10
+ "enabled": true,
11
+ "accumulate_grads_via_hooks": true
12
+ },
13
+ "fp16": {"enabled": false},
14
+ "wall_clock_breakdown": false,
15
+ "pipeline": {
16
+ "pipe_partitioned": false,
17
+ "grad_partitioned": false
18
+ }
19
+ }
llama13b_multiling_800M/13-05-2024-09:15:33/log.txt ADDED
@@ -0,0 +1,192 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 09:15:34,948] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 09:15:36,265] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 09:15:36,266] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 09:15:36,266] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:15:33/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.175: warnings.warn(
15
+ 100.83.37.175: [2024-05-13 09:15:37,925] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.188: warnings.warn(
18
+ 100.83.37.188: [2024-05-13 09:15:37,963] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 09:15:39,161] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 09:15:39,161] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 09:15:39,161] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 09:15:39,161] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 09:15:39,161] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
25
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
26
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
27
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
28
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
29
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
31
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
32
+ 100.83.37.188: ...done.
33
+ 100.83.37.188: ...done.
34
+ 100.83.37.188: ...done.
35
+ 100.83.37.188: ...done.
36
+ 100.83.37.188: ...done.
37
+ 100.83.37.188: ...done.
38
+ 100.83.37.188: ...done.
39
+ 100.83.37.188: ...done.
40
+ 100.83.37.175: [2024-05-13 09:15:39,324] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
41
+ 100.83.37.175: [2024-05-13 09:15:39,324] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
42
+ 100.83.37.175: [2024-05-13 09:15:39,324] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
43
+ 100.83.37.175: [2024-05-13 09:15:39,324] [INFO] [launch.py:164:main] dist_world_size=16
44
+ 100.83.37.175: [2024-05-13 09:15:39,324] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
45
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
46
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
47
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
48
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
49
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
50
+ 100.83.37.175: ...done.
51
+ 100.83.37.175: ...done.
52
+ 100.83.37.175: ...done.
53
+ 100.83.37.175: ...done.
54
+ 100.83.37.175: ...done.
55
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
56
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
57
+ 100.83.37.175: ...done.
58
+ 100.83.37.175: ...done.
59
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
60
+ 100.83.37.175: ...done.
61
+ 100.83.37.188: [2024-05-13 09:15:40,973] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
62
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
63
+ 100.83.37.188: warnings.warn(
64
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
65
+ 100.83.37.188: warnings.warn(
66
+ 100.83.37.188: [2024-05-13 09:15:40,973] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
67
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
68
+ 100.83.37.188: warnings.warn(
69
+ 100.83.37.188: [2024-05-13 09:15:40,982] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
70
+ 100.83.37.188: [2024-05-13 09:15:41,032] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
71
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
72
+ 100.83.37.188: warnings.warn(
73
+ 100.83.37.175: [2024-05-13 09:15:41,048] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
74
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
75
+ 100.83.37.175: warnings.warn(
76
+ 100.83.37.175: [2024-05-13 09:15:41,067] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
77
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
78
+ 100.83.37.175: warnings.warn(
79
+ 100.83.37.175: [2024-05-13 09:15:41,099] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
80
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
81
+ 100.83.37.175: warnings.warn(
82
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
83
+ 100.83.37.175: warnings.warn(
84
+ 100.83.37.175: [2024-05-13 09:15:41,106] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
85
+ 100.83.37.188: [2024-05-13 09:15:41,108] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
86
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
87
+ 100.83.37.188: warnings.warn(
88
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
89
+ 100.83.37.188: warnings.warn(
90
+ 100.83.37.188: [2024-05-13 09:15:41,111] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
91
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
92
+ 100.83.37.188: warnings.warn(
93
+ 100.83.37.188: [2024-05-13 09:15:41,117] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
94
+ 100.83.37.175: [2024-05-13 09:15:41,139] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
95
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
96
+ 100.83.37.175: warnings.warn(
97
+ 100.83.37.175: [2024-05-13 09:15:41,145] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
98
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
99
+ 100.83.37.175: warnings.warn(
100
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
101
+ 100.83.37.175: warnings.warn(
102
+ 100.83.37.175: [2024-05-13 09:15:41,147] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
103
+ 100.83.37.175: [2024-05-13 09:15:41,152] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
104
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
105
+ 100.83.37.175: warnings.warn(
106
+ 100.83.37.188: [2024-05-13 09:15:41,183] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
107
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
108
+ 100.83.37.188: warnings.warn(
109
+ 100.83.37.188: Traceback (most recent call last):
110
+ 100.83.37.188: Traceback (most recent call last):
111
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
112
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
113
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
114
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
115
+ 100.83.37.188: ImportError: ImportErrorcannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
116
+ 100.83.37.188: : cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
117
+ 100.83.37.188: Traceback (most recent call last):
118
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
119
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
120
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
121
+ 100.83.37.188: Traceback (most recent call last):
122
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
123
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
124
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
125
+ 100.83.37.188: Traceback (most recent call last):
126
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
127
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
128
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
129
+ 100.83.37.188: Traceback (most recent call last):
130
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
131
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
132
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
133
+ 100.83.37.188: Traceback (most recent call last):
134
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
135
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
136
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
137
+ 100.83.37.188: Traceback (most recent call last):
138
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
139
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
140
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
141
+ 100.83.37.175: Traceback (most recent call last):
142
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
143
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
144
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
145
+ 100.83.37.175: Traceback (most recent call last):
146
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
147
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
148
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
149
+ 100.83.37.175: Traceback (most recent call last):
150
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
151
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
152
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
153
+ 100.83.37.175: Traceback (most recent call last):
154
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
155
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
156
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
157
+ 100.83.37.188: [2024-05-13 09:15:43,168] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 115299
158
+ 100.83.37.188: [2024-05-13 09:15:43,170] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 115300
159
+ 100.83.37.188: [2024-05-13 09:15:43,170] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 115301
160
+ 100.83.37.188: [2024-05-13 09:15:43,170] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 115302
161
+ 100.83.37.188: [2024-05-13 09:15:43,171] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 115303
162
+ 100.83.37.188: [2024-05-13 09:15:43,171] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 115304
163
+ 100.83.37.175: Traceback (most recent call last):
164
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
165
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
166
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
167
+ 100.83.37.188: [2024-05-13 09:15:43,198] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 115305
168
+ 100.83.37.175: Traceback (most recent call last):
169
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
170
+ 100.83.37.175: Traceback (most recent call last):
171
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
172
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
173
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
174
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
175
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
176
+ 100.83.37.175: Traceback (most recent call last):
177
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
178
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
179
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
180
+ 100.83.37.188: [2024-05-13 09:15:43,251] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 115306
181
+ 100.83.37.188: [2024-05-13 09:15:43,252] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:15:33/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
182
+ 100.83.37.175: [2024-05-13 09:15:43,331] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 19210
183
+ 100.83.37.175: [2024-05-13 09:15:43,386] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 19211
184
+ 100.83.37.175: [2024-05-13 09:15:43,386] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 19212
185
+ 100.83.37.175: [2024-05-13 09:15:43,413] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 19213
186
+ 100.83.37.175: [2024-05-13 09:15:43,414] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 19214
187
+ 100.83.37.175: [2024-05-13 09:15:43,442] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 19218
188
+ 100.83.37.175: [2024-05-13 09:15:43,494] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 19221
189
+ 100.83.37.175: [2024-05-13 09:15:43,495] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 19225
190
+ 100.83.37.175: [2024-05-13 09:15:43,522] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:15:33/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:15:33/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
191
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
192
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
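Every rank in this attempt dies on the same ImportError before any training work starts: pretrain_llama.py imports LLaMAModel and LLaMAModelPipe from megatron.model, but megatron/model/__init__.py in this checkout does not export those names, so the launcher kills all 16 subprocesses and both pdsh sessions exit with code 1. A purely hypothetical sketch of the re-export pretrain_llama.py relies on (the defining module name is an assumption, not taken from this repo):

    # hypothetical: megatron/model/__init__.py would need to expose these names;
    # the module they are defined in is assumed, not confirmed by this checkout
    from .llama_model import LLaMAModel, LLaMAModelPipe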
llama13b_multiling_800M/13-05-2024-09:15:33/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
1
+ {
2
+ "MODEL": {
3
+ "num_hidden_layers": 24,
4
+ "hidden_size": 2048,
5
+ "num_attention_heads": 32,
6
+ "intermediate_size": 4096,
7
+ "vocab_size":VOCAB_SIZE
8
+ },
9
+ "LAYER_MAPPINGS" : {
10
+ "word_embeddings": 1,
11
+ "transformer": [3, 26],
12
+ "final_layernorm": 28,
13
+ "final_word_embeddings": 29
14
+ },
15
+ "FULL_NAME_MAPPINGS": {
16
+ },
17
+ "PARTIAL_NAME_MAPPINGS": {
18
+ "final_word_embeddings": {
19
+ "vocab_parallel_projection": "lm_head"
20
+ },
21
+ "final_layernorm": {
22
+ "final_rmsnorm": "model.norm"
23
+ },
24
+ "word_embeddings": {
25
+ "word_embeddings": "model.embed_tokens"
26
+ },
27
+ "transformer": {
28
+ "dense_h_to_4h": "mlp.gate_proj",
29
+ "dense_4h_to_h": "mlp.down_proj",
30
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
31
+ "post_attention_layernorm": "post_attention_layernorm",
32
+ "input_layernorm": "input_layernorm",
33
+ "dense": "self_attn.o_proj",
34
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
35
+ }
36
+ },
37
+ "SPECIAL": {
38
+ "query_key_value": "attention_qkv"
39
+ }
40
+ }
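This mapping drives the Megatron-DeepSpeed to Hugging Face rename: LAYER_MAPPINGS locates the word embedding, the 24 transformer layers (checkpoint layers 3 to 26), the final RMSNorm and the output head, while PARTIAL_NAME_MAPPINGS rewrites substrings of each weight name (with query_key_value additionally split into q/k/v projections, as flagged under SPECIAL). A minimal sketch of the substring-renaming idea only, assuming a flat state_dict keyed by the Megatron-side names; the rename_keys helper is hypothetical, not the converter shipped with the repo, which also has to split the fused QKV weight and substitute the VOCAB_SIZE placeholder:

    # hedged illustration; rename_keys is hypothetical, and ordering/exact-match details
    # of the real converter are glossed over
    import json

    def rename_keys(state_dict, partial_maps):
        out = {}
        for name, tensor in state_dict.items():
            new_name = name
            for section in partial_maps.values():
                for src, dst in section.items():
                    if isinstance(dst, str):  # skip the nested query_key_value split
                        new_name = new_name.replace(src, dst)
            out[new_name] = tensor
        return out

    with open("mds_to_hf_llama_custom.json") as f:
        # the file is a template: VOCAB_SIZE must be substituted before it parses as JSON
        text = f.read().replace("VOCAB_SIZE", "32000")  # 32000 is an assumed placeholder value
    partial_maps = json.loads(text)["PARTIAL_NAME_MAPPINGS"]
    # hf_state_dict = rename_keys(megatron_state_dict, partial_maps)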
llama13b_multiling_800M/13-05-2024-09:17:37/ds_config.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "train_batch_size" : 256,
3
+ "train_micro_batch_size_per_gpu": 1,
4
+ "steps_per_print": 10,
5
+ "gradient_clipping": 1.0,
6
+ "zero_optimization": {
7
+ "stage": 0
8
+ },
9
+ "bf16": {
10
+ "enabled": true,
11
+ "accumulate_grads_via_hooks": true
12
+ },
13
+ "fp16": {"enabled": false},
14
+ "wall_clock_breakdown": false,
15
+ "pipeline": {
16
+ "pipe_partitioned": false,
17
+ "grad_partitioned": false
18
+ }
19
+ }
llama13b_multiling_800M/13-05-2024-09:17:37/log.txt ADDED
@@ -0,0 +1,192 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 09:17:39,563] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 09:17:40,878] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 09:17:40,878] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 09:17:40,878] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:17:37/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.188: warnings.warn(
15
+ 100.83.37.188: [2024-05-13 09:17:42,581] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.175: warnings.warn(
18
+ 100.83.37.175: [2024-05-13 09:17:42,604] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 09:17:43,708] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 09:17:43,709] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 09:17:43,709] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 09:17:43,709] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 09:17:43,709] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
25
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
26
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
27
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
28
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
29
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
31
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
32
+ 100.83.37.188: ...done.
33
+ 100.83.37.188: ...done.
34
+ 100.83.37.188: ...done.
35
+ 100.83.37.188: ...done.
36
+ 100.83.37.188: ...done.
37
+ 100.83.37.188: ...done.
38
+ 100.83.37.188: ...done.
39
+ 100.83.37.188: ...done.
40
+ 100.83.37.175: [2024-05-13 09:17:44,192] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
41
+ 100.83.37.175: [2024-05-13 09:17:44,192] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
42
+ 100.83.37.175: [2024-05-13 09:17:44,192] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
43
+ 100.83.37.175: [2024-05-13 09:17:44,192] [INFO] [launch.py:164:main] dist_world_size=16
44
+ 100.83.37.175: [2024-05-13 09:17:44,192] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
45
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
46
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
47
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
48
+ 100.83.37.175: ...done.
49
+ 100.83.37.175: ...done.
50
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
51
+ 100.83.37.175: ...done.
52
+ 100.83.37.175: ...done.
53
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
54
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
55
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
56
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
57
+ 100.83.37.175: ...done.
58
+ 100.83.37.175: ...done.
59
+ 100.83.37.175: ...done.
60
+ 100.83.37.175: ...done.
61
+ 100.83.37.188: [2024-05-13 09:17:45,563] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
62
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
63
+ 100.83.37.188: warnings.warn(
64
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
65
+ 100.83.37.188: warnings.warn(
66
+ 100.83.37.188: [2024-05-13 09:17:45,570] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
67
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
68
+ 100.83.37.188: warnings.warn(
69
+ 100.83.37.188: [2024-05-13 09:17:45,591] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
70
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
71
+ 100.83.37.188: warnings.warn(
72
+ 100.83.37.188: [2024-05-13 09:17:45,601] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
73
+ 100.83.37.188: [2024-05-13 09:17:45,602] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
74
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
75
+ 100.83.37.188: warnings.warn(
76
+ 100.83.37.188: [2024-05-13 09:17:45,653] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
77
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
78
+ 100.83.37.188: warnings.warn(
79
+ 100.83.37.188: [2024-05-13 09:17:45,657] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
80
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
81
+ 100.83.37.188: warnings.warn(
82
+ 100.83.37.188: [2024-05-13 09:17:45,693] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
83
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
84
+ 100.83.37.188: warnings.warn(
85
+ 100.83.37.175: [2024-05-13 09:17:45,897] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
86
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
87
+ 100.83.37.175: warnings.warn(
88
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
89
+ 100.83.37.175: warnings.warn(
90
+ 100.83.37.175: [2024-05-13 09:17:45,898] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
91
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
92
+ 100.83.37.175: warnings.warn(
93
+ 100.83.37.175: [2024-05-13 09:17:45,908] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
94
+ 100.83.37.175: [2024-05-13 09:17:45,958] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
95
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
96
+ 100.83.37.175: warnings.warn(
97
+ 100.83.37.175: [2024-05-13 09:17:46,008] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
98
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
99
+ 100.83.37.175: warnings.warn(
100
+ 100.83.37.175: [2024-05-13 09:17:46,041] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
101
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
102
+ 100.83.37.175: warnings.warn(
103
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
104
+ 100.83.37.175: warnings.warn(
105
+ 100.83.37.175: [2024-05-13 09:17:46,042] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
106
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
107
+ 100.83.37.175: warnings.warn(
108
+ 100.83.37.175: [2024-05-13 09:17:46,045] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
109
+ 100.83.37.188: Traceback (most recent call last):
110
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
111
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
112
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
113
+ 100.83.37.188: Traceback (most recent call last):
114
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
115
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
116
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
117
+ 100.83.37.188: Traceback (most recent call last):
118
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
119
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
120
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
121
+ 100.83.37.188: Traceback (most recent call last):
122
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
123
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
124
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
125
+ 100.83.37.188: Traceback (most recent call last):
126
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
127
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
128
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
129
+ 100.83.37.188: Traceback (most recent call last):
130
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
131
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
132
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
133
+ 100.83.37.188: Traceback (most recent call last):
134
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
135
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
136
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
137
+ 100.83.37.188: Traceback (most recent call last):
138
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
139
+ 100.83.37.188: from megatron.model import LLaMAModel, LLaMAModelPipe
140
+ 100.83.37.188: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
141
+ 100.83.37.175: Traceback (most recent call last):
142
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
143
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
144
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
145
+ 100.83.37.175: Traceback (most recent call last):
146
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
147
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
148
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
149
+ 100.83.37.175: Traceback (most recent call last):
150
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
151
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
152
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
153
+ 100.83.37.188: [2024-05-13 09:17:47,715] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 116640
154
+ 100.83.37.188: [2024-05-13 09:17:47,717] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 116641
155
+ 100.83.37.175: Traceback (most recent call last):
156
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
157
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
158
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
159
+ 100.83.37.188: [2024-05-13 09:17:47,744] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 116642
160
+ 100.83.37.175: Traceback (most recent call last):
161
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
162
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
163
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
164
+ 100.83.37.188: [2024-05-13 09:17:47,771] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 116643
165
+ 100.83.37.188: [2024-05-13 09:17:47,771] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 116644
166
+ 100.83.37.188: [2024-05-13 09:17:47,772] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 116645
167
+ 100.83.37.188: [2024-05-13 09:17:47,799] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 116646
168
+ 100.83.37.188: [2024-05-13 09:17:47,799] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 116647
169
+ 100.83.37.188: [2024-05-13 09:17:47,826] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:17:37/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
170
+ 100.83.37.175: Traceback (most recent call last):
171
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
172
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
173
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
174
+ 100.83.37.175: Traceback (most recent call last):
175
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
176
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
177
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
178
+ 100.83.37.175: Traceback (most recent call last):
179
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 25, in <module>
180
+ 100.83.37.175: from megatron.model import LLaMAModel, LLaMAModelPipe
181
+ 100.83.37.175: ImportError: cannot import name 'LLaMAModel' from 'megatron.model' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
182
+ 100.83.37.175: [2024-05-13 09:17:48,200] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 20742
183
+ 100.83.37.175: [2024-05-13 09:17:48,201] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 20743
184
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
185
+ 100.83.37.175: [2024-05-13 09:17:48,229] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 20744
186
+ 100.83.37.175: [2024-05-13 09:17:48,229] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 20745
187
+ 100.83.37.175: [2024-05-13 09:17:48,229] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 20749
188
+ 100.83.37.175: [2024-05-13 09:17:48,230] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 20753
189
+ 100.83.37.175: [2024-05-13 09:17:48,230] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 20755
190
+ 100.83.37.175: [2024-05-13 09:17:48,257] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 20760
191
+ 100.83.37.175: [2024-05-13 09:17:48,310] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:17:37/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:17:37/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
192
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
llama13b_multiling_800M/13-05-2024-09:17:37/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "MODEL": {
+ "num_hidden_layers": 24,
+ "hidden_size": 2048,
+ "num_attention_heads": 32,
+ "intermediate_size": 4096,
+ "vocab_size":VOCAB_SIZE
+ },
+ "LAYER_MAPPINGS" : {
+ "word_embeddings": 1,
+ "transformer": [3, 26],
+ "final_layernorm": 28,
+ "final_word_embeddings": 29
+ },
+ "FULL_NAME_MAPPINGS": {
+ },
+ "PARTIAL_NAME_MAPPINGS": {
+ "final_word_embeddings": {
+ "vocab_parallel_projection": "lm_head"
+ },
+ "final_layernorm": {
+ "final_rmsnorm": "model.norm"
+ },
+ "word_embeddings": {
+ "word_embeddings": "model.embed_tokens"
+ },
+ "transformer": {
+ "dense_h_to_4h": "mlp.gate_proj",
+ "dense_4h_to_h": "mlp.down_proj",
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
+ "post_attention_layernorm": "post_attention_layernorm",
+ "input_layernorm": "input_layernorm",
+ "dense": "self_attn.o_proj",
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
+ }
+ },
+ "SPECIAL": {
+ "query_key_value": "attention_qkv"
+ }
+ }
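The mds_to_hf_llama_custom.json above pairs Megatron-DeepSpeed parameter names with Hugging Face LLaMA names (for example dense_h_to_4h -> mlp.gate_proj, with the fused query_key_value weight split into separate q/k/v projections via the SPECIAL section). As a rough, hypothetical sketch only — the run relies on the repository's own Megatron-to-HF conversion tooling, and the literal "vocab_size":VOCAB_SIZE placeholder means the raw file is not valid JSON until that value is substituted — the string-valued PARTIAL_NAME_MAPPINGS entries can be read as substring rewrites on checkpoint key names:

    # Hypothetical illustration, not the converter used in this run.
    # Excerpt of the string-valued PARTIAL_NAME_MAPPINGS entries above.
    partial_name_mappings = {
        "transformer": {
            "dense_h_to_4h": "mlp.gate_proj",
            "dense_4h_to_h": "mlp.down_proj",
            "dense_h_to_4h_swiglu": "mlp.up_proj",
            "dense": "self_attn.o_proj",
        },
        "word_embeddings": {"word_embeddings": "model.embed_tokens"},
    }

    def rename_key(key: str) -> str:
        """Rewrite one Megatron-style checkpoint key into HF-LLaMA style."""
        for section in partial_name_mappings.values():
            # Longest names first so "dense_h_to_4h_swiglu" is not shadowed
            # by its prefix "dense_h_to_4h" (or by plain "dense").
            for megatron_name, hf_name in sorted(section.items(), key=lambda kv: -len(kv[0])):
                if megatron_name in key:
                    return key.replace(megatron_name, hf_name)
        return key

    print(rename_key("dense_h_to_4h.weight"))
    # -> mlp.gate_proj.weight  (layer-index remapping via LAYER_MAPPINGS not shown)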
llama13b_multiling_800M/13-05-2024-09:19:04/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "train_batch_size" : 256,
+ "train_micro_batch_size_per_gpu": 1,
+ "steps_per_print": 10,
+ "gradient_clipping": 1.0,
+ "zero_optimization": {
+ "stage": 0
+ },
+ "bf16": {
+ "enabled": true,
+ "accumulate_grads_via_hooks": true
+ },
+ "fp16": {"enabled": false},
+ "wall_clock_breakdown": false,
+ "pipeline": {
+ "pipe_partitioned": false,
+ "grad_partitioned": false
+ }
+ }
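This ds_config.json mirrors the launcher flags in the logs: ZeRO stage 0, bf16 enabled (fp16 off), gradient clipping 1.0, and a global batch of 256 with a micro-batch of 1 per device. Purely as an illustrative sketch — the run consumes this file through pretrain_llama.py's --deepspeed_config flag, not through standalone code — a config like this is normally handed to deepspeed.initialize either as a path or as a parsed dict:

    import json
    import torch
    import deepspeed

    # Illustrative only; a placeholder module stands in for the Megatron LLaMA model.
    model = torch.nn.Linear(8, 8)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)  # run uses adamw, lr 3e-4

    with open("ds_config.json") as f:  # the file shown above
        ds_config = json.load(f)

    # DeepSpeed wraps the model/optimizer according to the config
    # (bf16, ZeRO stage 0, batch sizing, gradient clipping).
    model_engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        optimizer=optimizer,
        config=ds_config,
    )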
llama13b_multiling_800M/13-05-2024-09:19:04/log.txt ADDED
@@ -0,0 +1,338 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 09:19:05,899] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 09:19:07,222] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 09:19:07,222] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 09:19:07,222] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:19:04/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.188: warnings.warn(
15
+ 100.83.37.188: [2024-05-13 09:19:08,928] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.175: warnings.warn(
18
+ 100.83.37.175: [2024-05-13 09:19:08,948] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 09:19:10,060] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 09:19:10,061] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 09:19:10,061] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 09:19:10,061] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 09:19:10,061] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
25
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
26
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
27
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
28
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
29
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
31
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
32
+ 100.83.37.188: ...done.
33
+ 100.83.37.188: ...done.
34
+ 100.83.37.188: ...done.
35
+ 100.83.37.188: ...done.
36
+ 100.83.37.188: ...done.
37
+ 100.83.37.188: ...done.
38
+ 100.83.37.188: ...done.
39
+ 100.83.37.188: ...done.
40
+ 100.83.37.175: [2024-05-13 09:19:10,515] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
41
+ 100.83.37.175: [2024-05-13 09:19:10,515] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
42
+ 100.83.37.175: [2024-05-13 09:19:10,515] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
43
+ 100.83.37.175: [2024-05-13 09:19:10,515] [INFO] [launch.py:164:main] dist_world_size=16
44
+ 100.83.37.175: [2024-05-13 09:19:10,515] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
45
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
46
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
47
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
48
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
49
+ 100.83.37.175: ...done.
50
+ 100.83.37.175: ...done.
51
+ 100.83.37.175: ...done.
52
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
53
+ 100.83.37.175: ...done.
54
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
55
+ 100.83.37.175: ...done.
56
+ 100.83.37.175: ...done.
57
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
58
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
59
+ 100.83.37.175: ...done.
60
+ 100.83.37.175: ...done.
61
+ 100.83.37.188: [2024-05-13 09:19:11,871] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
62
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
63
+ 100.83.37.188: warnings.warn(
64
+ 100.83.37.188: [2024-05-13 09:19:11,885] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
65
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
66
+ 100.83.37.188: warnings.warn(
67
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
68
+ 100.83.37.188: warnings.warn(
69
+ 100.83.37.188: [2024-05-13 09:19:11,891] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
70
+ 100.83.37.188: [2024-05-13 09:19:11,941] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
71
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
72
+ 100.83.37.188: warnings.warn(
73
+ 100.83.37.188: [2024-05-13 09:19:11,983] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
74
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
75
+ 100.83.37.188: warnings.warn(
76
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
77
+ 100.83.37.188: warnings.warn(
78
+ 100.83.37.188: [2024-05-13 09:19:11,989] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
79
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
80
+ 100.83.37.188: warnings.warn(
81
+ 100.83.37.188: [2024-05-13 09:19:11,991] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
82
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
83
+ 100.83.37.188: warnings.warn(
84
+ 100.83.37.188: [2024-05-13 09:19:11,998] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
85
+ 100.83.37.175: [2024-05-13 09:19:12,271] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
86
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
87
+ 100.83.37.175: warnings.warn(
88
+ 100.83.37.175: [2024-05-13 09:19:12,293] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
89
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
90
+ 100.83.37.175: warnings.warn(
91
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
92
+ 100.83.37.175: warnings.warn(
93
+ 100.83.37.175: [2024-05-13 09:19:12,303] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
94
+ 100.83.37.175: [2024-05-13 09:19:12,347] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
95
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
96
+ 100.83.37.175: warnings.warn(
97
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
98
+ 100.83.37.175: warnings.warn(
99
+ 100.83.37.175: [2024-05-13 09:19:12,348] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
100
+ 100.83.37.175: [2024-05-13 09:19:12,356] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
101
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
102
+ 100.83.37.175: warnings.warn(
103
+ 100.83.37.175: [2024-05-13 09:19:12,379] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
104
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
105
+ 100.83.37.175: warnings.warn(
106
+ 100.83.37.175: [2024-05-13 09:19:12,820] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
107
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
108
+ 100.83.37.175: warnings.warn(
109
+ 100.83.37.188: Traceback (most recent call last):
110
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
111
+ 100.83.37.188: from megatron import get_args
112
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
113
+ 100.83.37.188: from .initialize import initialize_megatron
114
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
115
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
116
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
117
+ 100.83.37.188: from megatron.model.utils import init_method_normal
118
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
119
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
120
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
121
+ 100.83.37.188: from .utils import init_method_normal, scaled_init_method_normal, WrapName
122
+ 100.83.37.188: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
123
+ 100.83.37.188: Traceback (most recent call last):
124
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
125
+ 100.83.37.188: from megatron import get_args
126
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
127
+ 100.83.37.188: from .initialize import initialize_megatron
128
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
129
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
130
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
131
+ 100.83.37.188: from megatron.model.utils import init_method_normal
132
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
133
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
134
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
135
+ 100.83.37.188: from .utils import init_method_normal, scaled_init_method_normal, WrapName
136
+ 100.83.37.188: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
137
+ 100.83.37.188: Traceback (most recent call last):
138
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
139
+ 100.83.37.188: from megatron import get_args
140
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
141
+ 100.83.37.188: from .initialize import initialize_megatron
142
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
143
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
144
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
145
+ 100.83.37.188: from megatron.model.utils import init_method_normal
146
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
147
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
148
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
149
+ 100.83.37.188: from .utils import init_method_normal, scaled_init_method_normal, WrapName
150
+ 100.83.37.188: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
151
+ 100.83.37.188: Traceback (most recent call last):
152
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
153
+ 100.83.37.188: from megatron import get_args
154
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
155
+ 100.83.37.188: from .initialize import initialize_megatron
156
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
157
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
158
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
159
+ 100.83.37.188: from megatron.model.utils import init_method_normal
160
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
161
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
162
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
163
+ 100.83.37.188: from .utils import init_method_normal, scaled_init_method_normal, WrapName
164
+ 100.83.37.188: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
165
+ 100.83.37.188: Traceback (most recent call last):
166
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
167
+ 100.83.37.188: from megatron import get_args
168
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
169
+ 100.83.37.188: from .initialize import initialize_megatron
170
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
171
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
172
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
173
+ 100.83.37.188: from megatron.model.utils import init_method_normal
174
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
175
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
176
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
177
+ 100.83.37.188: from .utils import init_method_normal, scaled_init_method_normal, WrapName
178
+ 100.83.37.188: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
179
+ 100.83.37.188: Traceback (most recent call last):
180
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
181
+ 100.83.37.188: from megatron import get_args
182
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
183
+ 100.83.37.188: from .initialize import initialize_megatron
184
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
185
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
186
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
187
+ 100.83.37.188: from megatron.model.utils import init_method_normal
188
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
189
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
190
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
191
+ 100.83.37.188: from .utils import init_method_normal, scaled_init_method_normal, WrapName
192
+ 100.83.37.188: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
193
+ 100.83.37.188: Traceback (most recent call last):
194
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
195
+ 100.83.37.188: from megatron import get_args
196
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
197
+ 100.83.37.188: from .initialize import initialize_megatron
198
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
199
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
200
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
201
+ 100.83.37.188: from megatron.model.utils import init_method_normal
202
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
203
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
204
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
205
+ 100.83.37.188: from .utils import init_method_normal, scaled_init_method_normal, WrapName
206
+ 100.83.37.188: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
207
+ 100.83.37.188: Traceback (most recent call last):
208
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
209
+ 100.83.37.188: from megatron import get_args
210
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
211
+ 100.83.37.188: from .initialize import initialize_megatron
212
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
213
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
214
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
215
+ 100.83.37.188: from megatron.model.utils import init_method_normal
216
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
217
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
218
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
219
+ 100.83.37.188: from .utils import init_method_normal, scaled_init_method_normal, WrapName
220
+ 100.83.37.188: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
221
+ 100.83.37.175: Traceback (most recent call last):
222
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
223
+ 100.83.37.175: from megatron import get_args
224
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
225
+ 100.83.37.175: from .initialize import initialize_megatron
226
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
227
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
228
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
229
+ 100.83.37.175: from megatron.model.utils import init_method_normal
230
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
231
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
232
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
233
+ 100.83.37.175: from .utils import init_method_normal, scaled_init_method_normal, WrapName
234
+ 100.83.37.175: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
235
+ 100.83.37.188: [2024-05-13 09:19:14,067] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 117981
236
+ 100.83.37.188: [2024-05-13 09:19:14,069] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 117982
237
+ 100.83.37.188: [2024-05-13 09:19:14,069] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 117983
238
+ 100.83.37.188: [2024-05-13 09:19:14,069] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 117984
239
+ 100.83.37.188: [2024-05-13 09:19:14,070] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 117985
240
+ 100.83.37.188: [2024-05-13 09:19:14,070] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 117986
241
+ 100.83.37.188: [2024-05-13 09:19:14,070] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 117987
242
+ 100.83.37.188: [2024-05-13 09:19:14,070] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 117988
243
+ 100.83.37.188: [2024-05-13 09:19:14,071] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:19:04/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
244
+ 100.83.37.175: Traceback (most recent call last):
245
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
246
+ 100.83.37.175: from megatron import get_args
247
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
248
+ 100.83.37.175: from .initialize import initialize_megatron
249
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
250
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
251
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
252
+ 100.83.37.175: from megatron.model.utils import init_method_normal
253
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
254
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
255
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
256
+ 100.83.37.175: from .utils import init_method_normal, scaled_init_method_normal, WrapName
257
+ 100.83.37.175: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
258
+ 100.83.37.175: Traceback (most recent call last):
259
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
260
+ 100.83.37.175: from megatron import get_args
261
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
262
+ 100.83.37.175: from .initialize import initialize_megatron
263
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
264
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
265
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
266
+ 100.83.37.175: from megatron.model.utils import init_method_normal
267
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
268
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
269
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
270
+ 100.83.37.175: from .utils import init_method_normal, scaled_init_method_normal, WrapName
271
+ 100.83.37.175: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
272
+ 100.83.37.175: Traceback (most recent call last):
273
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
274
+ 100.83.37.175: from megatron import get_args
275
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
276
+ 100.83.37.175: from .initialize import initialize_megatron
277
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
278
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
279
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
280
+ 100.83.37.175: from megatron.model.utils import init_method_normal
281
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
282
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
283
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
284
+ 100.83.37.175: from .utils import init_method_normal, scaled_init_method_normal, WrapName
285
+ 100.83.37.175: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
286
+ 100.83.37.175: Traceback (most recent call last):
287
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
288
+ 100.83.37.175: from megatron import get_args
289
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
290
+ 100.83.37.175: from .initialize import initialize_megatron
291
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
292
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
293
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
294
+ 100.83.37.175: from megatron.model.utils import init_method_normal
295
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
296
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
297
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
298
+ 100.83.37.175: from .utils import init_method_normal, scaled_init_method_normal, WrapName
299
+ 100.83.37.175: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
300
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
301
+ 100.83.37.175: Traceback (most recent call last):
302
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
303
+ 100.83.37.175: from megatron import get_args
304
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
305
+ 100.83.37.175: from .initialize import initialize_megatron
306
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
307
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
308
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
309
+ 100.83.37.175: from megatron.model.utils import init_method_normal
310
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
311
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
312
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
313
+ 100.83.37.175: from .utils import init_method_normal, scaled_init_method_normal, WrapName
314
+ 100.83.37.175: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
315
+ 100.83.37.175: Traceback (most recent call last):
316
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
317
+ 100.83.37.175: from megatron import get_args
318
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
319
+ 100.83.37.175: from .initialize import initialize_megatron
320
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
321
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
322
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
323
+ 100.83.37.175: from megatron.model.utils import init_method_normal
324
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
325
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
326
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 29, in <module>
327
+ 100.83.37.175: from .utils import init_method_normal, scaled_init_method_normal, WrapName
328
+ 100.83.37.175: ImportError: cannot import name 'WrapName' from 'megatron.model.utils' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/utils.py)
329
+ 100.83.37.175: [2024-05-13 09:19:14,523] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 22272
330
+ 100.83.37.175: [2024-05-13 09:19:14,551] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 22273
331
+ 100.83.37.175: [2024-05-13 09:19:14,551] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 22274
332
+ 100.83.37.175: [2024-05-13 09:19:14,605] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 22275
333
+ 100.83.37.175: [2024-05-13 09:19:14,632] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 22276
334
+ 100.83.37.175: [2024-05-13 09:19:14,685] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 22280
335
+ 100.83.37.175: [2024-05-13 09:19:14,686] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 22286
336
+ 100.83.37.175: [2024-05-13 09:19:14,738] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 22288
337
+ 100.83.37.175: [2024-05-13 09:19:14,739] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:19:04/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:19:04/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
338
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
llama13b_multiling_800M/13-05-2024-09:19:04/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "MODEL": {
+ "num_hidden_layers": 24,
+ "hidden_size": 2048,
+ "num_attention_heads": 32,
+ "intermediate_size": 4096,
+ "vocab_size":VOCAB_SIZE
+ },
+ "LAYER_MAPPINGS" : {
+ "word_embeddings": 1,
+ "transformer": [3, 26],
+ "final_layernorm": 28,
+ "final_word_embeddings": 29
+ },
+ "FULL_NAME_MAPPINGS": {
+ },
+ "PARTIAL_NAME_MAPPINGS": {
+ "final_word_embeddings": {
+ "vocab_parallel_projection": "lm_head"
+ },
+ "final_layernorm": {
+ "final_rmsnorm": "model.norm"
+ },
+ "word_embeddings": {
+ "word_embeddings": "model.embed_tokens"
+ },
+ "transformer": {
+ "dense_h_to_4h": "mlp.gate_proj",
+ "dense_4h_to_h": "mlp.down_proj",
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
+ "post_attention_layernorm": "post_attention_layernorm",
+ "input_layernorm": "input_layernorm",
+ "dense": "self_attn.o_proj",
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
+ }
+ },
+ "SPECIAL": {
+ "query_key_value": "attention_qkv"
+ }
+ }
llama13b_multiling_800M/13-05-2024-09:21:14/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "train_batch_size" : 256,
+ "train_micro_batch_size_per_gpu": 1,
+ "steps_per_print": 10,
+ "gradient_clipping": 1.0,
+ "zero_optimization": {
+ "stage": 0
+ },
+ "bf16": {
+ "enabled": true,
+ "accumulate_grads_via_hooks": true
+ },
+ "fp16": {"enabled": false},
+ "wall_clock_breakdown": false,
+ "pipeline": {
+ "pipe_partitioned": false,
+ "grad_partitioned": false
+ }
+ }
llama13b_multiling_800M/13-05-2024-09:21:14/log.txt ADDED
@@ -0,0 +1,352 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 09:21:16,057] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 09:21:17,379] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 09:21:17,380] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 09:21:17,380] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:21:14/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.175: warnings.warn(
15
+ 100.83.37.175: [2024-05-13 09:21:19,106] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.188: warnings.warn(
18
+ 100.83.37.188: [2024-05-13 09:21:19,149] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 09:21:20,287] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 09:21:20,287] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 09:21:20,287] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 09:21:20,287] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 09:21:20,287] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
25
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
26
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
27
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
28
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
29
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.188: ...done.
31
+ 100.83.37.188: ...done.
32
+ 100.83.37.188: ...done.
33
+ 100.83.37.188: ...done.
34
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
35
+ 100.83.37.188: ...done.
36
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
37
+ 100.83.37.188: ...done.
38
+ 100.83.37.188: ...done.
39
+ 100.83.37.188: ...done.
40
+ 100.83.37.175: [2024-05-13 09:21:20,675] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
41
+ 100.83.37.175: [2024-05-13 09:21:20,675] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
42
+ 100.83.37.175: [2024-05-13 09:21:20,675] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
43
+ 100.83.37.175: [2024-05-13 09:21:20,675] [INFO] [launch.py:164:main] dist_world_size=16
44
+ 100.83.37.175: [2024-05-13 09:21:20,675] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
45
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
46
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
47
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
48
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
49
+ 100.83.37.175: ...done.
50
+ 100.83.37.175: ...done.
51
+ 100.83.37.175: ...done.
52
+ 100.83.37.175: ...done.
53
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
54
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
55
+ 100.83.37.175: ...done.
56
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
57
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
58
+ 100.83.37.175: ...done.
59
+ 100.83.37.175: ...done.
60
+ 100.83.37.175: ...done.
61
+ 100.83.37.188: [2024-05-13 09:21:22,063] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
62
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
63
+ 100.83.37.188: warnings.warn(
64
+ 100.83.37.188: [2024-05-13 09:21:22,131] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
65
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
66
+ 100.83.37.188: warnings.warn(
67
+ 100.83.37.188: [2024-05-13 09:21:22,159] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
68
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
69
+ 100.83.37.188: warnings.warn(
70
+ 100.83.37.188: [2024-05-13 09:21:22,173] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
71
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
72
+ 100.83.37.188: warnings.warn(
73
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
74
+ 100.83.37.188: warnings.warn(
75
+ 100.83.37.188: [2024-05-13 09:21:22,190] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
76
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
77
+ 100.83.37.188: warnings.warn(
78
+ 100.83.37.188: [2024-05-13 09:21:22,196] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
79
+ 100.83.37.188: [2024-05-13 09:21:22,200] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
80
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
81
+ 100.83.37.188: warnings.warn(
82
+ 100.83.37.188: [2024-05-13 09:21:22,243] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
83
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
84
+ 100.83.37.188: warnings.warn(
85
+ 100.83.37.175: [2024-05-13 09:21:22,365] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
86
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
87
+ 100.83.37.175: warnings.warn(
88
+ 100.83.37.175: [2024-05-13 09:21:22,427] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
89
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
90
+ 100.83.37.175: warnings.warn(
91
+ 100.83.37.175: [2024-05-13 09:21:22,438] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
92
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
93
+ 100.83.37.175: warnings.warn(
94
+ 100.83.37.175: [2024-05-13 09:21:22,458] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
95
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
96
+ 100.83.37.175: warnings.warn(
97
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
98
+ 100.83.37.175: warnings.warn(
99
+ 100.83.37.175: [2024-05-13 09:21:22,458] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
100
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
101
+ 100.83.37.175: warnings.warn(
102
+ 100.83.37.175: [2024-05-13 09:21:22,504] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
103
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
104
+ 100.83.37.175: warnings.warn(
105
+ 100.83.37.175: [2024-05-13 09:21:22,506] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
106
+ 100.83.37.175: [2024-05-13 09:21:22,543] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
107
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
108
+ 100.83.37.175: warnings.warn(
109
+ 100.83.37.188: Traceback (most recent call last):
110
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
111
+ 100.83.37.188: from megatron import get_args
112
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
113
+ 100.83.37.188: from .initialize import initialize_megatron
114
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
115
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
116
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
117
+ 100.83.37.188: from megatron.model.utils import init_method_normal
118
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
119
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
120
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
121
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
122
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
123
+ 100.83.37.188: Traceback (most recent call last):
124
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
125
+ 100.83.37.188: from megatron import get_args
126
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
127
+ 100.83.37.188: from .initialize import initialize_megatron
128
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
129
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
130
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
131
+ 100.83.37.188: from megatron.model.utils import init_method_normal
132
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
133
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
134
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
135
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
136
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
137
+ 100.83.37.188: Traceback (most recent call last):
138
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
139
+ 100.83.37.188: from megatron import get_args
140
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
141
+ 100.83.37.188: from .initialize import initialize_megatron
142
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
143
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
144
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
145
+ 100.83.37.188: from megatron.model.utils import init_method_normal
146
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
147
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
148
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
149
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
150
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
151
+ 100.83.37.188: Traceback (most recent call last):
152
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
153
+ 100.83.37.188: from megatron import get_args
154
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
155
+ 100.83.37.188: from .initialize import initialize_megatron
156
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
157
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
158
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
159
+ 100.83.37.188: from megatron.model.utils import init_method_normal
160
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
161
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
162
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
163
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
164
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
165
+ 100.83.37.188: Traceback (most recent call last):
166
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
167
+ 100.83.37.188: from megatron import get_args
168
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
169
+ 100.83.37.188: from .initialize import initialize_megatron
170
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
171
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
172
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
173
+ 100.83.37.188: from megatron.model.utils import init_method_normal
174
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
175
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
176
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
177
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
178
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
179
+ 100.83.37.188: Traceback (most recent call last):
180
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
181
+ 100.83.37.188: from megatron import get_args
182
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
183
+ 100.83.37.188: from .initialize import initialize_megatron
184
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
185
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
186
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
187
+ 100.83.37.188: from megatron.model.utils import init_method_normal
188
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
189
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
190
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
191
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
192
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
193
+ 100.83.37.188: Traceback (most recent call last):
194
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
195
+ 100.83.37.188: from megatron import get_args
196
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
197
+ 100.83.37.188: from .initialize import initialize_megatron
198
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
199
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
200
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
201
+ 100.83.37.188: from megatron.model.utils import init_method_normal
202
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
203
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
204
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
205
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
206
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
207
+ 100.83.37.188: Traceback (most recent call last):
208
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
209
+ 100.83.37.188: from megatron import get_args
210
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
211
+ 100.83.37.188: from .initialize import initialize_megatron
212
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
213
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
214
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
215
+ 100.83.37.188: from megatron.model.utils import init_method_normal
216
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
217
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
218
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
219
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
220
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
221
+ 100.83.37.175: Traceback (most recent call last):
222
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
223
+ 100.83.37.175: from megatron import get_args
224
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
225
+ 100.83.37.175: from .initialize import initialize_megatron
226
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
227
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
228
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
229
+ 100.83.37.175: from megatron.model.utils import init_method_normal
230
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
231
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
232
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
233
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
234
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
235
+ 100.83.37.175: Traceback (most recent call last):
236
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
237
+ 100.83.37.175: from megatron import get_args
238
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
239
+ 100.83.37.175: from .initialize import initialize_megatron
240
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
241
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
242
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
243
+ 100.83.37.175: from megatron.model.utils import init_method_normal
244
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
245
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
246
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
247
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
248
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
249
+ 100.83.37.175: Traceback (most recent call last):
250
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
251
+ 100.83.37.175: from megatron import get_args
252
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
253
+ 100.83.37.175: from .initialize import initialize_megatron
254
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
255
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
256
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
257
+ 100.83.37.175: from megatron.model.utils import init_method_normal
258
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
259
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
260
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
261
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
262
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
263
+ 100.83.37.175: Traceback (most recent call last):
264
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
265
+ 100.83.37.175: from megatron import get_args
266
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
267
+ 100.83.37.175: from .initialize import initialize_megatron
268
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
269
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
270
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
271
+ 100.83.37.175: from megatron.model.utils import init_method_normal
272
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
273
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
274
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
275
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
276
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
277
+ 100.83.37.175: Traceback (most recent call last):
278
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
279
+ 100.83.37.175: from megatron import get_args
280
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
281
+ 100.83.37.175: from .initialize import initialize_megatron
282
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
283
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
284
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
285
+ 100.83.37.175: from megatron.model.utils import init_method_normal
286
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
287
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
288
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
289
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
290
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
291
+ 100.83.37.188: [2024-05-13 09:21:24,294] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 119322
292
+ 100.83.37.188: [2024-05-13 09:21:24,296] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 119323
293
+ 100.83.37.188: [2024-05-13 09:21:24,296] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 119324
294
+ 100.83.37.188: [2024-05-13 09:21:24,296] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 119325
295
+ 100.83.37.188: [2024-05-13 09:21:24,324] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 119326
296
+ 100.83.37.188: [2024-05-13 09:21:24,324] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 119327
297
+ 100.83.37.188: [2024-05-13 09:21:24,376] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 119328
298
+ 100.83.37.188: [2024-05-13 09:21:24,376] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 119329
299
+ 100.83.37.188: [2024-05-13 09:21:24,377] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:21:14/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
300
+ 100.83.37.175: Traceback (most recent call last):
301
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
302
+ 100.83.37.175: from megatron import get_args
303
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
304
+ 100.83.37.175: from .initialize import initialize_megatron
305
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
306
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
307
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
308
+ 100.83.37.175: from megatron.model.utils import init_method_normal
309
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
310
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
311
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
312
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
313
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
314
+ 100.83.37.175: Traceback (most recent call last):
315
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
316
+ 100.83.37.175: from megatron import get_args
317
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
318
+ 100.83.37.175: from .initialize import initialize_megatron
319
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
320
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
321
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
322
+ 100.83.37.175: from megatron.model.utils import init_method_normal
323
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
324
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
325
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
326
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
327
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
328
+ 100.83.37.175: Traceback (most recent call last):
329
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
330
+ 100.83.37.175: from megatron import get_args
331
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
332
+ 100.83.37.175: from .initialize import initialize_megatron
333
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
334
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
335
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
336
+ 100.83.37.175: from megatron.model.utils import init_method_normal
337
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
338
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
339
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
340
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
341
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
342
+ 100.83.37.175: [2024-05-13 09:21:24,683] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 23800
343
+ 100.83.37.175: [2024-05-13 09:21:24,685] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 23801
344
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
345
+ 100.83.37.175: [2024-05-13 09:21:24,738] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 23802
346
+ 100.83.37.175: [2024-05-13 09:21:24,739] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 23803
347
+ 100.83.37.175: [2024-05-13 09:21:24,739] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 23807
348
+ 100.83.37.175: [2024-05-13 09:21:24,791] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 23810
349
+ 100.83.37.175: [2024-05-13 09:21:24,819] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 23813
350
+ 100.83.37.175: [2024-05-13 09:21:24,820] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 23816
351
+ 100.83.37.175: [2024-05-13 09:21:24,820] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:21:14/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:21:14/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
352
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
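Every rank in the run above dies with the same error: megatron/model/__init__.py (line 16) imports llama_model.py, which at line 32 imports RMSNorm back through megatron.model while that package is still only partially initialized, so the name is not bound yet. Below is a minimal, self-contained Python sketch that reproduces this circular-import shape and the usual fix of importing the symbol from its defining submodule instead of the package __init__; the megatron_demo package and norms.py module it fabricates are illustrative stand-ins, not the real Megatron-DeepSpeed layout.

import subprocess, sys, tempfile
from pathlib import Path

def build_pkg(root: Path, fixed: bool) -> None:
    # Throwaway package mirroring the traceback: model/__init__.py imports
    # llama_model before RMSNorm is bound, and llama_model imports back
    # through the package. All names here are illustrative only.
    pkg = root / "megatron_demo" / "model"
    pkg.mkdir(parents=True)
    (root / "megatron_demo" / "__init__.py").write_text("")
    (pkg / "norms.py").write_text("class RMSNorm:\n    pass\n")
    (pkg / "__init__.py").write_text(
        "from .llama_model import LLaMAModel\n"   # runs before RMSNorm is bound
        "from .norms import RMSNorm\n"
    )
    if fixed:
        # Fix: import RMSNorm from the module that defines it, not the package.
        body = "from megatron_demo.model.norms import RMSNorm\n"
    else:
        # Broken: same shape as megatron/model/llama_model.py line 32 in the log.
        body = "from megatron_demo.model import RMSNorm\n"
    (pkg / "llama_model.py").write_text(body + "class LLaMAModel:\n    pass\n")

def try_import(root: Path) -> str:
    proc = subprocess.run(
        [sys.executable, "-c", "import megatron_demo.model"],
        cwd=root, capture_output=True, text=True,
    )
    return "ok" if proc.returncode == 0 else proc.stderr.strip().splitlines()[-1]

for fixed in (False, True):
    with tempfile.TemporaryDirectory() as tmp:
        build_pkg(Path(tmp), fixed)
        print("fixed" if fixed else "broken", "->", try_import(Path(tmp)))

In the actual repository the equivalent remedy would be either having llama_model.py import RMSNorm directly from its defining module or reordering the exports in megatron/model/__init__.py; which change applies depends on the checked-out revision.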
llama13b_multiling_800M/13-05-2024-09:21:14/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
1
+ {
2
+ "MODEL": {
3
+ "num_hidden_layers": 24,
4
+ "hidden_size": 2048,
5
+ "num_attention_heads": 32,
6
+ "intermediate_size": 4096,
7
+ "vocab_size":VOCAB_SIZE
8
+ },
9
+ "LAYER_MAPPINGS" : {
10
+ "word_embeddings": 1,
11
+ "transformer": [3, 26],
12
+ "final_layernorm": 28,
13
+ "final_word_embeddings": 29
14
+ },
15
+ "FULL_NAME_MAPPINGS": {
16
+ },
17
+ "PARTIAL_NAME_MAPPINGS": {
18
+ "final_word_embeddings": {
19
+ "vocab_parallel_projection": "lm_head"
20
+ },
21
+ "final_layernorm": {
22
+ "final_rmsnorm": "model.norm"
23
+ },
24
+ "word_embeddings": {
25
+ "word_embeddings": "model.embed_tokens"
26
+ },
27
+ "transformer": {
28
+ "dense_h_to_4h": "mlp.gate_proj",
29
+ "dense_4h_to_h": "mlp.down_proj",
30
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
31
+ "post_attention_layernorm": "post_attention_layernorm",
32
+ "input_layernorm": "input_layernorm",
33
+ "dense": "self_attn.o_proj",
34
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
35
+ }
36
+ },
37
+ "SPECIAL": {
38
+ "query_key_value": "attention_qkv"
39
+ }
40
+ }
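The mapping file above presumably feeds the Megatron-DeepSpeed-to-HF conversion implied by the --hf-save flag in the launch command: PARTIAL_NAME_MAPPINGS pairs Megatron parameter-name fragments with their HF LLaMA counterparts, the fused query_key_value tensor is tagged "attention_qkv" under SPECIAL because it has to be split into separate q/k/v projections rather than merely renamed, and VOCAB_SIZE is left as a placeholder to be filled in at conversion time. A rough, hypothetical sketch of how the "transformer" renames could be applied (not the repository's actual mds_to_hf script):

# Hypothetical illustration: apply the "transformer" substring renames from the
# mapping file above to Megatron-style parameter names. The real conversion
# script also handles QKV splitting, embeddings, layer offsets and VOCAB_SIZE.
TRANSFORMER_MAP = {
    "dense_h_to_4h": "mlp.gate_proj",
    "dense_4h_to_h": "mlp.down_proj",
    "dense_h_to_4h_swiglu": "mlp.up_proj",
    "dense": "self_attn.o_proj",
}

def rename(mds_name: str) -> str:
    # Try the longest source keys first, so "dense_h_to_4h" wins over bare "dense".
    for src in sorted(TRANSFORMER_MAP, key=len, reverse=True):
        if src in mds_name:
            return mds_name.replace(src, TRANSFORMER_MAP[src])
    return mds_name  # unmapped names (e.g. the fused QKV weight) need special handling

print(rename("model.layers.0.dense_h_to_4h.weight"))  # -> ...mlp.gate_proj.weight
print(rename("model.layers.0.dense.weight"))          # -> ...self_attn.o_proj.weight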
llama13b_multiling_800M/13-05-2024-09:23:20/ds_config.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "train_batch_size" : 256,
3
+ "train_micro_batch_size_per_gpu": 1,
4
+ "steps_per_print": 10,
5
+ "gradient_clipping": 1.0,
6
+ "zero_optimization": {
7
+ "stage": 0
8
+ },
9
+ "bf16": {
10
+ "enabled": true,
11
+ "accumulate_grads_via_hooks": true
12
+ },
13
+ "fp16": {"enabled": false},
14
+ "wall_clock_breakdown": false,
15
+ "pipeline": {
16
+ "pipe_partitioned": false,
17
+ "grad_partitioned": false
18
+ }
19
+ }
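For orientation, the batch-size fields in this ds_config line up with the launch command (--micro-batch-size 1, --global-batch-size 256) and with the dist_world_size=16 reported by the launcher: under DeepSpeed's standard relationship train_batch_size = train_micro_batch_size_per_gpu × world_size × gradient_accumulation_steps, the run accumulates gradients over 16 micro-batches per optimizer step. A quick check of that arithmetic:

# Consistency check, assuming DeepSpeed's standard relation between these fields.
train_batch_size = 256        # "train_batch_size" above / --global-batch-size
micro_batch_per_gpu = 1       # "train_micro_batch_size_per_gpu" / --micro-batch-size
world_size = 16               # 2 nodes x 8 HPUs (dist_world_size in the launcher logs)
grad_accum = train_batch_size // (micro_batch_per_gpu * world_size)
print(grad_accum)             # -> 16 gradient-accumulation steps per optimizer step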
llama13b_multiling_800M/13-05-2024-09:23:20/log.txt ADDED
@@ -0,0 +1,352 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 09:23:22,419] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 09:23:23,730] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 09:23:23,730] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 09:23:23,730] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:23:20/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.188: warnings.warn(
15
+ 100.83.37.188: [2024-05-13 09:23:25,440] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.175: warnings.warn(
18
+ 100.83.37.175: [2024-05-13 09:23:25,443] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 09:23:26,645] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 09:23:26,645] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 09:23:26,645] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 09:23:26,645] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 09:23:26,645] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
25
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
26
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
27
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
28
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
29
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
31
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
32
+ 100.83.37.188: ...done.
33
+ 100.83.37.188: ...done.
34
+ 100.83.37.188: ...done.
35
+ 100.83.37.188: ...done.
36
+ 100.83.37.188: ...done.
37
+ 100.83.37.188: ...done.
38
+ 100.83.37.188: ...done.
39
+ 100.83.37.188: ...done.
40
+ 100.83.37.175: [2024-05-13 09:23:26,998] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
41
+ 100.83.37.175: [2024-05-13 09:23:26,998] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
42
+ 100.83.37.175: [2024-05-13 09:23:26,998] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
43
+ 100.83.37.175: [2024-05-13 09:23:26,998] [INFO] [launch.py:164:main] dist_world_size=16
44
+ 100.83.37.175: [2024-05-13 09:23:26,998] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
45
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
46
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
47
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
48
+ 100.83.37.175: ...done.
49
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
50
+ 100.83.37.175: ...done.
51
+ 100.83.37.175: ...done.
52
+ 100.83.37.175: ...done.
53
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
54
+ 100.83.37.175: ...done.
55
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
56
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
57
+ 100.83.37.175: ...done.
58
+ 100.83.37.175: ...done.
59
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
60
+ 100.83.37.175: ...done.
61
+ 100.83.37.188: [2024-05-13 09:23:28,463] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
62
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
63
+ 100.83.37.188: warnings.warn(
64
+ 100.83.37.188: [2024-05-13 09:23:28,476] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
65
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
66
+ 100.83.37.188: warnings.warn(
67
+ 100.83.37.188: [2024-05-13 09:23:28,493] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
68
+ 100.83.37.188: [2024-05-13 09:23:28,493] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
69
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
70
+ 100.83.37.188: warnings.warn(
71
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
72
+ 100.83.37.188: warnings.warn(
73
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
74
+ 100.83.37.188: warnings.warn(
75
+ 100.83.37.188: [2024-05-13 09:23:28,495] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
76
+ 100.83.37.188: [2024-05-13 09:23:28,533] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
77
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
78
+ 100.83.37.188: warnings.warn(
79
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
80
+ 100.83.37.188: warnings.warn(
81
+ 100.83.37.188: [2024-05-13 09:23:28,545] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
82
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
83
+ 100.83.37.188: warnings.warn(
84
+ 100.83.37.188: [2024-05-13 09:23:28,551] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
85
+ 100.83.37.175: [2024-05-13 09:23:28,740] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
86
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
87
+ 100.83.37.175: warnings.warn(
88
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
89
+ 100.83.37.175: warnings.warn(
90
+ 100.83.37.175: [2024-05-13 09:23:28,745] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
91
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
92
+ 100.83.37.175: warnings.warn(
93
+ 100.83.37.175: [2024-05-13 09:23:28,747] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
94
+ 100.83.37.175: [2024-05-13 09:23:28,759] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
95
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
96
+ 100.83.37.175: warnings.warn(
97
+ 100.83.37.175: [2024-05-13 09:23:28,795] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
98
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
99
+ 100.83.37.175: warnings.warn(
100
+ 100.83.37.175: [2024-05-13 09:23:28,838] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
101
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
102
+ 100.83.37.175: warnings.warn(
103
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
104
+ 100.83.37.175: warnings.warn(
105
+ 100.83.37.175: [2024-05-13 09:23:28,838] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
106
+ 100.83.37.175: [2024-05-13 09:23:29,024] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
107
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
108
+ 100.83.37.175: warnings.warn(
109
+ 100.83.37.188: Traceback (most recent call last):
110
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
111
+ 100.83.37.188: from megatron import get_args
112
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
113
+ 100.83.37.188: from .initialize import initialize_megatron
114
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
115
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
116
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
117
+ 100.83.37.188: from megatron.model.utils import init_method_normal
118
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
119
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
120
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
121
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
122
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
123
+ 100.83.37.188: Traceback (most recent call last):
124
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
125
+ 100.83.37.188: from megatron import get_args
126
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
127
+ 100.83.37.188: from .initialize import initialize_megatron
128
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
129
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
130
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
131
+ 100.83.37.188: from megatron.model.utils import init_method_normal
132
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
133
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
134
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
135
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
136
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
137
+ 100.83.37.188: Traceback (most recent call last):
138
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
139
+ 100.83.37.188: from megatron import get_args
140
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
141
+ 100.83.37.188: from .initialize import initialize_megatron
142
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
143
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
144
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
145
+ 100.83.37.188: from megatron.model.utils import init_method_normal
146
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
147
+ 100.83.37.188: Traceback (most recent call last):
148
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
149
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
150
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
151
+ 100.83.37.188: from megatron import get_args
152
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
153
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
154
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
155
+ 100.83.37.188: from .initialize import initialize_megatron
156
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
157
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
158
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
159
+ 100.83.37.188: from megatron.model.utils import init_method_normal
160
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
161
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
162
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
163
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
164
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
165
+ 100.83.37.188: Traceback (most recent call last):
166
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
167
+ 100.83.37.188: from megatron import get_args
168
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
169
+ 100.83.37.188: from .initialize import initialize_megatron
170
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
171
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
172
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
173
+ 100.83.37.188: from megatron.model.utils import init_method_normal
174
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
175
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
176
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
177
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
178
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
179
+ 100.83.37.188: Traceback (most recent call last):
180
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
181
+ 100.83.37.188: from megatron import get_args
182
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
183
+ 100.83.37.188: from .initialize import initialize_megatron
184
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
185
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
186
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
187
+ 100.83.37.188: from megatron.model.utils import init_method_normal
188
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
189
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
190
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
191
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
192
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
193
+ 100.83.37.188: Traceback (most recent call last):
194
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
195
+ 100.83.37.188: from megatron import get_args
196
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
197
+ 100.83.37.188: from .initialize import initialize_megatron
198
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
199
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
200
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
201
+ 100.83.37.188: from megatron.model.utils import init_method_normal
202
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
203
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
204
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
205
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
206
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
207
+ 100.83.37.188: Traceback (most recent call last):
208
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
209
+ 100.83.37.188: from megatron import get_args
210
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
211
+ 100.83.37.188: from .initialize import initialize_megatron
212
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
213
+ 100.83.37.188: from megatron.arguments import (parse_args, validate_args)
214
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
215
+ 100.83.37.188: from megatron.model.utils import init_method_normal
216
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
217
+ 100.83.37.188: from .llama_model import LLaMAModel, LLaMAModelPipe
218
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
219
+ 100.83.37.188: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
220
+ 100.83.37.188: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
221
+ 100.83.37.175: Traceback (most recent call last):
222
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
223
+ 100.83.37.175: from megatron import get_args
224
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
225
+ 100.83.37.175: from .initialize import initialize_megatron
226
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
227
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
228
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
229
+ 100.83.37.175: Traceback (most recent call last):
230
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
231
+ 100.83.37.175: from megatron.model.utils import init_method_normal
232
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
233
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
234
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
235
+ 100.83.37.175: from megatron import get_args
236
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
237
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
238
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
239
+ 100.83.37.175: from .initialize import initialize_megatron
240
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
241
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
242
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
243
+ 100.83.37.175: from megatron.model.utils import init_method_normal
244
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
245
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
246
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
247
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
248
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
249
+ 100.83.37.175: Traceback (most recent call last):
250
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
251
+ 100.83.37.175: from megatron import get_args
252
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
253
+ 100.83.37.175: from .initialize import initialize_megatron
254
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
255
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
256
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
257
+ 100.83.37.175: from megatron.model.utils import init_method_normal
258
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
259
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
260
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
261
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
262
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
263
+ 100.83.37.188: [2024-05-13 09:23:30,652] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 120663
264
+ 100.83.37.188: [2024-05-13 09:23:30,654] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 120664
265
+ 100.83.37.188: [2024-05-13 09:23:30,654] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 120665
266
+ 100.83.37.188: [2024-05-13 09:23:30,654] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 120666
267
+ 100.83.37.188: [2024-05-13 09:23:30,655] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 120667
268
+ 100.83.37.188: [2024-05-13 09:23:30,655] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 120668
269
+ 100.83.37.188: [2024-05-13 09:23:30,655] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 120669
270
+ 100.83.37.188: [2024-05-13 09:23:30,655] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 120670
271
+ 100.83.37.188: [2024-05-13 09:23:30,656] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:23:20/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
272
+ 100.83.37.175: Traceback (most recent call last):
273
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
274
+ 100.83.37.175: from megatron import get_args
275
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
276
+ 100.83.37.175: from .initialize import initialize_megatron
277
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
278
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
279
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
280
+ 100.83.37.175: from megatron.model.utils import init_method_normal
281
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
282
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
283
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
284
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
285
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
286
+ 100.83.37.175: Traceback (most recent call last):
287
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
288
+ 100.83.37.175: from megatron import get_args
289
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
290
+ 100.83.37.175: from .initialize import initialize_megatron
291
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
292
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
293
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
294
+ 100.83.37.175: from megatron.model.utils import init_method_normal
295
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
296
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
297
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
298
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
299
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
300
+ 100.83.37.175: Traceback (most recent call last):
301
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
302
+ 100.83.37.175: from megatron import get_args
303
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
304
+ 100.83.37.175: from .initialize import initialize_megatron
305
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
306
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
307
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
308
+ 100.83.37.175: from megatron.model.utils import init_method_normal
309
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
310
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
311
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
312
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
313
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
314
+ 100.83.37.175: Traceback (most recent call last):
315
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
316
+ 100.83.37.175: from megatron import get_args
317
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
318
+ 100.83.37.175: from .initialize import initialize_megatron
319
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
320
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
321
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
322
+ 100.83.37.175: from megatron.model.utils import init_method_normal
323
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
324
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
325
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
326
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
327
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
328
+ 100.83.37.175: [2024-05-13 09:23:31,007] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 25330
329
+ 100.83.37.175: [2024-05-13 09:23:31,009] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 25331
330
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
331
+ 100.83.37.175: [2024-05-13 09:23:31,036] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 25332
332
+ 100.83.37.175: Traceback (most recent call last):
333
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 20, in <module>
334
+ 100.83.37.175: from megatron import get_args
335
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__init__.py", line 17, in <module>
336
+ 100.83.37.175: from .initialize import initialize_megatron
337
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 20, in <module>
338
+ 100.83.37.175: from megatron.arguments import (parse_args, validate_args)
339
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/arguments.py", line 20, in <module>
340
+ 100.83.37.175: from megatron.model.utils import init_method_normal
341
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py", line 16, in <module>
342
+ 100.83.37.175: from .llama_model import LLaMAModel, LLaMAModelPipe
343
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/llama_model.py", line 32, in <module>
344
+ 100.83.37.175: from megatron.model import RMSNorm, LayerNorm, CrossEntropy
345
+ 100.83.37.175: ImportError: cannot import name 'RMSNorm' from partially initialized module 'megatron.model' (most likely due to a circular import) (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/model/__init__.py)
346
+ 100.83.37.175: [2024-05-13 09:23:31,089] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 25333
347
+ 100.83.37.175: [2024-05-13 09:23:31,090] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 25337
348
+ 100.83.37.175: [2024-05-13 09:23:31,118] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 25340
349
+ 100.83.37.175: [2024-05-13 09:23:31,118] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 25344
350
+ 100.83.37.175: [2024-05-13 09:23:31,145] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 25348
351
+ 100.83.37.175: [2024-05-13 09:23:31,198] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:23:20/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:23:20/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
352
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
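Every rank in this run dies on the same Python circular import: megatron/model/__init__.py (line 16) imports llama_model before the package has bound RMSNorm, and llama_model.py (line 32) then runs `from megatron.model import RMSNorm` against the still partially initialized package. A minimal, self-contained sketch of that failure mode follows; it uses a throwaway package named `pkg` written to a temp directory (hypothetical names, not the Megatron sources), and the usual fix is hinted at in the comments: import RMSNorm from its defining submodule, or bind it in __init__.py before importing llama_model.

import pathlib, sys, tempfile, textwrap

# Throwaway package reproducing the error above (hypothetical 'pkg', not Megatron).
root = pathlib.Path(tempfile.mkdtemp())
(root / "pkg").mkdir()
(root / "pkg" / "__init__.py").write_text(textwrap.dedent("""\
    from .llama_model import LLaMAModel   # runs llama_model before RMSNorm is bound
    from .norms import RMSNorm            # too late: llama_model already needed it
"""))
(root / "pkg" / "norms.py").write_text("class RMSNorm:\n    pass\n")
(root / "pkg" / "llama_model.py").write_text(textwrap.dedent("""\
    # Fix would be: from .norms import RMSNorm  (import from the defining module)
    from pkg import RMSNorm                # fails: 'pkg' is only partially initialized
    class LLaMAModel:
        pass
"""))

sys.path.insert(0, str(root))
try:
    import pkg  # noqa: F401
except ImportError as exc:
    print("reproduced:", exc)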
llama13b_multiling_800M/13-05-2024-09:23:20/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "MODEL": {
3
+ "num_hidden_layers": 24,
4
+ "hidden_size": 2048,
5
+ "num_attention_heads": 32,
6
+ "intermediate_size": 4096,
7
+ "vocab_size":VOCAB_SIZE
8
+ },
9
+ "LAYER_MAPPINGS" : {
10
+ "word_embeddings": 1,
11
+ "transformer": [3, 26],
12
+ "final_layernorm": 28,
13
+ "final_word_embeddings": 29
14
+ },
15
+ "FULL_NAME_MAPPINGS": {
16
+ },
17
+ "PARTIAL_NAME_MAPPINGS": {
18
+ "final_word_embeddings": {
19
+ "vocab_parallel_projection": "lm_head"
20
+ },
21
+ "final_layernorm": {
22
+ "final_rmsnorm": "model.norm"
23
+ },
24
+ "word_embeddings": {
25
+ "word_embeddings": "model.embed_tokens"
26
+ },
27
+ "transformer": {
28
+ "dense_h_to_4h": "mlp.gate_proj",
29
+ "dense_4h_to_h": "mlp.down_proj",
30
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
31
+ "post_attention_layernorm": "post_attention_layernorm",
32
+ "input_layernorm": "input_layernorm",
33
+ "dense": "self_attn.o_proj",
34
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
35
+ }
36
+ },
37
+ "SPECIAL": {
38
+ "query_key_value": "attention_qkv"
39
+ }
40
+ }
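The mds_to_hf_llama_custom.json above tells the Megatron-to-HF conversion step how this 800M-parameter layout maps onto Hugging Face LLaMA names: the word embeddings become model.embed_tokens, pipeline layers 3-26 hold the transformer blocks, the final RMSNorm becomes model.norm, and the fused query_key_value weight is split into self_attn.q_proj/k_proj/v_proj (VOCAB_SIZE is a placeholder the tooling fills in). As a rough illustration only, not the repo's converter, the PARTIAL_NAME_MAPPINGS entries amount to substring renames of checkpoint keys:

# Illustrative substring rename using the "transformer" mappings above.
transformer_map = {
    "dense_h_to_4h_swiglu": "mlp.up_proj",
    "dense_h_to_4h": "mlp.gate_proj",
    "dense_4h_to_h": "mlp.down_proj",
    "dense": "self_attn.o_proj",
}

def rename(key: str) -> str:
    # Longest pattern first so "dense" does not shadow the longer names.
    for mds, hf in sorted(transformer_map.items(), key=lambda kv: -len(kv[0])):
        if mds in key:
            return key.replace(mds, hf)
    return key

print(rename("layers.5.dense_h_to_4h_swiglu.weight"))  # -> layers.5.mlp.up_proj.weight
print(rename("layers.5.dense.weight"))                 # -> layers.5.self_attn.o_proj.weight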
llama13b_multiling_800M/13-05-2024-09:29:05/ds_config.json ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "train_batch_size" : 256,
3
+ "train_micro_batch_size_per_gpu": 1,
4
+ "steps_per_print": 10,
5
+ "gradient_clipping": 1.0,
6
+ "zero_optimization": {
7
+ "stage": 0
8
+ },
9
+ "bf16": {
10
+ "enabled": true,
11
+ "accumulate_grads_via_hooks": true
12
+ },
13
+ "fp16": {"enabled": false},
14
+ "wall_clock_breakdown": false,
15
+ "pipeline": {
16
+ "pipe_partitioned": false,
17
+ "grad_partitioned": false
18
+ }
19
+ }
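This ds_config.json keeps ZeRO at stage 0 with bf16 enabled, and its batch fields only reconcile once DeepSpeed derives the gradient-accumulation steps: with train_micro_batch_size_per_gpu=1 on the 16 ranks reported below (2 nodes x 8 HPUs), train_batch_size=256 implies 16 accumulation steps. A quick sanity check of that arithmetic (plain Python, nothing from DeepSpeed):

train_batch_size = 256          # "train_batch_size" above
micro_batch_per_gpu = 1         # "train_micro_batch_size_per_gpu" above
world_size = 16                 # dist_world_size=16 in the launcher log

grad_accum_steps = train_batch_size // (micro_batch_per_gpu * world_size)
assert micro_batch_per_gpu * world_size * grad_accum_steps == train_batch_size
print(grad_accum_steps)         # 16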
llama13b_multiling_800M/13-05-2024-09:29:05/log.txt ADDED
@@ -0,0 +1,192 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 09:29:06,726] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 09:29:08,058] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 09:29:08,058] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 09:29:08,058] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:29:05/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.175: warnings.warn(
15
+ 100.83.37.175: [2024-05-13 09:29:09,769] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.188: warnings.warn(
18
+ 100.83.37.188: [2024-05-13 09:29:09,793] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 09:29:10,927] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 09:29:10,927] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 09:29:10,927] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 09:29:10,927] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 09:29:10,927] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
25
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
26
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
27
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
28
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
29
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
31
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
32
+ 100.83.37.188: ...done.
33
+ 100.83.37.188: ...done.
34
+ 100.83.37.188: ...done.
35
+ 100.83.37.188: ...done.
36
+ 100.83.37.188: ...done.
37
+ 100.83.37.188: ...done.
38
+ 100.83.37.188: ...done.
39
+ 100.83.37.188: ...done.
40
+ 100.83.37.175: [2024-05-13 09:29:11,349] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
41
+ 100.83.37.175: [2024-05-13 09:29:11,349] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
42
+ 100.83.37.175: [2024-05-13 09:29:11,349] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
43
+ 100.83.37.175: [2024-05-13 09:29:11,349] [INFO] [launch.py:164:main] dist_world_size=16
44
+ 100.83.37.175: [2024-05-13 09:29:11,349] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
45
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
46
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
47
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
48
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
49
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
50
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
51
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
52
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
53
+ 100.83.37.175: ...done.
54
+ 100.83.37.175: ...done.
55
+ 100.83.37.175: ...done.
56
+ 100.83.37.175: ...done.
57
+ 100.83.37.175: ...done.
58
+ 100.83.37.175: ...done.
59
+ 100.83.37.175: ...done.
60
+ 100.83.37.175: ...done.
61
+ 100.83.37.188: [2024-05-13 09:29:12,742] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
62
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
63
+ 100.83.37.188: warnings.warn(
64
+ 100.83.37.188: [2024-05-13 09:29:12,778] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
65
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
66
+ 100.83.37.188: warnings.warn(
67
+ 100.83.37.188: [2024-05-13 09:29:12,800] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
68
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
69
+ 100.83.37.188: warnings.warn(
70
+ 100.83.37.188: [2024-05-13 09:29:12,837] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
71
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
72
+ 100.83.37.188: warnings.warn(
73
+ 100.83.37.188: [2024-05-13 09:29:12,847] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
74
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
75
+ 100.83.37.188: warnings.warn(
76
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
77
+ 100.83.37.188: warnings.warn(
78
+ 100.83.37.188: [2024-05-13 09:29:12,851] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
79
+ 100.83.37.175: [2024-05-13 09:29:13,074] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
80
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
81
+ 100.83.37.175: warnings.warn(
82
+ 100.83.37.175: [2024-05-13 09:29:13,080] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
83
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
84
+ 100.83.37.175: warnings.warn(
85
+ 100.83.37.188: [2024-05-13 09:29:13,083] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
86
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
87
+ 100.83.37.188: warnings.warn(
88
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
89
+ 100.83.37.175: warnings.warn(
90
+ 100.83.37.175: [2024-05-13 09:29:13,090] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
91
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
92
+ 100.83.37.175: warnings.warn(
93
+ 100.83.37.175: [2024-05-13 09:29:13,091] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
94
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
95
+ 100.83.37.175: warnings.warn(
96
+ 100.83.37.175: [2024-05-13 09:29:13,095] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
97
+ 100.83.37.175: [2024-05-13 09:29:13,188] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
98
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
99
+ 100.83.37.175: warnings.warn(
100
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
101
+ 100.83.37.175: warnings.warn(
102
+ 100.83.37.175: [2024-05-13 09:29:13,189] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
103
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
104
+ 100.83.37.175: warnings.warn(
105
+ 100.83.37.175: [2024-05-13 09:29:13,189] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
106
+ 100.83.37.188: [2024-05-13 09:29:13,261] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
107
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
108
+ 100.83.37.188: warnings.warn(
109
+ 100.83.37.188: Traceback (most recent call last):
110
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
111
+ 100.83.37.188: from megatron.global_vars import get_current_device
112
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
113
+ 100.83.37.188: Traceback (most recent call last):
114
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
115
+ 100.83.37.188: from megatron.global_vars import get_current_device
116
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
117
+ 100.83.37.188: Traceback (most recent call last):
118
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
119
+ 100.83.37.188: from megatron.global_vars import get_current_device
120
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
121
+ 100.83.37.188: Traceback (most recent call last):
122
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
123
+ 100.83.37.188: from megatron.global_vars import get_current_device
124
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
125
+ 100.83.37.188: Traceback (most recent call last):
126
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
127
+ 100.83.37.188: Traceback (most recent call last):
128
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
129
+ 100.83.37.188: from megatron.global_vars import get_current_device
130
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
131
+ 100.83.37.188: from megatron.global_vars import get_current_device
132
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
133
+ 100.83.37.188: Traceback (most recent call last):
134
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
135
+ 100.83.37.188: from megatron.global_vars import get_current_device
136
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
137
+ 100.83.37.175: Traceback (most recent call last):
138
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
139
+ 100.83.37.175: from megatron.global_vars import get_current_device
140
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
141
+ 100.83.37.175: Traceback (most recent call last):
142
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
143
+ 100.83.37.175: Traceback (most recent call last):
144
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
145
+ 100.83.37.175: from megatron.global_vars import get_current_device
146
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
147
+ 100.83.37.175: Traceback (most recent call last):
148
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
149
+ 100.83.37.175: from megatron.global_vars import get_current_device
150
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
151
+ 100.83.37.175: from megatron.global_vars import get_current_device
152
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
153
+ 100.83.37.175: Traceback (most recent call last):
154
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
155
+ 100.83.37.175: from megatron.global_vars import get_current_device
156
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
157
+ 100.83.37.175: Traceback (most recent call last):
158
+ 100.83.37.175: Traceback (most recent call last):
159
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
160
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
161
+ 100.83.37.175: from megatron.global_vars import get_current_device
162
+ 100.83.37.175: from megatron.global_vars import get_current_device
163
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
164
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
165
+ 100.83.37.175: Traceback (most recent call last):
166
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
167
+ 100.83.37.175: from megatron.global_vars import get_current_device
168
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
169
+ 100.83.37.188: Traceback (most recent call last):
170
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
171
+ 100.83.37.188: from megatron.global_vars import get_current_device
172
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
173
+ 100.83.37.188: [2024-05-13 09:29:15,935] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 122004
174
+ 100.83.37.188: [2024-05-13 09:29:15,937] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 122005
175
+ 100.83.37.188: [2024-05-13 09:29:15,937] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 122006
176
+ 100.83.37.188: [2024-05-13 09:29:15,937] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 122007
177
+ 100.83.37.188: [2024-05-13 09:29:15,938] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 122008
178
+ 100.83.37.188: [2024-05-13 09:29:15,938] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 122009
179
+ 100.83.37.188: [2024-05-13 09:29:15,938] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 122010
180
+ 100.83.37.188: [2024-05-13 09:29:15,938] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 122011
181
+ 100.83.37.188: [2024-05-13 09:29:15,965] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:29:05/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
+ 100.83.37.175: [2024-05-13 09:29:16,357] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 26867
+ 100.83.37.175: [2024-05-13 09:29:16,359] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 26868
+ 100.83.37.175: [2024-05-13 09:29:16,359] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 26869
+ 100.83.37.175: [2024-05-13 09:29:16,360] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 26870
+ 100.83.37.175: [2024-05-13 09:29:16,360] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 26873
+ 100.83.37.175: [2024-05-13 09:29:16,361] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 26875
+ 100.83.37.175: [2024-05-13 09:29:16,361] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 26877
+ 100.83.37.175: [2024-05-13 09:29:16,362] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 26878
+ 100.83.37.175: [2024-05-13 09:29:16,362] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:29:05/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:29:05/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
llama13b_multiling_800M/13-05-2024-09:29:05/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "MODEL": {
+ "num_hidden_layers": 24,
+ "hidden_size": 2048,
+ "num_attention_heads": 32,
+ "intermediate_size": 4096,
+ "vocab_size":VOCAB_SIZE
+ },
+ "LAYER_MAPPINGS" : {
+ "word_embeddings": 1,
+ "transformer": [3, 26],
+ "final_layernorm": 28,
+ "final_word_embeddings": 29
+ },
+ "FULL_NAME_MAPPINGS": {
+ },
+ "PARTIAL_NAME_MAPPINGS": {
+ "final_word_embeddings": {
+ "vocab_parallel_projection": "lm_head"
+ },
+ "final_layernorm": {
+ "final_rmsnorm": "model.norm"
+ },
+ "word_embeddings": {
+ "word_embeddings": "model.embed_tokens"
+ },
+ "transformer": {
+ "dense_h_to_4h": "mlp.gate_proj",
+ "dense_4h_to_h": "mlp.down_proj",
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
+ "post_attention_layernorm": "post_attention_layernorm",
+ "input_layernorm": "input_layernorm",
+ "dense": "self_attn.o_proj",
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
+ }
+ },
+ "SPECIAL": {
+ "query_key_value": "attention_qkv"
+ }
+ }
llama13b_multiling_800M/13-05-2024-09:32:36/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "train_batch_size" : 256,
+ "train_micro_batch_size_per_gpu": 1,
+ "steps_per_print": 10,
+ "gradient_clipping": 1.0,
+ "zero_optimization": {
+ "stage": 0
+ },
+ "bf16": {
+ "enabled": true,
+ "accumulate_grads_via_hooks": true
+ },
+ "fp16": {"enabled": false},
+ "wall_clock_breakdown": false,
+ "pipeline": {
+ "pipe_partitioned": false,
+ "grad_partitioned": false
+ }
+ }
llama13b_multiling_800M/13-05-2024-09:32:36/log.txt ADDED
@@ -0,0 +1,192 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 09:32:38,292] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 09:32:39,628] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 09:32:39,628] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 09:32:39,628] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:32:36/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.175: warnings.warn(
15
+ 100.83.37.175: [2024-05-13 09:32:41,274] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.188: warnings.warn(
18
+ 100.83.37.188: [2024-05-13 09:32:41,392] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 09:32:42,611] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 09:32:42,611] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 09:32:42,611] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 09:32:42,611] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 09:32:42,611] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.175: [2024-05-13 09:32:42,649] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
25
+ 100.83.37.175: [2024-05-13 09:32:42,649] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
26
+ 100.83.37.175: [2024-05-13 09:32:42,649] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
27
+ 100.83.37.175: [2024-05-13 09:32:42,649] [INFO] [launch.py:164:main] dist_world_size=16
28
+ 100.83.37.175: [2024-05-13 09:32:42,649] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
29
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
31
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
32
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
33
+ 100.83.37.175: ...done.
34
+ 100.83.37.175: ...done.
35
+ 100.83.37.175: ...done.
36
+ 100.83.37.175: ...done.
37
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
38
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
39
+ 100.83.37.175: ...done.
40
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
41
+ 100.83.37.175: ...done.
42
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
43
+ 100.83.37.175: ...done.
44
+ 100.83.37.175: ...done.
45
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
46
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
47
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
48
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
49
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
50
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
51
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
52
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
53
+ 100.83.37.188: ...done.
54
+ 100.83.37.188: ...done.
55
+ 100.83.37.188: ...done.
56
+ 100.83.37.188: ...done.
57
+ 100.83.37.188: ...done.
58
+ 100.83.37.188: ...done.
59
+ 100.83.37.188: ...done.
60
+ 100.83.37.188: ...done.
61
+ 100.83.37.175: [2024-05-13 09:32:44,388] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
62
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
63
+ 100.83.37.175: warnings.warn(
64
+ 100.83.37.175: [2024-05-13 09:32:44,393] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
65
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
66
+ 100.83.37.175: warnings.warn(
67
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
68
+ 100.83.37.175: warnings.warn(
69
+ 100.83.37.175: [2024-05-13 09:32:44,401] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
70
+ 100.83.37.175: [2024-05-13 09:32:44,405] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
71
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
72
+ 100.83.37.175: warnings.warn(
73
+ 100.83.37.188: [2024-05-13 09:32:44,411] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
74
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
75
+ 100.83.37.188: warnings.warn(
76
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
77
+ 100.83.37.188: warnings.warn(
78
+ 100.83.37.188: [2024-05-13 09:32:44,416] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
79
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
80
+ 100.83.37.188: warnings.warn(
81
+ 100.83.37.188: [2024-05-13 09:32:44,423] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
82
+ 100.83.37.175: [2024-05-13 09:32:44,476] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
83
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
84
+ 100.83.37.175: warnings.warn(
85
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
86
+ 100.83.37.175: warnings.warn(
87
+ 100.83.37.175: [2024-05-13 09:32:44,478] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
88
+ 100.83.37.188: [2024-05-13 09:32:44,496] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
89
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
90
+ 100.83.37.188: warnings.warn(
91
+ 100.83.37.175: [2024-05-13 09:32:44,546] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
92
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
93
+ 100.83.37.175: warnings.warn(
94
+ 100.83.37.175: [2024-05-13 09:32:44,557] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
95
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
96
+ 100.83.37.175: warnings.warn(
97
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
98
+ 100.83.37.188: warnings.warn(
99
+ 100.83.37.188: [2024-05-13 09:32:44,575] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
100
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
101
+ 100.83.37.188: warnings.warn(
102
+ 100.83.37.188: [2024-05-13 09:32:44,602] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
103
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
104
+ 100.83.37.188: warnings.warn(
105
+ 100.83.37.188: [2024-05-13 09:32:44,603] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
106
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
107
+ 100.83.37.188: warnings.warn(
108
+ 100.83.37.188: [2024-05-13 09:32:44,606] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
109
+ 100.83.37.188: Traceback (most recent call last):
110
+ 100.83.37.188: Traceback (most recent call last):
111
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
112
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
113
+ 100.83.37.188: from megatron.global_vars import get_current_device
114
+ 100.83.37.188: from megatron.global_vars import get_current_device
115
+ 100.83.37.188: ImportError: ImportErrorcannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py):
116
+ 100.83.37.188: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
117
+ 100.83.37.188: Traceback (most recent call last):
118
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
119
+ 100.83.37.188: from megatron.global_vars import get_current_device
120
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
121
+ 100.83.37.188: Traceback (most recent call last):
122
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
123
+ 100.83.37.188: from megatron.global_vars import get_current_device
124
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
125
+ 100.83.37.175: Traceback (most recent call last):
126
+ 100.83.37.175: Traceback (most recent call last):
127
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
128
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
129
+ 100.83.37.175: from megatron.global_vars import get_current_device
130
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
131
+ 100.83.37.175: from megatron.global_vars import get_current_device
132
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
133
+ 100.83.37.188: Traceback (most recent call last):
134
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
135
+ 100.83.37.188: from megatron.global_vars import get_current_device
136
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
137
+ 100.83.37.188: Traceback (most recent call last):
138
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
139
+ 100.83.37.188: from megatron.global_vars import get_current_device
140
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
141
+ 100.83.37.188: Traceback (most recent call last):
142
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
143
+ 100.83.37.188: from megatron.global_vars import get_current_device
144
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
145
+ 100.83.37.188: Traceback (most recent call last):
146
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
147
+ 100.83.37.188: from megatron.global_vars import get_current_device
148
+ 100.83.37.188: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
149
+ 100.83.37.175: Traceback (most recent call last):
150
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
151
+ 100.83.37.175: from megatron.global_vars import get_current_device
152
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
153
+ 100.83.37.175: Traceback (most recent call last):
154
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
155
+ 100.83.37.175: from megatron.global_vars import get_current_device
156
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
157
+ 100.83.37.175: Traceback (most recent call last):
158
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
159
+ 100.83.37.175: from megatron.global_vars import get_current_device
160
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
161
+ 100.83.37.175: Traceback (most recent call last):
162
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
163
+ 100.83.37.175: from megatron.global_vars import get_current_device
164
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
165
+ 100.83.37.175: Traceback (most recent call last):
166
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
167
+ 100.83.37.175: from megatron.global_vars import get_current_device
168
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
169
+ 100.83.37.175: Traceback (most recent call last):
170
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 27, in <module>
171
+ 100.83.37.175: from megatron.global_vars import get_current_device
172
+ 100.83.37.175: ImportError: cannot import name 'get_current_device' from 'megatron.global_vars' (/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/global_vars.py)
173
+ 100.83.37.188: [2024-05-13 09:32:47,619] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 123369
174
+ 100.83.37.188: [2024-05-13 09:32:47,620] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 123370
175
+ 100.83.37.188: [2024-05-13 09:32:47,621] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 123371
176
+ 100.83.37.188: [2024-05-13 09:32:47,621] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 123372
177
+ 100.83.37.188: [2024-05-13 09:32:47,621] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 123373
178
+ 100.83.37.188: [2024-05-13 09:32:47,621] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 123374
179
+ 100.83.37.188: [2024-05-13 09:32:47,621] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 123375
180
+ 100.83.37.188: [2024-05-13 09:32:47,622] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 123376
181
+ 100.83.37.188: [2024-05-13 09:32:47,622] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:32:36/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
182
+ 100.83.37.175: [2024-05-13 09:32:47,658] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 28420
183
+ 100.83.37.175: [2024-05-13 09:32:47,659] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 28421
184
+ 100.83.37.175: [2024-05-13 09:32:47,660] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 28422
185
+ 100.83.37.175: [2024-05-13 09:32:47,660] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 28423
186
+ 100.83.37.175: [2024-05-13 09:32:47,661] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 28427
187
+ 100.83.37.175: [2024-05-13 09:32:47,661] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 28432
188
+ 100.83.37.175: [2024-05-13 09:32:47,661] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 28435
189
+ 100.83.37.175: [2024-05-13 09:32:47,662] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 28441
190
+ 100.83.37.175: [2024-05-13 09:32:47,662] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:32:36/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:32:36/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
191
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
192
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
llama13b_multiling_800M/13-05-2024-09:32:36/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "MODEL": {
+ "num_hidden_layers": 24,
+ "hidden_size": 2048,
+ "num_attention_heads": 32,
+ "intermediate_size": 4096,
+ "vocab_size":VOCAB_SIZE
+ },
+ "LAYER_MAPPINGS" : {
+ "word_embeddings": 1,
+ "transformer": [3, 26],
+ "final_layernorm": 28,
+ "final_word_embeddings": 29
+ },
+ "FULL_NAME_MAPPINGS": {
+ },
+ "PARTIAL_NAME_MAPPINGS": {
+ "final_word_embeddings": {
+ "vocab_parallel_projection": "lm_head"
+ },
+ "final_layernorm": {
+ "final_rmsnorm": "model.norm"
+ },
+ "word_embeddings": {
+ "word_embeddings": "model.embed_tokens"
+ },
+ "transformer": {
+ "dense_h_to_4h": "mlp.gate_proj",
+ "dense_4h_to_h": "mlp.down_proj",
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
+ "post_attention_layernorm": "post_attention_layernorm",
+ "input_layernorm": "input_layernorm",
+ "dense": "self_attn.o_proj",
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
+ }
+ },
+ "SPECIAL": {
+ "query_key_value": "attention_qkv"
+ }
+ }
llama13b_multiling_800M/13-05-2024-09:34:09/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "train_batch_size" : 256,
+ "train_micro_batch_size_per_gpu": 1,
+ "steps_per_print": 10,
+ "gradient_clipping": 1.0,
+ "zero_optimization": {
+ "stage": 0
+ },
+ "bf16": {
+ "enabled": true,
+ "accumulate_grads_via_hooks": true
+ },
+ "fp16": {"enabled": false},
+ "wall_clock_breakdown": false,
+ "pipeline": {
+ "pipe_partitioned": false,
+ "grad_partitioned": false
+ }
+ }
llama13b_multiling_800M/13-05-2024-09:34:09/log.txt ADDED
@@ -0,0 +1,656 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 09:34:11,123] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 09:34:12,433] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 09:34:12,434] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 09:34:12,434] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:34:09/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.175: warnings.warn(
15
+ 100.83.37.175: [2024-05-13 09:34:14,112] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.188: warnings.warn(
18
+ 100.83.37.188: [2024-05-13 09:34:14,229] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 09:34:15,453] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 09:34:15,454] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 09:34:15,454] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 09:34:15,454] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 09:34:15,454] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.175: [2024-05-13 09:34:15,503] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
25
+ 100.83.37.175: [2024-05-13 09:34:15,503] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
26
+ 100.83.37.175: [2024-05-13 09:34:15,503] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
27
+ 100.83.37.175: [2024-05-13 09:34:15,503] [INFO] [launch.py:164:main] dist_world_size=16
28
+ 100.83.37.175: [2024-05-13 09:34:15,503] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
29
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
31
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
32
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
33
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
34
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
35
+ 100.83.37.175: ...done.
36
+ 100.83.37.175: ...done.
37
+ 100.83.37.175: ...done.
38
+ 100.83.37.175: ...done.
39
+ 100.83.37.175: ...done.
40
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
41
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
42
+ 100.83.37.175: ...done.
43
+ 100.83.37.175: ...done.
44
+ 100.83.37.175: ...done.
45
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
46
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
47
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
48
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
49
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
50
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
51
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
52
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
53
+ 100.83.37.188: ...done.
54
+ 100.83.37.188: ...done.
55
+ 100.83.37.188: ...done.
56
+ 100.83.37.188: ...done.
57
+ 100.83.37.188: ...done.
58
+ 100.83.37.188: ...done.
59
+ 100.83.37.188: ...done.
60
+ 100.83.37.188: ...done.
61
+ 100.83.37.175: [2024-05-13 09:34:17,241] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
62
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
63
+ 100.83.37.175: warnings.warn(
64
+ 100.83.37.175: [2024-05-13 09:34:17,263] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
65
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
66
+ 100.83.37.175: warnings.warn(
67
+ 100.83.37.175: [2024-05-13 09:34:17,267] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
68
+ 100.83.37.188: [2024-05-13 09:34:17,266] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
69
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
70
+ 100.83.37.188: [2024-05-13 09:34:17,266] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
71
+ 100.83.37.175: warnings.warn(
72
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
73
+ 100.83.37.188: warnings.warn(
74
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
75
+ 100.83.37.188: warnings.warn(
76
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
77
+ 100.83.37.188: warnings.warn(
78
+ 100.83.37.188: [2024-05-13 09:34:17,267] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
79
+ 100.83.37.188: [2024-05-13 09:34:17,314] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
80
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
81
+ 100.83.37.188: warnings.warn(
82
+ 100.83.37.188: [2024-05-13 09:34:17,373] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
83
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
84
+ 100.83.37.188: warnings.warn(
85
+ 100.83.37.175: [2024-05-13 09:34:17,383] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
86
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
87
+ 100.83.37.175: warnings.warn(
88
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
89
+ 100.83.37.175: warnings.warn(
90
+ 100.83.37.175: [2024-05-13 09:34:17,386] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
91
+ 100.83.37.175: [2024-05-13 09:34:17,387] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
92
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
93
+ 100.83.37.175: warnings.warn(
94
+ 100.83.37.175: [2024-05-13 09:34:17,439] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
95
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
96
+ 100.83.37.175: warnings.warn(
97
+ 100.83.37.188: [2024-05-13 09:34:17,464] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
98
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
99
+ 100.83.37.188: warnings.warn(
100
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
101
+ 100.83.37.188: warnings.warn(
102
+ 100.83.37.188: [2024-05-13 09:34:17,465] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
103
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
104
+ 100.83.37.188: warnings.warn(
105
+ 100.83.37.188: [2024-05-13 09:34:17,467] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
106
+ 100.83.37.175: [2024-05-13 09:34:17,530] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
107
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
108
+ 100.83.37.175: warnings.warn(
109
+ 100.83.37.188: ----------------------------------------------------------------------------------------------------
110
+ 100.83.37.188:
111
+ 100.83.37.188: DeepSpeed C++/CUDA extension op report
112
+ 100.83.37.188: DeepSpeed C++/CUDA extension op report--------------------------------------------------
113
+ 100.83.37.188:
114
+ 100.83.37.188: NOTE: Ops not installed will be just-in-time (JIT) compiled at
115
+ 100.83.37.188: runtime if needed. Op compatibility means that your system
116
+ 100.83.37.188: meet the required dependencies to JIT install the op.--------------------------------------------------
117
+ 100.83.37.188: --------------------------------------------------
118
+ 100.83.37.188:
119
+ 100.83.37.188: JIT compiled ops requires ninja
120
+ 100.83.37.188: NOTE: Ops not installed will be just-in-time (JIT) compiled at
121
+ 100.83.37.188: runtime if needed. Op compatibility means that your system
122
+ 100.83.37.188: meet the required dependencies to JIT install the op.
123
+ 100.83.37.188: --------------------------------------------------
124
+ 100.83.37.188: JIT compiled ops requires ninja
125
+ 100.83.37.188: ninja ninja.................. [OKAY]..................
126
+ 100.83.37.188: [OKAY]
127
+ 100.83.37.188: --------------------------------------------------
128
+ 100.83.37.188: --------------------------------------------------
129
+ 100.83.37.188: op name op name................ ................installed installed.. ..compatible
130
+ 100.83.37.188: compatible--------------------------------------------------
131
+ 100.83.37.188:
132
+ 100.83.37.188: --------------------------------------------------
133
+ 100.83.37.188: cpu_adam ...............cpu_adam [NO]............... .......[NO] [OKAY].......
134
+ 100.83.37.188: [OKAY]
135
+ 100.83.37.188: fused_adam fused_adam............. .............[NO] [NO]....... [OKAY].......
136
+ 100.83.37.188: [OKAY]
137
+ 100.83.37.188: deepspeed_not_implemented deepspeed_not_implemented[NO] ....... [OKAY][NO]
138
+ 100.83.37.188: ....... transformer_inference[OKAY]
139
+ 100.83.37.188: .. [NO] transformer_inference....... ..[OKAY]
140
+ 100.83.37.188: [NO]--------------------------------------------------
141
+ 100.83.37.188: ....... [OKAY]
142
+ 100.83.37.188: --------------------------------------------------
143
+ 100.83.37.188: DeepSpeed general environment info:
144
+ 100.83.37.188: torch install path ...............DeepSpeed general environment info:
145
+ 100.83.37.188: torch install path['/usr/local/lib/python3.10/dist-packages/torch']
146
+ 100.83.37.188: ............... torch version .................... 2.1.1a0+gitb51c9f6['/usr/local/lib/python3.10/dist-packages/torch']
147
+ 100.83.37.188: deepspeed install path
148
+ 100.83.37.188: ........... torch version['/usr/local/lib/python3.10/dist-packages/deepspeed']
149
+ 100.83.37.188: .................... deepspeed info 2.1.1a0+gitb51c9f6...................
150
+ 100.83.37.188: 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0deepspeed install path
151
+ 100.83.37.188: deepspeed wheel compiled w............ ...... ['/usr/local/lib/python3.10/dist-packages/deepspeed']torch 2.1
152
+ 100.83.37.188:
153
+ 100.83.37.188: deepspeed infoshared memory (/dev/shm) size ....................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0503.75 GB
154
+ 100.83.37.188:
155
+ 100.83.37.188: deepspeed wheel compiled w. ...... torch 2.1
156
+ 100.83.37.188: shared memory (/dev/shm) size .... 503.75 GB
157
+ 100.83.37.188: --------------------------------------------------
158
+ 100.83.37.188: DeepSpeed C++/CUDA extension op report
159
+ 100.83.37.188: --------------------------------------------------
160
+ 100.83.37.188: NOTE: Ops not installed will be just-in-time (JIT) compiled at
161
+ 100.83.37.188: runtime if needed. Op compatibility means that your system
162
+ 100.83.37.188: meet the required dependencies to JIT install the op.
163
+ 100.83.37.188: --------------------------------------------------
164
+ 100.83.37.188: JIT compiled ops requires ninja
165
+ 100.83.37.188: ninja .................. [OKAY]
166
+ 100.83.37.188: --------------------------------------------------
167
+ 100.83.37.188: op name ................ installed .. compatible
168
+ 100.83.37.188: --------------------------------------------------
169
+ 100.83.37.188: cpu_adam ............... [NO] ....... [OKAY]
170
+ 100.83.37.188: fused_adam ............. [NO] ....... [OKAY]
171
+ 100.83.37.188: deepspeed_not_implemented [NO] ....... [OKAY]
172
+ 100.83.37.188: transformer_inference .. [NO] ....... [OKAY]
173
+ 100.83.37.188: --------------------------------------------------
174
+ 100.83.37.188: DeepSpeed general environment info:
175
+ 100.83.37.188: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
176
+ 100.83.37.188: torch version .................... 2.1.1a0+gitb51c9f6
177
+ 100.83.37.188: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
178
+ 100.83.37.188: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
179
+ 100.83.37.188: deepspeed wheel compiled w. ...... torch 2.1
180
+ 100.83.37.188: shared memory (/dev/shm) size .... 503.75 GB
181
+ 100.83.37.188: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
182
+ 100.83.37.188: To add an exception for this directory, call:
183
+ 100.83.37.188:
184
+ 100.83.37.188: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
185
+ 100.83.37.188: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
186
+ 100.83.37.188: To add an exception for this directory, call:
187
+ 100.83.37.188:
188
+ 100.83.37.188: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
189
+ 100.83.37.188: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
190
+ 100.83.37.188: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
191
+ 100.83.37.188: Traceback (most recent call last):
192
+ 100.83.37.188: Traceback (most recent call last):
193
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
194
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
195
+ 100.83.37.188: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
196
+ 100.83.37.188: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
197
+ 100.83.37.188: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
198
+ 100.83.37.188: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
199
+ 100.83.37.188: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
200
+ 100.83.37.188: To add an exception for this directory, call:
201
+ 100.83.37.188:
202
+ 100.83.37.188: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
203
+ 100.83.37.188: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
204
+ 100.83.37.188: Traceback (most recent call last):
205
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
206
+ 100.83.37.188: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
207
+ 100.83.37.188: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
208
+ 100.83.37.188: --------------------------------------------------
209
+ 100.83.37.188: DeepSpeed C++/CUDA extension op report
210
+ 100.83.37.188: --------------------------------------------------
211
+ 100.83.37.188: NOTE: Ops not installed will be just-in-time (JIT) compiled at
212
+ 100.83.37.188: runtime if needed. Op compatibility means that your system
213
+ 100.83.37.188: meet the required dependencies to JIT install the op.
214
+ 100.83.37.188: --------------------------------------------------
215
+ 100.83.37.188: JIT compiled ops requires ninja
216
+ 100.83.37.188: ninja .................. [OKAY]
217
+ 100.83.37.188: --------------------------------------------------
218
+ 100.83.37.188: op name ................ installed .. compatible
219
+ 100.83.37.188: --------------------------------------------------
220
+ 100.83.37.188: cpu_adam ............... [NO] ....... [OKAY]
221
+ 100.83.37.188: fused_adam ............. [NO] ....... [OKAY]
222
+ 100.83.37.188: deepspeed_not_implemented [NO] ....... [OKAY]
223
+ 100.83.37.188: transformer_inference .. [NO] ....... [OKAY]
224
+ 100.83.37.188: --------------------------------------------------
225
+ 100.83.37.188: DeepSpeed general environment info:
226
+ 100.83.37.188: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
227
+ 100.83.37.188: torch version .................... 2.1.1a0+gitb51c9f6
228
+ 100.83.37.188: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
229
+ 100.83.37.188: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
230
+ 100.83.37.188: deepspeed wheel compiled w. ...... torch 2.1
231
+ 100.83.37.188: shared memory (/dev/shm) size .... 503.75 GB
232
+ 100.83.37.188: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
233
+ 100.83.37.188: To add an exception for this directory, call:
234
+ 100.83.37.188:
235
+ 100.83.37.188: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
236
+ 100.83.37.188: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
237
+ 100.83.37.188: Traceback (most recent call last):
238
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
239
+ 100.83.37.188: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
240
+ 100.83.37.188: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
241
+ 100.83.37.188: --------------------------------------------------
242
+ 100.83.37.188: DeepSpeed C++/CUDA extension op report
243
+ 100.83.37.188: --------------------------------------------------
244
+ 100.83.37.188: NOTE: Ops not installed will be just-in-time (JIT) compiled at
245
+ 100.83.37.188: runtime if needed. Op compatibility means that your system
246
+ 100.83.37.188: meet the required dependencies to JIT install the op.
247
+ 100.83.37.188: --------------------------------------------------
248
+ 100.83.37.188: JIT compiled ops requires ninja
249
+ 100.83.37.188: ninja .................. [OKAY]
250
+ 100.83.37.188: --------------------------------------------------
251
+ 100.83.37.188: op name ................ installed .. compatible
252
+ 100.83.37.188: --------------------------------------------------
253
+ 100.83.37.188: cpu_adam ............... [NO] ....... [OKAY]
254
+ 100.83.37.188: fused_adam ............. [NO] ....... [OKAY]
255
+ 100.83.37.188: deepspeed_not_implemented [NO] ....... [OKAY]
256
+ 100.83.37.188: transformer_inference .. [NO] ....... [OKAY]
257
+ 100.83.37.188: --------------------------------------------------
258
+ 100.83.37.188: DeepSpeed general environment info:
259
+ 100.83.37.188: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
260
+ 100.83.37.188: torch version .................... 2.1.1a0+gitb51c9f6
261
+ 100.83.37.188: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
262
+ 100.83.37.188: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
263
+ 100.83.37.188: deepspeed wheel compiled w. ...... torch 2.1
264
+ 100.83.37.188: shared memory (/dev/shm) size .... 503.75 GB
265
+ 100.83.37.188: --------------------------------------------------
266
+ 100.83.37.188: DeepSpeed C++/CUDA extension op report
267
+ 100.83.37.188: --------------------------------------------------
268
+ 100.83.37.188: NOTE: Ops not installed will be just-in-time (JIT) compiled at
269
+ 100.83.37.188: runtime if needed. Op compatibility means that your system
270
+ 100.83.37.188: meet the required dependencies to JIT install the op.
271
+ 100.83.37.188: --------------------------------------------------
272
+ 100.83.37.188: JIT compiled ops requires ninja
273
+ 100.83.37.188: ninja .................. [OKAY]
274
+ 100.83.37.188: --------------------------------------------------
275
+ 100.83.37.188: op name ................ installed .. compatible
276
+ 100.83.37.188: --------------------------------------------------
277
+ 100.83.37.188: cpu_adam ............... [NO] ....... [OKAY]
278
+ 100.83.37.188: fused_adam ............. [NO] ....... [OKAY]
279
+ 100.83.37.188: deepspeed_not_implemented [NO] ....... [OKAY]
280
+ 100.83.37.188: transformer_inference .. [NO] ....... [OKAY]
281
+ 100.83.37.188: --------------------------------------------------
282
+ 100.83.37.188: DeepSpeed general environment info:
283
+ 100.83.37.188: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
284
+ 100.83.37.188: torch version .................... 2.1.1a0+gitb51c9f6
285
+ 100.83.37.188: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
286
+ 100.83.37.188: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
287
+ 100.83.37.188: deepspeed wheel compiled w. ...... torch 2.1
288
+ 100.83.37.188: shared memory (/dev/shm) size .... 503.75 GB
289
+ 100.83.37.188: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
290
+ 100.83.37.188: To add an exception for this directory, call:
291
+ 100.83.37.188:
292
+ 100.83.37.188: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
293
+ 100.83.37.188: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
294
+ 100.83.37.188: Traceback (most recent call last):
295
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
296
+ 100.83.37.188: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
297
+ 100.83.37.188: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
298
+ 100.83.37.188: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
299
+ 100.83.37.188: To add an exception for this directory, call:
300
+ 100.83.37.188:
301
+ 100.83.37.188: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
302
+ 100.83.37.188: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
303
+ 100.83.37.188: Traceback (most recent call last):
304
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
305
+ 100.83.37.188: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
306
+ 100.83.37.188: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
307
+ 100.83.37.175: --------------------------------------------------
308
+ 100.83.37.175: DeepSpeed C++/CUDA extension op report
309
+ 100.83.37.175: --------------------------------------------------
310
+ 100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at
311
+ 100.83.37.175: runtime if needed. Op compatibility means that your system
312
+ 100.83.37.175: meet the required dependencies to JIT install the op.
313
+ 100.83.37.175: --------------------------------------------------
314
+ 100.83.37.175: JIT compiled ops requires ninja
315
+ 100.83.37.175: ninja .................. [OKAY]
316
+ 100.83.37.175: --------------------------------------------------
317
+ 100.83.37.175: op name ................ installed .. compatible
318
+ 100.83.37.175: --------------------------------------------------
319
+ 100.83.37.175: cpu_adam ............... [NO] ....... [OKAY]
320
+ 100.83.37.175: fused_adam ............. [NO] ....... [OKAY]
321
+ 100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY]
322
+ 100.83.37.175: transformer_inference .. [NO] ....... [OKAY]
323
+ 100.83.37.175: --------------------------------------------------
324
+ 100.83.37.175: DeepSpeed general environment info:
325
+ 100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
326
+ 100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6
327
+ 100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
328
+ 100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
329
+ 100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1
330
+ 100.83.37.175: shared memory (/dev/shm) size .... 503.75 GB
331
+ 100.83.37.175: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
332
+ 100.83.37.175: To add an exception for this directory, call:
333
+ 100.83.37.175:
334
+ 100.83.37.175: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
335
+ 100.83.37.175: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
336
+ 100.83.37.175: Traceback (most recent call last):
337
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
338
+ 100.83.37.175: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
339
+ 100.83.37.175: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
340
+ 100.83.37.175: --------------------------------------------------
341
+ 100.83.37.175: DeepSpeed C++/CUDA extension op report
342
+ 100.83.37.175: --------------------------------------------------
343
+ 100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at
344
+ 100.83.37.175: runtime if needed. Op compatibility means that your system
345
+ 100.83.37.175: meet the required dependencies to JIT install the op.
346
+ 100.83.37.175: --------------------------------------------------
347
+ 100.83.37.175: JIT compiled ops requires ninja
348
+ 100.83.37.175: ninja .................. [OKAY]
349
+ 100.83.37.175: --------------------------------------------------
350
+ 100.83.37.175: op name ................ installed .. compatible
351
+ 100.83.37.175: --------------------------------------------------
352
+ 100.83.37.175: cpu_adam ............... [NO] ....... [OKAY]
353
+ 100.83.37.175: fused_adam ............. [NO] ....... [OKAY]
354
+ 100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY]
355
+ 100.83.37.175: transformer_inference .. [NO] ....... [OKAY]
356
+ 100.83.37.175: --------------------------------------------------
357
+ 100.83.37.175: DeepSpeed general environment info:
358
+ 100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
359
+ 100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6
360
+ 100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
361
+ 100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
362
+ 100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1
363
+ 100.83.37.175: shared memory (/dev/shm) size .... 503.75 GB
364
+ 100.83.37.175: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
365
+ 100.83.37.175: To add an exception for this directory, call:
366
+ 100.83.37.175:
367
+ 100.83.37.175: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
368
+ 100.83.37.175: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
369
+ 100.83.37.175: Traceback (most recent call last):
370
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
371
+ 100.83.37.175: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
372
+ 100.83.37.175: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
373
+ 100.83.37.188: --------------------------------------------------
374
+ 100.83.37.188: DeepSpeed C++/CUDA extension op report
375
+ 100.83.37.188: --------------------------------------------------
376
+ 100.83.37.188: NOTE: Ops not installed will be just-in-time (JIT) compiled at
377
+ 100.83.37.188: runtime if needed. Op compatibility means that your system
378
+ 100.83.37.188: meet the required dependencies to JIT install the op.
379
+ 100.83.37.188: --------------------------------------------------
380
+ 100.83.37.188: JIT compiled ops requires ninja
381
+ 100.83.37.188: ninja .................. [OKAY]
382
+ 100.83.37.188: --------------------------------------------------
383
+ 100.83.37.188: op name ................ installed .. compatible
384
+ 100.83.37.188: --------------------------------------------------
385
+ 100.83.37.188: cpu_adam ............... [NO] ....... [OKAY]
386
+ 100.83.37.188: fused_adam ............. [NO] ....... [OKAY]
387
+ 100.83.37.188: deepspeed_not_implemented [NO] ....... [OKAY]
388
+ 100.83.37.188: transformer_inference .. [NO] ....... [OKAY]
389
+ 100.83.37.188: --------------------------------------------------
390
+ 100.83.37.188: DeepSpeed general environment info:
391
+ 100.83.37.188: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
392
+ 100.83.37.188: torch version .................... 2.1.1a0+gitb51c9f6
393
+ 100.83.37.188: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
394
+ 100.83.37.188: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
395
+ 100.83.37.188: deepspeed wheel compiled w. ...... torch 2.1
396
+ 100.83.37.188: shared memory (/dev/shm) size .... 503.75 GB
397
+ 100.83.37.188: --------------------------------------------------
398
+ 100.83.37.188: DeepSpeed C++/CUDA extension op report
399
+ 100.83.37.188: --------------------------------------------------
400
+ 100.83.37.188: NOTE: Ops not installed will be just-in-time (JIT) compiled at
401
+ 100.83.37.188: runtime if needed. Op compatibility means that your system
402
+ 100.83.37.188: meet the required dependencies to JIT install the op.
403
+ 100.83.37.188: --------------------------------------------------
404
+ 100.83.37.188: JIT compiled ops requires ninja
405
+ 100.83.37.188: ninja .................. [OKAY]
406
+ 100.83.37.188: --------------------------------------------------
407
+ 100.83.37.188: op name ................ installed .. compatible
408
+ 100.83.37.188: --------------------------------------------------
409
+ 100.83.37.188: cpu_adam ............... [NO] ....... [OKAY]
410
+ 100.83.37.188: fused_adam ............. [NO] ....... [OKAY]
411
+ 100.83.37.188: deepspeed_not_implemented [NO] ....... [OKAY]
412
+ 100.83.37.188: transformer_inference .. [NO] ....... [OKAY]
413
+ 100.83.37.188: --------------------------------------------------
414
+ 100.83.37.188: DeepSpeed general environment info:
415
+ 100.83.37.188: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
416
+ 100.83.37.188: torch version .................... 2.1.1a0+gitb51c9f6
417
+ 100.83.37.188: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
418
+ 100.83.37.188: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
419
+ 100.83.37.188: deepspeed wheel compiled w. ...... torch 2.1
420
+ 100.83.37.188: shared memory (/dev/shm) size .... 503.75 GB
421
+ 100.83.37.188: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
422
+ 100.83.37.188: To add an exception for this directory, call:
423
+ 100.83.37.188:
424
+ 100.83.37.188: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
425
+ 100.83.37.188: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
426
+ 100.83.37.188: Traceback (most recent call last):
427
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
428
+ 100.83.37.188: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
429
+ 100.83.37.188: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
430
+ 100.83.37.188: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
431
+ 100.83.37.188: To add an exception for this directory, call:
432
+ 100.83.37.188:
433
+ 100.83.37.188: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
434
+ 100.83.37.188: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
435
+ 100.83.37.188: Traceback (most recent call last):
436
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
437
+ 100.83.37.188: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
438
+ 100.83.37.188: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
439
+ 100.83.37.175: --------------------------------------------------
440
+ 100.83.37.175: DeepSpeed C++/CUDA extension op report
441
+ 100.83.37.175: --------------------------------------------------
442
+ 100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at
443
+ 100.83.37.175: runtime if needed. Op compatibility means that your system
444
+ 100.83.37.175: meet the required dependencies to JIT install the op.
445
+ 100.83.37.175: --------------------------------------------------
446
+ 100.83.37.175: JIT compiled ops requires ninja
447
+ 100.83.37.175: ninja .................. [OKAY]
448
+ 100.83.37.175: --------------------------------------------------
449
+ 100.83.37.175: op name ................ installed .. compatible
450
+ 100.83.37.175: --------------------------------------------------
451
+ 100.83.37.175: cpu_adam ............... [NO] ....... [OKAY]
452
+ 100.83.37.175: fused_adam ............. [NO] ....... [OKAY]
453
+ 100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY]
454
+ 100.83.37.175: transformer_inference .. [NO] ....... [OKAY]
455
+ 100.83.37.175: --------------------------------------------------
456
+ 100.83.37.175: DeepSpeed general environment info:
457
+ 100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
458
+ 100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6
459
+ 100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
460
+ 100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
461
+ 100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1
462
+ 100.83.37.175: shared memory (/dev/shm) size .... 503.75 GB
463
+ 100.83.37.175: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
464
+ 100.83.37.175: To add an exception for this directory, call:
465
+ 100.83.37.175:
466
+ 100.83.37.175: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
467
+ 100.83.37.175: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
468
+ 100.83.37.175: Traceback (most recent call last):
469
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
470
+ 100.83.37.175: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
471
+ 100.83.37.175: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
472
+ 100.83.37.175: --------------------------------------------------
473
+ 100.83.37.175: DeepSpeed C++/CUDA extension op report
474
+ 100.83.37.175: --------------------------------------------------
475
+ 100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at
476
+ 100.83.37.175: runtime if needed. Op compatibility means that your system
477
+ 100.83.37.175: meet the required dependencies to JIT install the op.
478
+ 100.83.37.175: --------------------------------------------------
479
+ 100.83.37.175: JIT compiled ops requires ninja
480
+ 100.83.37.175: ninja .................. [OKAY]
481
+ 100.83.37.175: --------------------------------------------------
482
+ 100.83.37.175: op name ................ installed .. compatible
483
+ 100.83.37.175: --------------------------------------------------
484
+ 100.83.37.175: cpu_adam ............... [NO] ....... [OKAY]
485
+ 100.83.37.175: fused_adam ............. [NO] ....... [OKAY]
486
+ 100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY]
487
+ 100.83.37.175: transformer_inference .. [NO] ....... [OKAY]
488
+ 100.83.37.175: --------------------------------------------------
489
+ 100.83.37.175: DeepSpeed general environment info:
490
+ 100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
491
+ 100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6
492
+ 100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
493
+ 100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
494
+ 100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1
495
+ 100.83.37.175: shared memory (/dev/shm) size .... 503.75 GB
496
+ 100.83.37.175: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
497
+ 100.83.37.175: To add an exception for this directory, call:
498
+ 100.83.37.175:
499
+ 100.83.37.175: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
500
+ 100.83.37.175: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
501
+ 100.83.37.175: Traceback (most recent call last):
502
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
503
+ 100.83.37.175: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
504
+ 100.83.37.175: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
505
+ 100.83.37.175: --------------------------------------------------
506
+ 100.83.37.175: DeepSpeed C++/CUDA extension op report
507
+ 100.83.37.175: --------------------------------------------------
508
+ 100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at
509
+ 100.83.37.175: runtime if needed. Op compatibility means that your system
510
+ 100.83.37.175: meet the required dependencies to JIT install the op.
511
+ 100.83.37.175: --------------------------------------------------
512
+ 100.83.37.175: JIT compiled ops requires ninja
513
+ 100.83.37.175: ninja .................. [OKAY]
514
+ 100.83.37.175: --------------------------------------------------
515
+ 100.83.37.175: op name ................ installed .. compatible
516
+ 100.83.37.175: --------------------------------------------------
517
+ 100.83.37.175: cpu_adam ............... [NO] ....... [OKAY]
518
+ 100.83.37.175: fused_adam ............. [NO] ....... [OKAY]
519
+ 100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY]
520
+ 100.83.37.175: transformer_inference .. [NO] ....... [OKAY]
521
+ 100.83.37.175: --------------------------------------------------
522
+ 100.83.37.175: DeepSpeed general environment info:
523
+ 100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
524
+ 100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6
525
+ 100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
526
+ 100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
527
+ 100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1
528
+ 100.83.37.175: shared memory (/dev/shm) size .... 503.75 GB
529
+ 100.83.37.175: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
530
+ 100.83.37.175: To add an exception for this directory, call:
531
+ 100.83.37.175:
532
+ 100.83.37.175: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
533
+ 100.83.37.175: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
534
+ 100.83.37.175: Traceback (most recent call last):
535
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
536
+ 100.83.37.175: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
537
+ 100.83.37.175: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
538
+ 100.83.37.175: --------------------------------------------------
539
+ 100.83.37.175: DeepSpeed C++/CUDA extension op report
540
+ 100.83.37.175: --------------------------------------------------
541
+ 100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at
542
+ 100.83.37.175: runtime if needed. Op compatibility means that your system
543
+ 100.83.37.175: meet the required dependencies to JIT install the op.
544
+ 100.83.37.175: --------------------------------------------------
545
+ 100.83.37.175: JIT compiled ops requires ninja
546
+ 100.83.37.175: ninja .................. [OKAY]
547
+ 100.83.37.175: --------------------------------------------------
548
+ 100.83.37.175: op name ................ installed .. compatible
549
+ 100.83.37.175: --------------------------------------------------
550
+ 100.83.37.175: cpu_adam ............... [NO] ....... [OKAY]
551
+ 100.83.37.175: fused_adam ............. [NO] ....... [OKAY]
552
+ 100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY]
553
+ 100.83.37.175: transformer_inference .. [NO] ....... [OKAY]
554
+ 100.83.37.175: --------------------------------------------------
555
+ 100.83.37.175: DeepSpeed general environment info:
556
+ 100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
557
+ 100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6
558
+ 100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
559
+ 100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
560
+ 100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1
561
+ 100.83.37.175: shared memory (/dev/shm) size .... 503.75 GB
562
+ 100.83.37.175: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
563
+ 100.83.37.175: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
564
+ 100.83.37.175: To add an exception for this directory, call:
565
+ 100.83.37.175:
566
+ 100.83.37.175: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
567
+ 100.83.37.175: Traceback (most recent call last):
568
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
569
+ 100.83.37.175: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
570
+ 100.83.37.175: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
571
+ 100.83.37.175: --------------------------------------------------
572
+ 100.83.37.175: DeepSpeed C++/CUDA extension op report
573
+ 100.83.37.175: --------------------------------------------------
574
+ 100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at
575
+ 100.83.37.175: runtime if needed. Op compatibility means that your system
576
+ 100.83.37.175: meet the required dependencies to JIT install the op.
577
+ 100.83.37.175: --------------------------------------------------
578
+ 100.83.37.175: JIT compiled ops requires ninja
579
+ 100.83.37.175: ninja .................. [OKAY]
580
+ 100.83.37.175: --------------------------------------------------
581
+ 100.83.37.175: op name ................ installed .. compatible
582
+ 100.83.37.175: --------------------------------------------------
583
+ 100.83.37.175: cpu_adam ............... [NO] ....... [OKAY]
584
+ 100.83.37.175: fused_adam ............. [NO] ....... [OKAY]
585
+ 100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY]
586
+ 100.83.37.175: transformer_inference .. [NO] ....... [OKAY]
587
+ 100.83.37.175: --------------------------------------------------
588
+ 100.83.37.175: DeepSpeed general environment info:
589
+ 100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
590
+ 100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6
591
+ 100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
592
+ 100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
593
+ 100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1
594
+ 100.83.37.175: shared memory (/dev/shm) size .... 503.75 GB
595
+ 100.83.37.175: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
596
+ 100.83.37.175: To add an exception for this directory, call:
597
+ 100.83.37.175:
598
+ 100.83.37.175: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
599
+ 100.83.37.175: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
600
+ 100.83.37.175: Traceback (most recent call last):
601
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
602
+ 100.83.37.175: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
603
+ 100.83.37.175: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
604
+ 100.83.37.175: --------------------------------------------------
605
+ 100.83.37.175: DeepSpeed C++/CUDA extension op report
606
+ 100.83.37.175: --------------------------------------------------
607
+ 100.83.37.175: NOTE: Ops not installed will be just-in-time (JIT) compiled at
608
+ 100.83.37.175: runtime if needed. Op compatibility means that your system
609
+ 100.83.37.175: meet the required dependencies to JIT install the op.
610
+ 100.83.37.175: --------------------------------------------------
611
+ 100.83.37.175: JIT compiled ops requires ninja
612
+ 100.83.37.175: ninja .................. [OKAY]
613
+ 100.83.37.175: --------------------------------------------------
614
+ 100.83.37.175: op name ................ installed .. compatible
615
+ 100.83.37.175: --------------------------------------------------
616
+ 100.83.37.175: cpu_adam ............... [NO] ....... [OKAY]
617
+ 100.83.37.175: fused_adam ............. [NO] ....... [OKAY]
618
+ 100.83.37.175: deepspeed_not_implemented [NO] ....... [OKAY]
619
+ 100.83.37.175: transformer_inference .. [NO] ....... [OKAY]
620
+ 100.83.37.175: --------------------------------------------------
621
+ 100.83.37.175: DeepSpeed general environment info:
622
+ 100.83.37.175: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
623
+ 100.83.37.175: torch version .................... 2.1.1a0+gitb51c9f6
624
+ 100.83.37.175: deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
625
+ 100.83.37.175: deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
626
+ 100.83.37.175: deepspeed wheel compiled w. ...... torch 2.1
627
+ 100.83.37.175: shared memory (/dev/shm) size .... 503.75 GB
628
+ 100.83.37.175: fatal: detected dubious ownership in repository at '/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed'
629
+ 100.83.37.175: To add an exception for this directory, call:
630
+ 100.83.37.175:
631
+ 100.83.37.175: git config --global --add safe.directory /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed
632
+ 100.83.37.175: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
633
+ 100.83.37.175: Traceback (most recent call last):
634
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
635
+ 100.83.37.175: pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
636
+ 100.83.37.175: TypeError: pretrain() missing 1 required positional argument: 'forward_step_func'
637
+ 100.83.37.188: [2024-05-13 09:34:20,462] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 124734
638
+ 100.83.37.188: [2024-05-13 09:34:20,463] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 124735
639
+ 100.83.37.188: [2024-05-13 09:34:20,464] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 124736
640
+ 100.83.37.188: [2024-05-13 09:34:20,464] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 124737
641
+ 100.83.37.188: [2024-05-13 09:34:20,464] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 124738
642
+ 100.83.37.188: [2024-05-13 09:34:20,464] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 124739
643
+ 100.83.37.188: [2024-05-13 09:34:20,465] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 124740
644
+ 100.83.37.188: [2024-05-13 09:34:20,465] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 124741
645
+ 100.83.37.188: [2024-05-13 09:34:20,465] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:34:09/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
646
+ 100.83.37.175: [2024-05-13 09:34:20,512] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 29973
647
+ 100.83.37.175: [2024-05-13 09:34:20,514] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 29974
648
+ 100.83.37.175: [2024-05-13 09:34:20,514] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 29975
649
+ 100.83.37.175: [2024-05-13 09:34:20,515] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 29976
650
+ 100.83.37.175: [2024-05-13 09:34:20,515] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 29979
651
+ 100.83.37.175: [2024-05-13 09:34:20,516] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 29982
652
+ 100.83.37.175: [2024-05-13 09:34:20,516] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 29987
653
+ 100.83.37.175: [2024-05-13 09:34:20,516] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 29988
654
+ 100.83.37.175: [2024-05-13 09:34:20,517] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:34:09/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:34:09/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
655
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
656
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
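Note on the failure above: every rank exits with the same TypeError at pretrain_llama.py line 110, which means the call site passes one fewer positional argument than pretrain() requires, leaving 'forward_step_func' unbound. A minimal, self-contained sketch of that mismatch; the extra parameter name model_type is an assumption, since the real signature in this checkout is not shown in the log:

# Hypothetical reduction of the error; only the call shape mirrors the log.
def pretrain(train_valid_test_datasets_provider, model_provider,
             model_type, forward_step_func, extra_args_provider=None):
    return forward_step_func

def datasets_provider(): pass
def model_provider(): pass
def forward_step(): pass

try:
    # Three positionals, as at pretrain_llama.py:110 -> forward_step binds to
    # model_type and forward_step_func is left unfilled.
    pretrain(datasets_provider, model_provider, forward_step,
             extra_args_provider=None)
except TypeError as err:
    print(err)  # pretrain() missing 1 required positional argument: 'forward_step_func'

Under that assumption, the fix is to supply the extra positional explicitly at the call site rather than changing the library signature.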
llama13b_multiling_800M/13-05-2024-09:34:09/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "MODEL": {
3
+ "num_hidden_layers": 24,
4
+ "hidden_size": 2048,
5
+ "num_attention_heads": 32,
6
+ "intermediate_size": 4096,
7
+ "vocab_size":VOCAB_SIZE
8
+ },
9
+ "LAYER_MAPPINGS" : {
10
+ "word_embeddings": 1,
11
+ "transformer": [3, 26],
12
+ "final_layernorm": 28,
13
+ "final_word_embeddings": 29
14
+ },
15
+ "FULL_NAME_MAPPINGS": {
16
+ },
17
+ "PARTIAL_NAME_MAPPINGS": {
18
+ "final_word_embeddings": {
19
+ "vocab_parallel_projection": "lm_head"
20
+ },
21
+ "final_layernorm": {
22
+ "final_rmsnorm": "model.norm"
23
+ },
24
+ "word_embeddings": {
25
+ "word_embeddings": "model.embed_tokens"
26
+ },
27
+ "transformer": {
28
+ "dense_h_to_4h": "mlp.gate_proj",
29
+ "dense_4h_to_h": "mlp.down_proj",
30
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
31
+ "post_attention_layernorm": "post_attention_layernorm",
32
+ "input_layernorm": "input_layernorm",
33
+ "dense": "self_attn.o_proj",
34
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
35
+ }
36
+ },
37
+ "SPECIAL": {
38
+ "query_key_value": "attention_qkv"
39
+ }
40
+ }
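The mapping template just added (mds_to_hf_llama_custom.json) pairs Megatron-DeepSpeed checkpoint tensor names with Hugging Face LLaMA module names and leaves VOCAB_SIZE as a placeholder to be filled in at conversion time. A minimal sketch of reading such a template, assuming the placeholder is substituted textually before parsing; the helper name and the example vocabulary size are illustrative, not from the repository:

import json

def load_mapping(path, vocab_size):
    # The template is not valid JSON while the placeholder is present
    # ("vocab_size":VOCAB_SIZE), so substitute before parsing.
    with open(path) as f:
        text = f.read()
    return json.loads(text.replace("VOCAB_SIZE", str(vocab_size)))

# Example (vocabulary size is illustrative):
# mapping = load_mapping("mds_to_hf_llama_custom.json", 32000)
# mapping["PARTIAL_NAME_MAPPINGS"]["transformer"]["dense"]  # -> "self_attn.o_proj"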
llama13b_multiling_800M/13-05-2024-09:58:53/ds_config.json ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "train_batch_size" : 256,
3
+ "train_micro_batch_size_per_gpu": 1,
4
+ "steps_per_print": 10,
5
+ "gradient_clipping": 1.0,
6
+ "zero_optimization": {
7
+ "stage": 0
8
+ },
9
+ "bf16": {
10
+ "enabled": true,
11
+ "accumulate_grads_via_hooks": true
12
+ },
13
+ "fp16": {"enabled": false},
14
+ "wall_clock_breakdown": false,
15
+ "pipeline": {
16
+ "pipe_partitioned": false,
17
+ "grad_partitioned": false
18
+ }
19
+ }
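The ds_config.json above fixes train_batch_size at 256 with a micro batch of 1 per device; across the 16 workers in this launch, DeepSpeed derives gradient accumulation as 256 / (1 * 16) = 16. A short sketch of that arithmetic and of handing the same file to deepspeed.initialize; the model object is a placeholder, not something from the log:

train_batch_size = 256      # from ds_config.json
micro_batch_per_gpu = 1     # from ds_config.json
world_size = 16             # 2 nodes x 8 HPU workers in the launch command
grad_accum_steps = train_batch_size // (micro_batch_per_gpu * world_size)
assert grad_accum_steps == 16

# Hypothetical use of the same config file:
# import deepspeed
# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(),
#     config="ds_config.json")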
llama13b_multiling_800M/13-05-2024-09:58:53/log.txt ADDED
@@ -0,0 +1,128 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 09:58:54,809] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 09:58:56,129] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 09:58:56,129] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 09:58:56,129] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:58:53/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.175: warnings.warn(
15
+ 100.83.37.175: [2024-05-13 09:58:57,783] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.188: warnings.warn(
18
+ 100.83.37.188: [2024-05-13 09:58:57,862] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 09:58:58,998] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 09:58:58,998] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 09:58:58,998] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 09:58:58,998] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 09:58:58,998] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
25
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
26
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
27
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
28
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
29
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
31
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
32
+ 100.83.37.188: ...done.
33
+ 100.83.37.188: ...done.
34
+ 100.83.37.188: ...done.
35
+ 100.83.37.188: ...done.
36
+ 100.83.37.188: ...done.
37
+ 100.83.37.188: ...done.
38
+ 100.83.37.188: ...done.
39
+ 100.83.37.188: ...done.
40
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
41
+ 100.83.37.188: )
42
+ 100.83.37.188: IndentationError: unexpected indent
43
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
44
+ 100.83.37.188: )
45
+ 100.83.37.188: IndentationError: unexpected indent
46
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
47
+ 100.83.37.188: )
48
+ 100.83.37.188: IndentationError: unexpected indent
49
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
50
+ 100.83.37.188: )
51
+ 100.83.37.188: IndentationError: unexpected indent
52
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
53
+ 100.83.37.188: )
54
+ 100.83.37.188: IndentationError: unexpected indent
55
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
56
+ 100.83.37.188: )
57
+ 100.83.37.188: IndentationError: unexpected indent
58
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
59
+ 100.83.37.188: )
60
+ 100.83.37.188: IndentationError: unexpected indent
61
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
62
+ 100.83.37.188: )
63
+ 100.83.37.188: IndentationError: unexpected indent
64
+ 100.83.37.175: [2024-05-13 09:58:59,320] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
65
+ 100.83.37.175: [2024-05-13 09:58:59,320] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
66
+ 100.83.37.175: [2024-05-13 09:58:59,320] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
67
+ 100.83.37.175: [2024-05-13 09:58:59,320] [INFO] [launch.py:164:main] dist_world_size=16
68
+ 100.83.37.175: [2024-05-13 09:58:59,320] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
69
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
70
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
71
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
72
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
73
+ 100.83.37.175: ...done.
74
+ 100.83.37.175: ...done.
75
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
76
+ 100.83.37.175: ...done.
77
+ 100.83.37.175: ...done.
78
+ 100.83.37.175: ...done.
79
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
80
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
81
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
82
+ 100.83.37.175: ...done.
83
+ 100.83.37.175: ...done.
84
+ 100.83.37.175: ...done.
85
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
86
+ 100.83.37.175: )
87
+ 100.83.37.175: IndentationError: unexpected indent
88
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
89
+ 100.83.37.175: )
90
+ 100.83.37.175: IndentationError: unexpected indent
91
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
92
+ 100.83.37.175: )
93
+ 100.83.37.175: IndentationError: unexpected indent
94
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
95
+ 100.83.37.175: )
96
+ 100.83.37.175: IndentationError: unexpected indent
97
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
98
+ 100.83.37.175: )
99
+ 100.83.37.175: IndentationError: unexpected indent
100
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
101
+ 100.83.37.175: )
102
+ 100.83.37.175: IndentationError: unexpected indent
103
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
104
+ 100.83.37.175: )
105
+ 100.83.37.175: IndentationError: unexpected indent
106
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
107
+ 100.83.37.175: )
108
+ 100.83.37.175: IndentationError: unexpected indent
109
+ 100.83.37.188: [2024-05-13 09:59:00,002] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 126123
110
+ 100.83.37.188: [2024-05-13 09:59:00,003] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 126124
111
+ 100.83.37.188: [2024-05-13 09:59:00,003] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 126125
112
+ 100.83.37.188: [2024-05-13 09:59:00,004] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 126126
113
+ 100.83.37.188: [2024-05-13 09:59:00,004] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 126127
114
+ 100.83.37.188: [2024-05-13 09:59:00,004] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 126128
115
+ 100.83.37.188: [2024-05-13 09:59:00,004] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 126129
116
+ 100.83.37.188: [2024-05-13 09:59:00,004] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 126130
117
+ 100.83.37.188: [2024-05-13 09:59:00,005] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:58:53/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
118
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
119
+ 100.83.37.175: [2024-05-13 09:59:00,324] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 31578
120
+ 100.83.37.175: [2024-05-13 09:59:00,326] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 31579
121
+ 100.83.37.175: [2024-05-13 09:59:00,326] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 31580
122
+ 100.83.37.175: [2024-05-13 09:59:00,327] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 31581
123
+ 100.83.37.175: [2024-05-13 09:59:00,327] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 31585
124
+ 100.83.37.175: [2024-05-13 09:59:00,328] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 31590
125
+ 100.83.37.175: [2024-05-13 09:59:00,328] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 31592
126
+ 100.83.37.175: [2024-05-13 09:59:00,328] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 31599
127
+ 100.83.37.175: [2024-05-13 09:59:00,329] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-09:58:53/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-09:58:53/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
128
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
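Note: the failure recorded above is an ordinary Python compile-time error at line 119 of pretrain_llama.py, reported once per launched rank; DeepSpeed and the HPU stack never start. A minimal, hypothetical sketch of the same error class (not the actual pretrain_llama.py source), where an unexpectedly indented token after a complete statement triggers the message seen in the log:

try:
    compile("x = 1\n    y = 2\n", "<example>", "exec")   # second line indented for no reason
except IndentationError as err:
    print(type(err).__name__ + ":", err.msg)             # -> IndentationError: unexpected indent

In this run the over-indented token is the stray ")" shown at line 119.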
llama13b_multiling_800M/13-05-2024-09:58:53/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "MODEL": {
+ "num_hidden_layers": 24,
+ "hidden_size": 2048,
+ "num_attention_heads": 32,
+ "intermediate_size": 4096,
+ "vocab_size":VOCAB_SIZE
+ },
+ "LAYER_MAPPINGS" : {
+ "word_embeddings": 1,
+ "transformer": [3, 26],
+ "final_layernorm": 28,
+ "final_word_embeddings": 29
+ },
+ "FULL_NAME_MAPPINGS": {
+ },
+ "PARTIAL_NAME_MAPPINGS": {
+ "final_word_embeddings": {
+ "vocab_parallel_projection": "lm_head"
+ },
+ "final_layernorm": {
+ "final_rmsnorm": "model.norm"
+ },
+ "word_embeddings": {
+ "word_embeddings": "model.embed_tokens"
+ },
+ "transformer": {
+ "dense_h_to_4h": "mlp.gate_proj",
+ "dense_4h_to_h": "mlp.down_proj",
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
+ "post_attention_layernorm": "post_attention_layernorm",
+ "input_layernorm": "input_layernorm",
+ "dense": "self_attn.o_proj",
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
+ }
+ },
+ "SPECIAL": {
+ "query_key_value": "attention_qkv"
+ }
+ }
llama13b_multiling_800M/13-05-2024-09:59:29/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "train_batch_size" : 256,
+ "train_micro_batch_size_per_gpu": 1,
+ "steps_per_print": 10,
+ "gradient_clipping": 1.0,
+ "zero_optimization": {
+ "stage": 0
+ },
+ "bf16": {
+ "enabled": true,
+ "accumulate_grads_via_hooks": true
+ },
+ "fp16": {"enabled": false},
+ "wall_clock_breakdown": false,
+ "pipeline": {
+ "pipe_partitioned": false,
+ "grad_partitioned": false
+ }
+ }
llama13b_multiling_800M/13-05-2024-09:59:29/log.txt ADDED
The diff for this file is too large to render. See raw diff
 
llama13b_multiling_800M/13-05-2024-09:59:29/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "MODEL": {
+ "num_hidden_layers": 24,
+ "hidden_size": 2048,
+ "num_attention_heads": 32,
+ "intermediate_size": 4096,
+ "vocab_size":VOCAB_SIZE
+ },
+ "LAYER_MAPPINGS" : {
+ "word_embeddings": 1,
+ "transformer": [3, 26],
+ "final_layernorm": 28,
+ "final_word_embeddings": 29
+ },
+ "FULL_NAME_MAPPINGS": {
+ },
+ "PARTIAL_NAME_MAPPINGS": {
+ "final_word_embeddings": {
+ "vocab_parallel_projection": "lm_head"
+ },
+ "final_layernorm": {
+ "final_rmsnorm": "model.norm"
+ },
+ "word_embeddings": {
+ "word_embeddings": "model.embed_tokens"
+ },
+ "transformer": {
+ "dense_h_to_4h": "mlp.gate_proj",
+ "dense_4h_to_h": "mlp.down_proj",
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
+ "post_attention_layernorm": "post_attention_layernorm",
+ "input_layernorm": "input_layernorm",
+ "dense": "self_attn.o_proj",
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
+ }
+ },
+ "SPECIAL": {
+ "query_key_value": "attention_qkv"
+ }
+ }
llama13b_multiling_800M/13-05-2024-11:50:01/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "train_batch_size" : 256,
+ "train_micro_batch_size_per_gpu": 1,
+ "steps_per_print": 10,
+ "gradient_clipping": 1.0,
+ "zero_optimization": {
+ "stage": 0
+ },
+ "bf16": {
+ "enabled": true,
+ "accumulate_grads_via_hooks": true
+ },
+ "fp16": {"enabled": false},
+ "wall_clock_breakdown": false,
+ "pipeline": {
+ "pipe_partitioned": false,
+ "grad_partitioned": false
+ }
+ }
llama13b_multiling_800M/13-05-2024-11:50:01/log.txt ADDED
The diff for this file is too large to render. See raw diff
 
llama13b_multiling_800M/13-05-2024-11:50:01/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "MODEL": {
+ "num_hidden_layers": 24,
+ "hidden_size": 2048,
+ "num_attention_heads": 32,
+ "intermediate_size": 4096,
+ "vocab_size":VOCAB_SIZE
+ },
+ "LAYER_MAPPINGS" : {
+ "word_embeddings": 1,
+ "transformer": [3, 26],
+ "final_layernorm": 28,
+ "final_word_embeddings": 29
+ },
+ "FULL_NAME_MAPPINGS": {
+ },
+ "PARTIAL_NAME_MAPPINGS": {
+ "final_word_embeddings": {
+ "vocab_parallel_projection": "lm_head"
+ },
+ "final_layernorm": {
+ "final_rmsnorm": "model.norm"
+ },
+ "word_embeddings": {
+ "word_embeddings": "model.embed_tokens"
+ },
+ "transformer": {
+ "dense_h_to_4h": "mlp.gate_proj",
+ "dense_4h_to_h": "mlp.down_proj",
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
+ "post_attention_layernorm": "post_attention_layernorm",
+ "input_layernorm": "input_layernorm",
+ "dense": "self_attn.o_proj",
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
+ }
+ },
+ "SPECIAL": {
+ "query_key_value": "attention_qkv"
+ }
+ }
llama13b_multiling_800M/13-05-2024-11:52:31/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "train_batch_size" : 256,
+ "train_micro_batch_size_per_gpu": 1,
+ "steps_per_print": 10,
+ "gradient_clipping": 1.0,
+ "zero_optimization": {
+ "stage": 0
+ },
+ "bf16": {
+ "enabled": true,
+ "accumulate_grads_via_hooks": true
+ },
+ "fp16": {"enabled": false},
+ "wall_clock_breakdown": false,
+ "pipeline": {
+ "pipe_partitioned": false,
+ "grad_partitioned": false
+ }
+ }
llama13b_multiling_800M/13-05-2024-11:52:31/log.txt ADDED
@@ -0,0 +1,144 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-05-13 11:52:33,447] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-05-13 11:52:34,769] [INFO] [runner.py:574:main] deepspeed_env file = /root/.deepspeed_env
5
+ [2024-05-13 11:52:34,769] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: 100.83.37.175,100.83.37.188
6
+ [2024-05-13 11:52:34,769] [INFO] [runner.py:585:main] cmd = pdsh -S -f 1024 -w 100.83.37.175,100.83.37.188 export PYTHONPATH=/Model-References/PyTorch/common:/usr/lib/habanalabs/:$PYTHONPATH; export PYTHON=/usr/bin/python3; export PT_HPU_LAZY_ACC_PAR_MODE=0; export PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES=0; export PT_HPU_ENABLE_WEIGHT_CPU_PERMUTE=0; export MODEL_REFERENCES_ROOT=/Model-References; export ENABLE_CONSOLE=false; export LOG_LEVEL_ALL=4; cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed; /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMzcuMTc1IjogWzAsIDEsIDIsIDMsIDQsIDUsIDYsIDddLCAiMTAwLjgzLjM3LjE4OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --node_rank=%n --master_addr=100.83.37.175 --master_port=29500 --no_python --no_local_rank /usr/bin/bash -c ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-11:52:31/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'
7
+ 100.83.37.175: Warning: Permanently added '[100.83.37.175]:3122' (ED25519) to the list of known hosts.
8
+ 100.83.37.188: Warning: Permanently added '[100.83.37.188]:3122' (ED25519) to the list of known hosts.
9
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
10
+ 100.83.37.175: ...done.
11
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
12
+ 100.83.37.188: ...done.
13
+ 100.83.37.175: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ 100.83.37.175: warnings.warn(
15
+ 100.83.37.175: [2024-05-13 11:52:36,438] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ 100.83.37.188: /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ 100.83.37.188: warnings.warn(
18
+ 100.83.37.188: [2024-05-13 11:52:36,472] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ 100.83.37.188: [2024-05-13 11:52:37,597] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
20
+ 100.83.37.188: [2024-05-13 11:52:37,597] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=1
21
+ 100.83.37.188: [2024-05-13 11:52:37,597] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
22
+ 100.83.37.188: [2024-05-13 11:52:37,597] [INFO] [launch.py:164:main] dist_world_size=16
23
+ 100.83.37.188: [2024-05-13 11:52:37,597] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
24
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
25
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
26
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
27
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
28
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
29
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
30
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
31
+ 100.83.37.188: * Starting OpenBSD Secure Shell server sshd
32
+ 100.83.37.188: ...done.
33
+ 100.83.37.188: ...done.
34
+ 100.83.37.188: ...done.
35
+ 100.83.37.188: ...done.
36
+ 100.83.37.188: ...done.
37
+ 100.83.37.188: ...done.
38
+ 100.83.37.188: ...done.
39
+ 100.83.37.188: ...done.
40
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
41
+ 100.83.37.188: data_post_process=data_post_process)
42
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
43
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
44
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
45
+ 100.83.37.188: data_post_process=data_post_process)
46
+ 100.83.37.188: data_post_process=data_post_process)
47
+ 100.83.37.188: data_post_process=data_post_process)
48
+ 100.83.37.188: ^ ^
49
+ 100.83.37.188: ^
50
+ 100.83.37.188:
51
+ 100.83.37.188: SyntaxError^SyntaxErrorSyntaxError:
52
+ 100.83.37.188: : : positional argument follows keyword argumentpositional argument follows keyword argumentpositional argument follows keyword argumentSyntaxError
53
+ 100.83.37.188:
54
+ 100.83.37.188:
55
+ 100.83.37.188: : positional argument follows keyword argument
56
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
57
+ 100.83.37.188: data_post_process=data_post_process)
58
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
59
+ 100.83.37.188: data_post_process=data_post_process)
60
+ 100.83.37.188: ^
61
+ 100.83.37.188: SyntaxError : positional argument follows keyword argument
62
+ 100.83.37.188: ^
63
+ 100.83.37.188: SyntaxError: positional argument follows keyword argument
64
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
65
+ 100.83.37.188: data_post_process=data_post_process)
66
+ 100.83.37.188: ^
67
+ 100.83.37.188: SyntaxError: positional argument follows keyword argument
68
+ 100.83.37.188: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
69
+ 100.83.37.188: data_post_process=data_post_process)
70
+ 100.83.37.188: ^
71
+ 100.83.37.188: SyntaxError: positional argument follows keyword argument
72
+ 100.83.37.175: [2024-05-13 11:52:37,832] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [0, 1, 2, 3, 4, 5, 6, 7]}
73
+ 100.83.37.175: [2024-05-13 11:52:37,832] [INFO] [launch.py:152:main] nnodes=2, num_local_procs=8, node_rank=0
74
+ 100.83.37.175: [2024-05-13 11:52:37,832] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.37.175': [0, 1, 2, 3, 4, 5, 6, 7], '100.83.37.188': [8, 9, 10, 11, 12, 13, 14, 15]})
75
+ 100.83.37.175: [2024-05-13 11:52:37,832] [INFO] [launch.py:164:main] dist_world_size=16
76
+ 100.83.37.175: [2024-05-13 11:52:37,832] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
77
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
78
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
79
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
80
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
81
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
82
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
83
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
84
+ 100.83.37.175: ...done.
85
+ 100.83.37.175: ...done.
86
+ 100.83.37.175: ...done.
87
+ 100.83.37.175: ...done.
88
+ 100.83.37.175: ...done.
89
+ 100.83.37.175: ...done.
90
+ 100.83.37.175: ...done.
91
+ 100.83.37.175: * Starting OpenBSD Secure Shell server sshd
92
+ 100.83.37.175: ...done.
93
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
94
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
95
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
96
+ 100.83.37.175: data_post_process=data_post_process)
97
+ 100.83.37.175: data_post_process=data_post_process)
98
+ 100.83.37.175: data_post_process=data_post_process)
99
+ 100.83.37.175: ^
100
+ 100.83.37.175: ^^SyntaxError
101
+ 100.83.37.175:
102
+ 100.83.37.175: : positional argument follows keyword argument
103
+ 100.83.37.175: SyntaxErrorSyntaxError: : positional argument follows keyword argumentpositional argument follows keyword argument
104
+ 100.83.37.175:
105
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
106
+ 100.83.37.175: data_post_process=data_post_process)
107
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
108
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
109
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
110
+ 100.83.37.175: data_post_process=data_post_process)
111
+ 100.83.37.175: data_post_process=data_post_process)
112
+ 100.83.37.175: data_post_process=data_post_process)
113
+ 100.83.37.175: File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 119
114
+ 100.83.37.175: data_post_process=data_post_process)
115
+ 100.83.37.175: ^
116
+ 100.83.37.175: SyntaxError : positional argument follows keyword argument
117
+ 100.83.37.175: ^
118
+ 100.83.37.175: SyntaxError : positional argument follows keyword argument^
119
+ 100.83.37.175: ^
120
+ 100.83.37.175: SyntaxError
121
+ 100.83.37.175: : SyntaxErrorpositional argument follows keyword argument :
122
+ 100.83.37.175: positional argument follows keyword argument
123
+ 100.83.37.175: ^
124
+ 100.83.37.175: SyntaxError: positional argument follows keyword argument
125
+ 100.83.37.188: [2024-05-13 11:52:38,601] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 129388
126
+ 100.83.37.188: [2024-05-13 11:52:38,602] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 129389
127
+ 100.83.37.188: [2024-05-13 11:52:38,602] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 129390
128
+ 100.83.37.188: [2024-05-13 11:52:38,603] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 129391
129
+ 100.83.37.188: [2024-05-13 11:52:38,603] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 129392
130
+ 100.83.37.188: [2024-05-13 11:52:38,603] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 129393
131
+ 100.83.37.188: [2024-05-13 11:52:38,603] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 129394
132
+ 100.83.37.188: [2024-05-13 11:52:38,604] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 129395
133
+ 100.83.37.188: [2024-05-13 11:52:38,604] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-11:52:31/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
134
+ 100.83.37.175: [2024-05-13 11:52:38,836] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 35482
135
+ 100.83.37.175: [2024-05-13 11:52:38,838] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 35483
136
+ 100.83.37.175: [2024-05-13 11:52:38,838] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 35484
137
+ 100.83.37.175: [2024-05-13 11:52:38,839] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 35485
138
+ 100.83.37.175: [2024-05-13 11:52:38,839] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 35488
139
+ 100.83.37.175: [2024-05-13 11:52:38,840] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 35489
140
+ 100.83.37.175: [2024-05-13 11:52:38,840] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 35492
141
+ 100.83.37.175: [2024-05-13 11:52:38,840] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 35496
142
+ 100.83.37.175: [2024-05-13 11:52:38,841] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 24 --hidden-size 2048 --ffn-hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 500 --data-path /data/hineng/tokenizer//_raw_content_document --vocab-file /data/hineng/tokenizer//gpt2-vocab.json --merge-file /data/hineng/tokenizer//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_multiling_800M/13-05-2024-11:52:31/ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/checkpoints_zero_stage_2 --hf-save /data/output/llama13b_multiling_800M/13-05-2024-11:52:31/hf_ckpt --save-interval 500 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
143
+ pdsh@vizzhy-150-3: 100.83.37.188: ssh exited with exit code 1
144
+ pdsh@vizzhy-150-3: 100.83.37.175: ssh exited with exit code 1
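Note: this run fails at the same place, line 119 of pretrain_llama.py, but now because a positional argument follows the keyword argument data_post_process=data_post_process somewhere in that call, which Python rejects at compile time on every rank. A minimal, hypothetical illustration of the rule (not the project's code):

try:
    compile("f(a, b=1, c)\n", "<example>", "exec")   # positional 'c' after keyword 'b=1'
except SyntaxError as err:
    print(type(err).__name__ + ":", err.msg)         # -> SyntaxError: positional argument follows keyword argument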
llama13b_multiling_800M/13-05-2024-11:52:31/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "MODEL": {
+ "num_hidden_layers": 24,
+ "hidden_size": 2048,
+ "num_attention_heads": 32,
+ "intermediate_size": 4096,
+ "vocab_size":VOCAB_SIZE
+ },
+ "LAYER_MAPPINGS" : {
+ "word_embeddings": 1,
+ "transformer": [3, 26],
+ "final_layernorm": 28,
+ "final_word_embeddings": 29
+ },
+ "FULL_NAME_MAPPINGS": {
+ },
+ "PARTIAL_NAME_MAPPINGS": {
+ "final_word_embeddings": {
+ "vocab_parallel_projection": "lm_head"
+ },
+ "final_layernorm": {
+ "final_rmsnorm": "model.norm"
+ },
+ "word_embeddings": {
+ "word_embeddings": "model.embed_tokens"
+ },
+ "transformer": {
+ "dense_h_to_4h": "mlp.gate_proj",
+ "dense_4h_to_h": "mlp.down_proj",
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
+ "post_attention_layernorm": "post_attention_layernorm",
+ "input_layernorm": "input_layernorm",
+ "dense": "self_attn.o_proj",
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
+ }
+ },
+ "SPECIAL": {
+ "query_key_value": "attention_qkv"
+ }
+ }
llama13b_multiling_800M/13-05-2024-11:55:44/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "train_batch_size" : 256,
+ "train_micro_batch_size_per_gpu": 1,
+ "steps_per_print": 10,
+ "gradient_clipping": 1.0,
+ "zero_optimization": {
+ "stage": 0
+ },
+ "bf16": {
+ "enabled": true,
+ "accumulate_grads_via_hooks": true
+ },
+ "fp16": {"enabled": false},
+ "wall_clock_breakdown": false,
+ "pipeline": {
+ "pipe_partitioned": false,
+ "grad_partitioned": false
+ }
+ }
llama13b_multiling_800M/13-05-2024-11:55:44/log.txt ADDED
The diff for this file is too large to render. See raw diff
 
llama13b_multiling_800M/13-05-2024-11:55:44/mds_to_hf_llama_custom.json ADDED
@@ -0,0 +1,40 @@
+ {
+ "MODEL": {
+ "num_hidden_layers": 24,
+ "hidden_size": 2048,
+ "num_attention_heads": 32,
+ "intermediate_size": 4096,
+ "vocab_size":VOCAB_SIZE
+ },
+ "LAYER_MAPPINGS" : {
+ "word_embeddings": 1,
+ "transformer": [3, 26],
+ "final_layernorm": 28,
+ "final_word_embeddings": 29
+ },
+ "FULL_NAME_MAPPINGS": {
+ },
+ "PARTIAL_NAME_MAPPINGS": {
+ "final_word_embeddings": {
+ "vocab_parallel_projection": "lm_head"
+ },
+ "final_layernorm": {
+ "final_rmsnorm": "model.norm"
+ },
+ "word_embeddings": {
+ "word_embeddings": "model.embed_tokens"
+ },
+ "transformer": {
+ "dense_h_to_4h": "mlp.gate_proj",
+ "dense_4h_to_h": "mlp.down_proj",
+ "dense_h_to_4h_swiglu": "mlp.up_proj",
+ "post_attention_layernorm": "post_attention_layernorm",
+ "input_layernorm": "input_layernorm",
+ "dense": "self_attn.o_proj",
+ "query_key_value": {"query": "self_attn.q_proj", "key": "self_attn.k_proj", "value": "self_attn.v_proj"}
+ }
+ },
+ "SPECIAL": {
+ "query_key_value": "attention_qkv"
+ }
+ }
llama13b_x/ds_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "train_batch_size" : 256,
+ "train_micro_batch_size_per_gpu": 1,
+ "steps_per_print": 10,
+ "gradient_clipping": 1.0,
+ "zero_optimization": {
+ "stage": 0
+ },
+ "bf16": {
+ "enabled": true,
+ "accumulate_grads_via_hooks": true
+ },
+ "fp16": {"enabled": false},
+ "wall_clock_breakdown": false,
+ "pipeline": {
+ "pipe_partitioned": false,
+ "grad_partitioned": false
+ }
+ }
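Note: every ds_config.json in this upload pins train_batch_size to 256 with train_micro_batch_size_per_gpu of 1. DeepSpeed requires train_batch_size = micro_batch_per_gpu × gradient_accumulation_steps × data-parallel world size, so the implied accumulation is 16 for the two-node runs above (dist_world_size=16) and 32 for the single-node llama13b_x run below (dist_world_size=8). A minimal consistency check, using a hypothetical helper that is not part of the repo:

import json

def implied_grad_accum(cfg_path, world_size):
    # train_batch_size == micro_batch_per_gpu * grad_accum_steps * data_parallel_world_size
    with open(cfg_path) as f:
        cfg = json.load(f)
    micro, total = cfg["train_micro_batch_size_per_gpu"], cfg["train_batch_size"]
    assert total % (micro * world_size) == 0, "inconsistent batch sizes"
    return total // (micro * world_size)

# implied_grad_accum("llama13b_x/ds_config.json", world_size=8)   -> 32
# implied_grad_accum("llama13b_x/ds_config.json", world_size=16)  -> 16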
llama13b_x/log.txt ADDED
@@ -0,0 +1,857 @@
1
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
2
+ warnings.warn(
3
+ [2024-04-17 13:35:41,775] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
4
+ [2024-04-17 13:36:22,312] [INFO] [runner.py:585:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyIxMDAuODMuMTM0LjE1OCI6IFswLCAxLCAyLCAzLCA0LCA1LCA2LCA3XX0= --master_addr=100.83.134.158 --master_port=29500 --no_python --no_local_rank --enable_each_rank_log=None /usr/bin/bash -c cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 16 --hidden-size 5120 --ffn-hidden-size 13824 --num-attention-heads 40 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 100 --data-path /data/arxiv//tokenized_text_document --vocab-file /data/arxiv//gpt2-vocab.json --merge-file /data/arxiv//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_x//tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_x//checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_x//ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_x//checkpoints_zero_stage_2 --hf-save /data/output/llama13b_x//hf_ckpt --save-interval 100 --verify-checkpoint --verify-checkpoint-model-type LLAMA
5
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
6
+ warnings.warn(
7
+ [2024-04-17 13:36:23,803] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
8
+ [2024-04-17 13:36:24,923] [INFO] [launch.py:146:main] WORLD INFO DICT: {'100.83.134.158': [0, 1, 2, 3, 4, 5, 6, 7]}
9
+ [2024-04-17 13:36:24,923] [INFO] [launch.py:152:main] nnodes=1, num_local_procs=8, node_rank=0
10
+ [2024-04-17 13:36:24,923] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'100.83.134.158': [0, 1, 2, 3, 4, 5, 6, 7]})
11
+ [2024-04-17 13:36:24,923] [INFO] [launch.py:164:main] dist_world_size=8
12
+ [2024-04-17 13:36:24,923] [INFO] [launch.py:166:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
13
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
14
+ warnings.warn(
15
+ [2024-04-17 13:36:26,736] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
16
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
17
+ warnings.warn(
18
+ [2024-04-17 13:36:26,736] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
19
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
20
+ warnings.warn(
21
+ [2024-04-17 13:36:26,798] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
22
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
23
+ warnings.warn(
24
+ [2024-04-17 13:36:26,801] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
25
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
26
+ warnings.warn(
27
+ [2024-04-17 13:36:26,864] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
28
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
29
+ warnings.warn(
30
+ [2024-04-17 13:36:26,902] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
31
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
32
+ warnings.warn(
33
+ [2024-04-17 13:36:26,912] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
34
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
35
+ warnings.warn(
36
+ [2024-04-17 13:36:26,977] [INFO] [real_accelerator.py:178:get_accelerator] Setting ds_accelerator to hpu (auto detect)
37
+ --------------------------------------------------
38
+ DeepSpeed C++/CUDA extension op report
39
+ --------------------------------------------------
40
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
41
+ runtime if needed. Op compatibility means that your system
42
+ meet the required dependencies to JIT install the op.
43
+ --------------------------------------------------
44
+ JIT compiled ops requires ninja
45
+ ninja .................. [OKAY]
46
+ --------------------------------------------------
47
+ op name ................ installed .. compatible
48
+ --------------------------------------------------
49
+ cpu_adam ............... [NO] ....... [OKAY]
50
+ fused_adam ............. [NO] ....... [OKAY]
51
+ deepspeed_not_implemented [NO] ....... [OKAY]
52
+ transformer_inference .. [NO] ....... [OKAY]
53
+ --------------------------------------------------
54
+ DeepSpeed general environment info:
55
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
56
+ torch version .................... 2.1.1a0+gitb51c9f6
57
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
58
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
59
+ deepspeed wheel compiled w. ...... torch 2.1
60
+ shared memory (/dev/shm) size .... 503.72 GB
61
+ --------------------------------------------------
62
+ DeepSpeed C++/CUDA extension op report
63
+ --------------------------------------------------
64
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
65
+ runtime if needed. Op compatibility means that your system
66
+ meet the required dependencies to JIT install the op.
67
+ --------------------------------------------------
68
+ JIT compiled ops requires ninja
69
+ ninja .................. [OKAY]
70
+ --------------------------------------------------
71
+ op name ................ installed .. compatible
72
+ --------------------------------------------------
73
+ cpu_adam ............... [NO] ....... [OKAY]
74
+ fused_adam ............. [NO] ....... [OKAY]
75
+ deepspeed_not_implemented [NO] ....... [OKAY]
76
+ transformer_inference .. [NO] ....... [OKAY]
77
+ --------------------------------------------------
78
+ DeepSpeed general environment info:
79
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
80
+ torch version .................... 2.1.1a0+gitb51c9f6
81
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
82
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
83
+ deepspeed wheel compiled w. ...... torch 2.1
84
+ shared memory (/dev/shm) size .... 503.72 GB
85
+ fatal: detected dubious ownership in repository at '/Model-References'
86
+ To add an exception for this directory, call:
87
+
88
+ git config --global --add safe.directory /Model-References
89
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
90
+ fatal: detected dubious ownership in repository at '/Model-References'
91
+ To add an exception for this directory, call:
92
+
93
+ git config --global --add safe.directory /Model-References
94
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
95
+ --------------------------------------------------
96
+ DeepSpeed C++/CUDA extension op report
97
+ --------------------------------------------------
98
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
99
+ runtime if needed. Op compatibility means that your system
100
+ meet the required dependencies to JIT install the op.
101
+ --------------------------------------------------
102
+ JIT compiled ops requires ninja
103
+ ninja .................. [OKAY]
104
+ --------------------------------------------------
105
+ op name ................ installed .. compatible
106
+ --------------------------------------------------
107
+ cpu_adam ............... [NO] ....... [OKAY]
108
+ fused_adam ............. [NO] ....... [OKAY]
109
+ deepspeed_not_implemented [NO] ....... [OKAY]
110
+ transformer_inference .. [NO] ....... [OKAY]
111
+ --------------------------------------------------
112
+ DeepSpeed general environment info:
113
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
114
+ torch version .................... 2.1.1a0+gitb51c9f6
115
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
116
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
117
+ deepspeed wheel compiled w. ...... torch 2.1
118
+ shared memory (/dev/shm) size .... 503.72 GB
119
+ fatal: detected dubious ownership in repository at '/Model-References'
120
+ To add an exception for this directory, call:
121
+
122
+ git config --global --add safe.directory /Model-References
123
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
124
+ using world size: 8, data-parallel-size: 8, tensor-model-parallel size: 1, pipeline-model-parallel size: 1
125
+ accumulate and all-reduce gradients in fp32 for bfloat16 data type.
126
+ using torch.bfloat16 for parameters ...
127
+ ------------------------ arguments ------------------------
128
+ accumulate_allreduce_grads_in_fp32 .............. True
129
+ activation_func_type ............................ swiglu
130
+ adam_beta1 ...................................... 0.9
131
+ adam_beta2 ...................................... 0.95
132
+ adam_eps ........................................ 1e-06
133
+ adlr_autoresume ................................. False
134
+ adlr_autoresume_interval ........................ 1000
135
+ aml_data_download_path .......................... None
136
+ apply_layernorm_weight_plus_one ................. False
137
+ apply_query_key_layer_scaling ................... True
138
+ apply_residual_connection_post_layernorm ........ False
139
+ attention_dropout ............................... 0.1
140
+ attention_softmax_in_fp32 ....................... False
141
+ bert_binary_head ................................ True
142
+ bert_load ....................................... None
143
+ bf16 ............................................ True
144
+ bias_dropout_fusion ............................. False
145
+ bias_gelu_fusion ................................ False
146
+ biencoder_projection_dim ........................ 0
147
+ biencoder_shared_query_context_model ............ False
148
+ block_data_path ................................. None
149
+ cache_fp8_weight ................................ False
150
+ cache_fp8_weight_fwd ............................ True
151
+ checkpoint_activations .......................... False
152
+ checkpoint_activations_granularity .............. full
153
+ checkpoint_in_cpu ............................... False
154
+ checkpoint_num_layers ........................... 1
155
+ clearml_config_path ............................. None
156
+ clearml_continue_exp ............................ False
157
+ clearml_exp_name ................................ None
158
+ clip_grad ....................................... 1.0
159
+ compression_training ............................ False
160
+ consumed_train_samples .......................... 0
161
+ consumed_train_tokens ........................... 0
162
+ consumed_valid_samples .......................... 0
163
+ contigious_checkpointing ........................ False
164
+ cpu_optimizer ................................... False
165
+ cpu_torch_adam .................................. False
166
+ create_moe_param_group .......................... False
167
+ curriculum_learning ............................. False
168
+ data_idx_path ................................... None
169
+ data_impl ....................................... infer
170
+ data_parallel_size .............................. 8
171
+ data_path ....................................... ['/data/arxiv//tokenized_text_document']
172
+ data_sharding ................................... True
173
+ dataloader_type ................................. single
174
+ DDP_impl ........................................ local
175
+ decoder_seq_length .............................. None
176
+ deepscale ....................................... False
177
+ deepscale_config ................................ None
178
+ deepspeed ....................................... True
179
+ deepspeed_activation_checkpointing .............. False
180
+ deepspeed_config ................................ /data/output/llama13b_x//ds_config.json
181
+ deepspeed_mpi ................................... False
182
+ distribute_checkpointed_activations ............. False
183
+ distributed_backend ............................. hccl
184
+ do_layernorm_bias_weight_decay .................. False
185
+ do_pretrain_validation .......................... False
186
+ ds_inference .................................... False
187
+ ds_pipeline_enabled ............................. True
188
+ embed_layernorm ................................. False
189
+ embedding_path .................................. None
190
+ enable_expert_tensor_parallelism ................ False
191
+ encoder_seq_length .............................. 2048
192
+ eod_mask_loss ................................... False
193
+ eval_interval ................................... 100
194
+ eval_iters ...................................... 10
195
+ eval_loss_exit_value ............................ None
196
+ eval_micro_batch_size ........................... 1
197
+ evidence_data_path .............................. None
198
+ exit_duration_in_mins ........................... None
199
+ exit_interval ................................... 0
200
+ expert_interval ................................. 2
201
+ ffn_hidden_coeff ................................ 2.6666666666666665
202
+ ffn_hidden_size ................................. 13824
203
+ finetune ........................................ False
204
+ fix_position_emb_redundant_alloc ................ False
205
+ flatten_linear_operands ......................... False
206
+ fp16 ............................................ False
207
+ fp16_lm_cross_entropy ........................... False
208
+ fp32_residual_connection ........................ False
209
+ global_batch_size ............................... 256
210
+ hf_save ......................................... /data/output/llama13b_x//hf_ckpt
211
+ hidden_dropout .................................. 0.1
212
+ hidden_size ..................................... 5120
213
+ hidden_size_teacher ............................. None
214
+ hpu_deterministic ............................... True
215
+ hpu_fp8_format .................................. e5m2
216
+ hpu_fp8_measure_interval ........................ 10
217
+ hysteresis ...................................... 2
218
+ ict_head_size ................................... None
219
+ ict_load ........................................ None
220
+ img_dim ......................................... 224
221
+ indexer_batch_size .............................. 128
222
+ indexer_log_interval ............................ 1000
223
+ inference ....................................... False
224
+ init_method_std ................................. 0.02
225
+ init_method_xavier_uniform ...................... False
226
+ initial_loss_scale .............................. 4294967296
227
+ kd .............................................. False
228
+ kd_alpha_ce ..................................... 1
229
+ kd_beta_ce ...................................... 1
230
+ kd_temp ......................................... 1.0
231
+ kill_switch_path ................................ None
232
+ kv_channels ..................................... 128
233
+ layernorm_epsilon ............................... 1e-06
234
+ layernorm_type .................................. rmsnorm
235
+ lazy_mpu_init ................................... None
236
+ load ............................................ /data/output/llama13b_x//checkpoints_zero_stage_2
237
+ load_teacher .................................... None
238
+ local_rank ...................................... 0
239
+ log_batch_size_to_tensorboard ................... True
240
+ log_bwd_grads ................................... False
241
+ log_fwd_activations ............................. False
242
+ log_interval .................................... 10
243
+ log_learning_rate_to_tensorboard ................ True
244
+ log_loss_scale_to_tensorboard ................... True
245
+ log_model_inputs ................................ False
246
+ log_num_zeros_in_grad ........................... False
247
+ log_optimizer_states_to_tensorboard ............. False
248
+ log_params_norm ................................. False
249
+ log_timers_to_tensorboard ....................... True
250
+ log_validation_ppl_to_tensorboard ............... True
251
+ loss_scale ...................................... None
252
+ loss_scale_window ............................... 1000
253
+ lr .............................................. 0.0003
254
+ lr_decay_iters .................................. None
255
+ lr_decay_samples ................................ None
256
+ lr_decay_style .................................. cosine
257
+ lr_decay_tokens ................................. None
258
+ lr_warmup_fraction .............................. None
259
+ lr_warmup_iters ................................. 2000
260
+ lr_warmup_samples ............................... 0
261
+ lr_warmup_tokens ................................ None
262
+ make_vocab_size_divisible_by .................... 128
263
+ mask_prob ....................................... 0.15
264
+ mask_tensor_adding .............................. False
265
+ masked_softmax_fusion ........................... False
266
+ max_position_embeddings ......................... None
267
+ memory_centric_tiled_linear ..................... False
268
+ merge_file ...................................... /data/arxiv//gpt2-merges.txt
269
+ micro_batch_size ................................ 1
270
+ min_loss_scale .................................. 1.0
271
+ min_lr .......................................... 0.0
272
+ mlp_type ........................................ standard
273
+ mmap_warmup ..................................... False
274
+ moe_eval_capacity_factor ........................ 1.0
275
+ moe_expert_parallel_size ........................ 1
276
+ moe_loss_coeff .................................. 0.1
277
+ moe_min_capacity ................................ 4
278
+ moe_token_dropping .............................. True
279
+ moe_train_capacity_factor ....................... 1.0
280
+ mos ............................................. False
281
+ no_bias ......................................... True
282
+ no_cuda ......................................... False
283
+ no_load_lr_state ................................ False
284
+ no_load_optim ................................... None
285
+ no_load_rng ..................................... None
286
+ no_pipeline_parallel ............................ False
287
+ no_save_optim ................................... None
288
+ no_save_rng ..................................... None
289
+ no_scaled_init .................................. False
290
+ num_attention_heads ............................. 40
291
+ num_attention_heads_teacher ..................... None
292
+ num_channels .................................... 3
293
+ num_classes ..................................... 1000
294
+ num_experts ..................................... [1]
295
+ num_experts_teacher ............................. [1]
296
+ num_key_value_heads ............................. 40
297
+ num_layers ...................................... 16
298
+ num_layers_per_virtual_pipeline_stage ........... None
299
+ num_layers_teacher .............................. None
300
+ num_workers ..................................... 2
301
+ onnx_safe ....................................... None
302
+ openai_gelu ..................................... False
303
+ optimizer ....................................... adamw
304
+ override_lr_scheduler ........................... False
305
+ params_dtype .................................... torch.bfloat16
306
+ partition_activations ........................... False
307
+ patch_dim ....................................... 16
308
+ pipeline_model_parallel_size .................... 1
309
+ position_embedding_type ......................... PositionEmbeddingType.rotary
310
+ profile ......................................... None
311
+ profile_backward ................................ False
312
+ profile_steps ................................... 2,3
313
+ query_in_block_prob ............................. 0.1
314
+ rampup_batch_size ............................... None
315
+ rank ............................................ 0
316
+ remote_device ................................... none
317
+ reset_attention_mask ............................ False
318
+ reset_iteration ................................. False
319
+ reset_position_ids .............................. False
320
+ retriever_report_topk_accuracies ................ []
321
+ retriever_score_scaling ......................... False
322
+ retriever_seq_length ............................ 256
323
+ sample_rate ..................................... 1.0
324
+ save ............................................ /data/output/llama13b_x//checkpoints_zero_stage_2
325
+ save_interval ................................... 100
326
+ scatter_gather_tensors_in_pipeline .............. True
327
+ scattered_embeddings ............................ False
328
+ seed ............................................ 1234
329
+ seq_length ...................................... 2048
330
+ sequence_parallel ............................... False
331
+ sgd_momentum .................................... 0.9
332
+ short_seq_prob .................................. 0.1
333
+ skip_train ...................................... False
334
+ split ........................................... 969, 30, 1
335
+ split_transformers .............................. False
336
+ synchronize_each_layer .......................... False
337
+ tensor_logger_max_iter .......................... 0
338
+ tensor_logger_path .............................. None
339
+ tensor_model_parallel_size ...................... 1
340
+ tensorboard_dir ................................. /data/output/llama13b_x//tensorboard
341
+ tensorboard_log_interval ........................ 1
342
+ tensorboard_queue_size .......................... 1000
343
+ test_data_path .................................. None
344
+ tile_factor ..................................... 1
345
+ titles_data_path ................................ None
346
+ tokenizer_eod_id ................................ None
347
+ tokenizer_model_file ............................ None
348
+ tokenizer_type .................................. GPT2BPETokenizer
349
+ topk ............................................ 1
350
+ train_data_path ................................. None
351
+ train_iters ..................................... 10000
352
+ train_samples ................................... None
353
+ train_tokens .................................... None
354
+ universal_checkpoint ............................ False
355
+ use_checkpoint_lr_scheduler ..................... False
356
+ use_contiguous_buffers_in_ddp ................... True
357
+ use_cpu_initialization .......................... None
358
+ use_fused_sdpa .................................. True
359
+ use_fused_sdpa_with_recompute ................... False
360
+ use_hpu ......................................... True
361
+ use_hpu_fp8_transformer_engine .................. False
362
+ use_hpu_graphs .................................. False
363
+ use_one_sent_docs ............................... False
364
+ use_pin_memory .................................. False
365
+ use_rotary_v2 ................................... False
366
+ use_seq_len_plus_one_tokens ..................... True
367
+ use_torch_compile ............................... False
368
+ use_tutel ....................................... False
369
+ valid_data_path ................................. None
370
+ verify_checkpoint ............................... True
371
+ verify_checkpoint_model_type .................... LLAMA
372
+ verify_tp_workers ............................... False
373
+ verify_tp_workers_hash .......................... False
374
+ virtual_pipeline_model_parallel_size ............ None
375
+ vocab_extra_ids ................................. 0
376
+ vocab_file ...................................... /data/arxiv//gpt2-vocab.json
377
+ weight_decay .................................... 0.1
378
+ world_size ...................................... 8
379
+ zero_allgather_bucket_size ...................... 0.0
380
+ zero_contigious_gradients ....................... False
381
+ zero_reduce_bucket_size ......................... 0.0
382
+ zero_reduce_scatter ............................. False
383
+ zero_stage ...................................... 0
384
+ -------------------- end of arguments ---------------------
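Note: the argument dump above fixes the learning-rate schedule: lr 3e-4, cosine decay, 2000 warmup iterations, 10000 training iterations, min_lr 0.0. Below is a minimal sketch of the schedule those values imply; it is an illustration only, not Megatron-DeepSpeed's own scheduler, which also supports samples- and tokens-based decay horizons.

# Sketch of the LR schedule implied by the arguments above
# (lr=3e-4, lr_warmup_iters=2000, lr_decay_style=cosine,
#  train_iters=10000, min_lr=0.0). Illustration only.
import math

LR, MIN_LR = 3e-4, 0.0
WARMUP_ITERS, TRAIN_ITERS = 2000, 10000

def lr_at(it: int) -> float:
    if it < WARMUP_ITERS:                       # linear warmup from 0 to LR
        return LR * it / WARMUP_ITERS
    # cosine decay from LR down to MIN_LR over the remaining iterations
    progress = (it - WARMUP_ITERS) / (TRAIN_ITERS - WARMUP_ITERS)
    return MIN_LR + 0.5 * (LR - MIN_LR) * (1.0 + math.cos(math.pi * min(progress, 1.0)))

for it in (0, 1000, 2000, 6000, 10000):
    print(it, f"{lr_at(it):.2e}")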
385
+ setting number of micro-batches to constant 32
386
+ setting number of micro-batches to constant 32
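The constant of 32 follows from the batch arguments: global batch size 256 (from the launch command at the end of this log), micro_batch_size 1, and a data-parallel size of 8 (world_size 8 with tensor and pipeline parallel size 1). A short sketch of that arithmetic:

# Sketch of the micro-batch arithmetic behind
# "setting number of micro-batches to constant 32".
global_batch_size = 256
micro_batch_size = 1
world_size, tp, pp = 8, 1, 1

data_parallel_size = world_size // (tp * pp)            # 8
num_micro_batches = global_batch_size // (micro_batch_size * data_parallel_size)
print(num_micro_batches)                                 # -> 32

At seq_length 2048 this amounts to 256 x 2048, roughly 0.52 M tokens per optimizer step, or about 5.2 B tokens over the 10000 planned iterations.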
387
+ > building GPT2BPETokenizer tokenizer ...
388
+ _initialize_distributed: Initializing with below params:
389
+ args.local_rank: 2
390
+ args.world_size: 8
391
+ args.rank: 2
392
+ args.distributed_backend: hccl
393
+ --------------------------------------------------
394
+ DeepSpeed C++/CUDA extension op report
395
+ --------------------------------------------------
396
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
397
+ runtime if needed. Op compatibility means that your system
398
+ meet the required dependencies to JIT install the op.
399
+ --------------------------------------------------
400
+ JIT compiled ops requires ninja
401
+ ninja .................. [OKAY]
402
+ --------------------------------------------------
403
+ op name ................ installed .. compatible
404
+ --------------------------------------------------
405
+ cpu_adam ............... [NO] ....... [OKAY]
406
+ fused_adam ............. [NO] ....... [OKAY]
407
+ deepspeed_not_implemented [NO] ....... [OKAY]
408
+ transformer_inference .. [NO] ....... [OKAY]
409
+ --------------------------------------------------
410
+ DeepSpeed general environment info:
411
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
412
+ torch version .................... 2.1.1a0+gitb51c9f6
413
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
414
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
415
+ deepspeed wheel compiled w. ...... torch 2.1
416
+ shared memory (/dev/shm) size .... 503.72 GB
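Each rank prints the same extension op report and environment summary; it is DeepSpeed's standard report (the ds_report utility prints the same information). A minimal, hypothetical sketch of querying individual op status programmatically, assuming a stock DeepSpeed install; the HPU-specific build in this log may ship a different builder set:

# Hypothetical sketch: query the per-op installed/compatible status that the
# report above prints, via DeepSpeed's op builders. Assumes a stock DeepSpeed
# install; builder names on the HPU build may differ.
from deepspeed.ops.op_builder import CPUAdamBuilder, FusedAdamBuilder

for builder in (CPUAdamBuilder(), FusedAdamBuilder()):
    print(f"{builder.NAME}: compatible={builder.is_compatible()}")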
417
+ _initialize_distributed: Initializing with below params:
418
+ args.local_rank: 4
419
+ args.world_size: 8
420
+ args.rank: 4
421
+ args.distributed_backend: hccl
422
+ fatal: detected dubious ownership in repository at '/Model-References'
423
+ To add an exception for this directory, call:
424
+
425
+ git config --global --add safe.directory /Model-References
426
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
427
+ --------------------------------------------------
428
+ DeepSpeed C++/CUDA extension op report
429
+ --------------------------------------------------
430
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
431
+ runtime if needed. Op compatibility means that your system
432
+ meet the required dependencies to JIT install the op.
433
+ --------------------------------------------------
434
+ JIT compiled ops requires ninja
435
+ ninja .................. [OKAY]
436
+ --------------------------------------------------
437
+ op name ................ installed .. compatible
438
+ --------------------------------------------------
439
+ cpu_adam ............... [NO] ....... [OKAY]
440
+ fused_adam ............. [NO] ....... [OKAY]
441
+ deepspeed_not_implemented [NO] ....... [OKAY]
442
+ transformer_inference .. [NO] ....... [OKAY]
443
+ --------------------------------------------------
444
+ DeepSpeed general environment info:
445
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
446
+ torch version .................... 2.1.1a0+gitb51c9f6
447
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
448
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
449
+ deepspeed wheel compiled w. ...... torch 2.1
450
+ shared memory (/dev/shm) size .... 503.72 GB
451
+ > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304)
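The 47 dummy tokens come from make_vocab_size_divisible_by 128 with tensor_model_parallel_size 1: the GPT-2 vocabulary of 50257 is padded up to the next multiple of 128, giving 50304. A small sketch of that calculation:

# Sketch of the vocab padding rule reflected in the line above:
# pad 50257 up to a multiple of make_vocab_size_divisible_by * TP = 128 * 1.
orig_vocab_size = 50257
make_vocab_size_divisible_by = 128
tensor_model_parallel_size = 1

multiple = make_vocab_size_divisible_by * tensor_model_parallel_size
padded = ((orig_vocab_size + multiple - 1) // multiple) * multiple
print(padded, padded - orig_vocab_size)   # -> 50304 47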
452
+ _initialize_distributed: Initializing with below params:
453
+ args.local_rank: 0
454
+ args.world_size: 8
455
+ args.rank: 0
456
+ args.distributed_backend: hccl
457
+ --------------------------------------------------
458
+ DeepSpeed C++/CUDA extension op report
459
+ --------------------------------------------------
460
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
461
+ runtime if needed. Op compatibility means that your system
462
+ meet the required dependencies to JIT install the op.
463
+ --------------------------------------------------
464
+ JIT compiled ops requires ninja
465
+ ninja .................. [OKAY]
466
+ --------------------------------------------------
467
+ op name ................ installed .. compatible
468
+ --------------------------------------------------
469
+ cpu_adam ............... [NO] ....... [OKAY]
470
+ fused_adam ............. [NO] ....... [OKAY]
471
+ deepspeed_not_implemented [NO] ....... [OKAY]
472
+ transformer_inference .. [NO] ....... [OKAY]
473
+ --------------------------------------------------
474
+ DeepSpeed general environment info:
475
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
476
+ torch version .................... 2.1.1a0+gitb51c9f6
477
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
478
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
479
+ deepspeed wheel compiled w. ...... torch 2.1
480
+ shared memory (/dev/shm) size .... 503.72 GB
481
+ fatal: detected dubious ownership in repository at '/Model-References'
482
+ To add an exception for this directory, call:
483
+
484
+ git config --global --add safe.directory /Model-References
485
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
486
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
487
+ warnings.warn(
488
+ hccl device_count: 8
489
+ [2024-04-17 13:36:29,411] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
490
+ [2024-04-17 13:36:29,411] [INFO] [comm.py:637:init_distributed] cdb=None
491
+ fatal: detected dubious ownership in repository at '/Model-References'
492
+ To add an exception for this directory, call:
493
+
494
+ git config --global --add safe.directory /Model-References
495
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
496
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
497
+ warnings.warn(
498
+ hccl device_count: 8
499
+ [2024-04-17 13:36:29,416] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
500
+ [2024-04-17 13:36:29,416] [INFO] [comm.py:637:init_distributed] cdb=None
501
+ _initialize_distributed: Initializing with below params:
502
+ args.local_rank: 1
503
+ args.world_size: 8
504
+ args.rank: 1
505
+ args.distributed_backend: hccl
506
+ _initialize_distributed: Initializing with below params:
507
+ args.local_rank: 6
508
+ args.world_size: 8
509
+ args.rank: 6
510
+ args.distributed_backend: hccl
511
+ --------------------------------------------------
512
+ DeepSpeed C++/CUDA extension op report
513
+ --------------------------------------------------
514
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
515
+ runtime if needed. Op compatibility means that your system
516
+ meet the required dependencies to JIT install the op.
517
+ --------------------------------------------------
518
+ JIT compiled ops requires ninja
519
+ ninja .................. [OKAY]
520
+ --------------------------------------------------
521
+ op name ................ installed .. compatible
522
+ --------------------------------------------------
523
+ cpu_adam ............... [NO] ....... [OKAY]
524
+ fused_adam ............. [NO] ....... [OKAY]
525
+ deepspeed_not_implemented [NO] ....... [OKAY]
526
+ transformer_inference .. [NO] ....... [OKAY]
527
+ --------------------------------------------------
528
+ DeepSpeed general environment info:
529
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
530
+ torch version .................... 2.1.1a0+gitb51c9f6
531
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
532
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
533
+ deepspeed wheel compiled w. ...... torch 2.1
534
+ shared memory (/dev/shm) size .... 503.72 GB
535
+ --------------------------------------------------
536
+ DeepSpeed C++/CUDA extension op report
537
+ --------------------------------------------------
538
+ NOTE: Ops not installed will be just-in-time (JIT) compiled at
539
+ runtime if needed. Op compatibility means that your system
540
+ meet the required dependencies to JIT install the op.
541
+ --------------------------------------------------
542
+ JIT compiled ops requires ninja
543
+ ninja .................. [OKAY]
544
+ --------------------------------------------------
545
+ op name ................ installed .. compatible
546
+ --------------------------------------------------
547
+ cpu_adam ............... [NO] ....... [OKAY]
548
+ fused_adam ............. [NO] ....... [OKAY]
549
+ deepspeed_not_implemented [NO] ....... [OKAY]
550
+ transformer_inference .. [NO] ....... [OKAY]
551
+ --------------------------------------------------
552
+ DeepSpeed general environment info:
553
+ torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
554
+ torch version .................... 2.1.1a0+gitb51c9f6
555
+ deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
556
+ deepspeed info ................... 0.12.4+hpu.synapse.v1.14.0, fad45b2, 1.14.0
557
+ deepspeed wheel compiled w. ...... torch 2.1
558
+ shared memory (/dev/shm) size .... 503.72 GB
559
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
560
+ warnings.warn(
561
+ hccl device_count: 8
562
+ > initializing torch distributed ...
563
+ [2024-04-17 13:36:29,471] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
564
+ [2024-04-17 13:36:29,471] [INFO] [comm.py:637:init_distributed] cdb=None
565
+ [2024-04-17 13:36:29,471] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend hccl
566
+ fatal: detected dubious ownership in repository at '/Model-References'
567
+ To add an exception for this directory, call:
568
+
569
+ git config --global --add safe.directory /Model-References
570
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
571
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
572
+ warnings.warn(
573
+ hccl device_count: 8
574
+ fatal: detected dubious ownership in repository at '/Model-References'
575
+ To add an exception for this directory, call:
576
+
577
+ git config --global --add safe.directory /Model-References
578
+ [2024-04-17 13:36:29,495] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
579
+ [2024-04-17 13:36:29,496] [INFO] [comm.py:637:init_distributed] cdb=None
580
+ **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
581
+ > setting tensorboard ...
582
+ _initialize_distributed: Initializing with below params:
583
+ args.local_rank: 7
584
+ args.world_size: 8
585
+ args.rank: 7
586
+ args.distributed_backend: hccl
587
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
588
+ warnings.warn(
589
+ hccl device_count: 8
590
+ [2024-04-17 13:36:29,532] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
591
+ [2024-04-17 13:36:29,532] [INFO] [comm.py:637:init_distributed] cdb=None
592
+ _initialize_distributed: Initializing with below params:
593
+ args.local_rank: 3
594
+ args.world_size: 8
595
+ args.rank: 3
596
+ args.distributed_backend: hccl
597
+ _initialize_distributed: Initializing with below params:
598
+ args.local_rank: 5
599
+ args.world_size: 8
600
+ args.rank: 5
601
+ args.distributed_backend: hccl
602
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
603
+ warnings.warn(
604
+ hccl device_count: 8
605
+ [2024-04-17 13:36:29,568] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
606
+ [2024-04-17 13:36:29,568] [INFO] [comm.py:637:init_distributed] cdb=None
607
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
608
+ warnings.warn(
609
+ hccl device_count: 8
610
+ [2024-04-17 13:36:29,609] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
611
+ [2024-04-17 13:36:29,610] [INFO] [comm.py:637:init_distributed] cdb=None
612
+ /usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/__init__.py:158: UserWarning: torch.hpu.setDeterministic is deprecated and will be removed in next release. Please use torch.use_deterministic_algorithms instead.
613
+ warnings.warn(
614
+ hccl device_count: 8
615
+ [2024-04-17 13:36:29,627] [WARNING] [comm.py:163:init_deepspeed_backend] HCCL backend in DeepSpeed not yet implemented
616
+ [2024-04-17 13:36:29,627] [INFO] [comm.py:637:init_distributed] cdb=None
617
+ [E socket.cpp:922] [c10d] The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
618
+ Traceback (most recent call last):
619
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
620
+ pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
621
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/training.py", line 162, in pretrain
622
+ initialize_megatron(extra_args_provider=extra_args_provider,
623
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 95, in initialize_megatron
624
+ finish_mpu_init()
625
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 77, in finish_mpu_init
626
+ _initialize_distributed()
627
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 276, in _initialize_distributed
628
+ deepspeed.init_distributed(dist_backend=args.distributed_backend)
629
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/comm.py", line 670, in init_distributed
630
+ cdb = TorchBackend(dist_backend, timeout, init_method, rank, world_size)
631
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 120, in __init__
632
+ [E socket.cpp:922] [c10d] The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
633
+ self.init_process_group(backend, timeout, init_method, rank, world_size)
634
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 148, in init_process_group
635
+ torch.distributed.init_process_group(backend, timeout=timeout, init_method=init_method)
636
+ File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/torch_overwrites.py", line 259, in wrap_init_process_group
637
+ return init_process_group_orig(
638
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 74, in wrapper
639
+ func_return = func(*args, **kwargs)
640
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1141, in init_process_group
641
+ Traceback (most recent call last):
642
+ store, rank, world_size = next(rendezvous_iterator)
643
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
644
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 241, in _env_rendezvous_handler
645
+ store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
646
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 172, in _create_c10d_store
647
+ return TCPStore(
648
+ TimeoutError: The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
649
+ pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
650
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/training.py", line 162, in pretrain
651
+ initialize_megatron(extra_args_provider=extra_args_provider,
652
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 95, in initialize_megatron
653
+ finish_mpu_init()
654
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 77, in finish_mpu_init
655
+ _initialize_distributed()
656
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 276, in _initialize_distributed
657
+ deepspeed.init_distributed(dist_backend=args.distributed_backend)
658
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/comm.py", line 670, in init_distributed
659
+ cdb = TorchBackend(dist_backend, timeout, init_method, rank, world_size)
660
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 120, in __init__
661
+ self.init_process_group(backend, timeout, init_method, rank, world_size)
662
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 148, in init_process_group
663
+ torch.distributed.init_process_group(backend, timeout=timeout, init_method=init_method)
664
+ File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/torch_overwrites.py", line 259, in wrap_init_process_group
665
+ return init_process_group_orig(
666
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 74, in wrapper
667
+ func_return = func(*args, **kwargs)
668
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1141, in init_process_group
669
+ store, rank, world_size = next(rendezvous_iterator)
670
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 241, in _env_rendezvous_handler
671
+ store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
672
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 172, in _create_c10d_store
673
+ return TCPStore(
674
+ TimeoutError: The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
675
+ [E socket.cpp:922] [c10d] The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
676
+ Traceback (most recent call last):
677
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
678
+ pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
679
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/training.py", line 162, in pretrain
680
+ initialize_megatron(extra_args_provider=extra_args_provider,
681
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 95, in initialize_megatron
682
+ finish_mpu_init()
683
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 77, in finish_mpu_init
684
+ _initialize_distributed()
685
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 276, in _initialize_distributed
686
+ deepspeed.init_distributed(dist_backend=args.distributed_backend)
687
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/comm.py", line 670, in init_distributed
688
+ cdb = TorchBackend(dist_backend, timeout, init_method, rank, world_size)
689
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 120, in __init__
690
+ self.init_process_group(backend, timeout, init_method, rank, world_size)
691
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 148, in init_process_group
692
+ torch.distributed.init_process_group(backend, timeout=timeout, init_method=init_method)
693
+ File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/torch_overwrites.py", line 259, in wrap_init_process_group
694
+ return init_process_group_orig(
695
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 74, in wrapper
696
+ func_return = func(*args, **kwargs)
697
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1141, in init_process_group
698
+ store, rank, world_size = next(rendezvous_iterator)
699
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 241, in _env_rendezvous_handler
700
+ store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
701
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 172, in _create_c10d_store
702
+ return TCPStore(
703
+ TimeoutError: The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
704
+ [E socket.cpp:922] [c10d] The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
705
+ Traceback (most recent call last):
706
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
707
+ pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
708
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/training.py", line 162, in pretrain
709
+ initialize_megatron(extra_args_provider=extra_args_provider,
710
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 95, in initialize_megatron
711
+ finish_mpu_init()
712
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 77, in finish_mpu_init
713
+ _initialize_distributed()
714
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 276, in _initialize_distributed
715
+ deepspeed.init_distributed(dist_backend=args.distributed_backend)
716
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/comm.py", line 670, in init_distributed
717
+ cdb = TorchBackend(dist_backend, timeout, init_method, rank, world_size)
718
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 120, in __init__
719
+ self.init_process_group(backend, timeout, init_method, rank, world_size)
720
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 148, in init_process_group
721
+ torch.distributed.init_process_group(backend, timeout=timeout, init_method=init_method)
722
+ File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/torch_overwrites.py", line 259, in wrap_init_process_group
723
+ return init_process_group_orig(
724
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 74, in wrapper
725
+ func_return = func(*args, **kwargs)
726
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1141, in init_process_group
727
+ store, rank, world_size = next(rendezvous_iterator)
728
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 241, in _env_rendezvous_handler
729
+ store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
730
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 172, in _create_c10d_store
731
+ return TCPStore(
732
+ TimeoutError: The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
733
+ [E socket.cpp:922] [c10d] The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
734
+ Traceback (most recent call last):
735
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
736
+ pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
737
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/training.py", line 162, in pretrain
738
+ initialize_megatron(extra_args_provider=extra_args_provider,
739
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 95, in initialize_megatron
740
+ finish_mpu_init()
741
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 77, in finish_mpu_init
742
+ _initialize_distributed()
743
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 276, in _initialize_distributed
744
+ deepspeed.init_distributed(dist_backend=args.distributed_backend)
745
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/comm.py", line 670, in init_distributed
746
+ cdb = TorchBackend(dist_backend, timeout, init_method, rank, world_size)
747
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 120, in __init__
748
+ self.init_process_group(backend, timeout, init_method, rank, world_size)
749
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 148, in init_process_group
750
+ torch.distributed.init_process_group(backend, timeout=timeout, init_method=init_method)
751
+ File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/torch_overwrites.py", line 259, in wrap_init_process_group
752
+ return init_process_group_orig(
753
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 74, in wrapper
754
+ func_return = func(*args, **kwargs)
755
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1141, in init_process_group
756
+ store, rank, world_size = next(rendezvous_iterator)
757
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 241, in _env_rendezvous_handler
758
+ store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
759
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 172, in _create_c10d_store
760
+ return TCPStore(
761
+ TimeoutError: The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
762
+ [E socket.cpp:922] [c10d] The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
763
+ Traceback (most recent call last):
764
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
765
+ pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
766
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/training.py", line 162, in pretrain
767
+ initialize_megatron(extra_args_provider=extra_args_provider,
768
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 95, in initialize_megatron
769
+ finish_mpu_init()
770
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 77, in finish_mpu_init
771
+ _initialize_distributed()
772
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 276, in _initialize_distributed
773
+ deepspeed.init_distributed(dist_backend=args.distributed_backend)
774
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/comm.py", line 670, in init_distributed
775
+ cdb = TorchBackend(dist_backend, timeout, init_method, rank, world_size)
776
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 120, in __init__
777
+ self.init_process_group(backend, timeout, init_method, rank, world_size)
778
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 148, in init_process_group
779
+ torch.distributed.init_process_group(backend, timeout=timeout, init_method=init_method)
780
+ File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/torch_overwrites.py", line 259, in wrap_init_process_group
781
+ return init_process_group_orig(
782
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 74, in wrapper
783
+ func_return = func(*args, **kwargs)
784
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1141, in init_process_group
785
+ store, rank, world_size = next(rendezvous_iterator)
786
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 241, in _env_rendezvous_handler
787
+ store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
788
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 172, in _create_c10d_store
789
+ return TCPStore(
790
+ TimeoutError: The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
791
+ [E socket.cpp:922] [c10d] The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
792
+ Traceback (most recent call last):
793
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
794
+ pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
795
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/training.py", line 162, in pretrain
796
+ initialize_megatron(extra_args_provider=extra_args_provider,
797
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 95, in initialize_megatron
798
+ finish_mpu_init()
799
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 77, in finish_mpu_init
800
+ _initialize_distributed()
801
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 276, in _initialize_distributed
802
+ deepspeed.init_distributed(dist_backend=args.distributed_backend)
803
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/comm.py", line 670, in init_distributed
804
+ cdb = TorchBackend(dist_backend, timeout, init_method, rank, world_size)
805
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 120, in __init__
806
+ self.init_process_group(backend, timeout, init_method, rank, world_size)
807
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 148, in init_process_group
808
+ torch.distributed.init_process_group(backend, timeout=timeout, init_method=init_method)
809
+ File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/torch_overwrites.py", line 259, in wrap_init_process_group
810
+ return init_process_group_orig(
811
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 74, in wrapper
812
+ func_return = func(*args, **kwargs)
813
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1141, in init_process_group
814
+ store, rank, world_size = next(rendezvous_iterator)
815
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 241, in _env_rendezvous_handler
816
+ store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
817
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 172, in _create_c10d_store
818
+ return TCPStore(
819
+ TimeoutError: The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
820
+ [E socket.cpp:922] [c10d] The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
821
+ Traceback (most recent call last):
822
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/./pretrain_llama.py", line 110, in <module>
823
+ pretrain(train_valid_test_datasets_provider, model_provider, forward_step, extra_args_provider=llama_argument_handler,
824
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/training.py", line 162, in pretrain
825
+ initialize_megatron(extra_args_provider=extra_args_provider,
826
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 95, in initialize_megatron
827
+ finish_mpu_init()
828
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 77, in finish_mpu_init
829
+ _initialize_distributed()
830
+ File "/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/initialize.py", line 276, in _initialize_distributed
831
+ deepspeed.init_distributed(dist_backend=args.distributed_backend)
832
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/comm.py", line 670, in init_distributed
833
+ cdb = TorchBackend(dist_backend, timeout, init_method, rank, world_size)
834
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 120, in __init__
835
+ self.init_process_group(backend, timeout, init_method, rank, world_size)
836
+ File "/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py", line 148, in init_process_group
837
+ torch.distributed.init_process_group(backend, timeout=timeout, init_method=init_method)
838
+ File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/torch_overwrites.py", line 259, in wrap_init_process_group
839
+ return init_process_group_orig(
840
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 74, in wrapper
841
+ func_return = func(*args, **kwargs)
842
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1141, in init_process_group
843
+ store, rank, world_size = next(rendezvous_iterator)
844
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 241, in _env_rendezvous_handler
845
+ store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
846
+ File "/usr/local/lib/python3.10/dist-packages/torch/distributed/rendezvous.py", line 172, in _create_c10d_store
847
+ return TCPStore(
848
+ TimeoutError: The client socket has timed out after 1800s while trying to connect to (100.83.134.158, 29500).
849
+ [2024-04-17 14:06:29,834] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2748
850
+ [2024-04-17 14:06:29,931] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2749
851
+ [2024-04-17 14:06:29,931] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2750
852
+ [2024-04-17 14:06:29,932] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2751
853
+ [2024-04-17 14:06:29,984] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2752
854
+ [2024-04-17 14:06:29,985] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2753
855
+ [2024-04-17 14:06:30,037] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2754
856
+ [2024-04-17 14:06:30,037] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2755
857
+ [2024-04-17 14:06:30,037] [ERROR] [launch.py:322:sigkill_handler] ['/usr/bin/bash', '-c', ' cd /Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed && python3 -u ./pretrain_llama.py --deepspeed --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --position-embedding-type rotary --no-bias --layernorm-type rmsnorm --activation-func-type swiglu --layernorm-epsilon 1e-6 --num-layers 16 --hidden-size 5120 --ffn-hidden-size 13824 --num-attention-heads 40 --seq-length 2048 --micro-batch-size 1 --global-batch-size 256 --train-iters 10000 --log-interval 10 --eval-iters 10 --eval-interval 100 --data-path /data/arxiv//tokenized_text_document --vocab-file /data/arxiv//gpt2-vocab.json --merge-file /data/arxiv//gpt2-merges.txt --optimizer adamw --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-6 --lr 3e-4 --lr-decay-style cosine --lr-warmup-iters 2000 --clip-grad 1.0 --weight-decay 0.1 --attention-dropout 0.1 --hidden-dropout 0.1 --tensorboard-dir /data/output/llama13b_x//tensorboard --log-validation-ppl-to-tensorboard --log-batch-size-to-tensorboard --log-timers-to-tensorboard --load /data/output/llama13b_x//checkpoints_zero_stage_2 --deepspeed_config=/data/output/llama13b_x//ds_config.json --zero-stage=0 --exit-interval 0 --no-masked-softmax-fusion --no-bias-gelu-fusion --no-bias-dropout-fusion --use-torch-compile false --use-fused-sdpa true --use-fused-sdpa-with-recompute false --bf16 --use_hpu --distributed-backend=hccl --hpu-deterministic --sequence-parallel --save /data/output/llama13b_x//checkpoints_zero_stage_2 --hf-save /data/output/llama13b_x//hf_ckpt --save-interval 100 --verify-checkpoint --verify-checkpoint-model-type LLAMA'] exits with return code = 1
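Every rank fails the same way: after the 1800 s c10d timeout it still cannot open a TCP connection to the rendezvous endpoint 100.83.134.158:29500 (MASTER_ADDR / MASTER_PORT), so the process group is never formed and the launcher kills the workers. A hypothetical pre-flight check, independent of the training code, that each node could run before relaunching:

# Hypothetical pre-flight check for the TimeoutError above: verify this node
# can reach the c10d rendezvous endpoint (MASTER_ADDR:MASTER_PORT) instead of
# waiting out the 1800 s timeout. Defaults below mirror the values in this log.
import os
import socket

master_addr = os.environ.get("MASTER_ADDR", "100.83.134.158")
master_port = int(os.environ.get("MASTER_PORT", "29500"))

try:
    with socket.create_connection((master_addr, master_port), timeout=10):
        print(f"reachable: {master_addr}:{master_port}")
except OSError as exc:
    print(f"NOT reachable: {master_addr}:{master_port} ({exc})")

Since rank 0 (or the launcher) is the side that opens the TCPStore listener, the check is only meaningful once the master process has started; a persistent failure usually points at routing or firewall rules between the nodes rather than at the training script itself.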
univ_ckpt_new/zero/10.attention.dense.weight/exp_avg.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8456b875deb014a426c31f3458ee2d9497070b7bf04c0c7580312b34dd71f2e8
3
+ size 16778460