applied-ai-018 committed · verified
Commit 1a40fb1 · 1 Parent(s): 74c5cc2

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/examples_deepspeed/offload_pp/twin-offload.png +3 -0
  2. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/__init__.cpython-310.pyc +0 -0
  3. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/arguments.cpython-310.pyc +0 -0
  4. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/checkpointing.cpython-310.pyc +0 -0
  5. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/dist_signal_handler.cpython-310.pyc +0 -0
  6. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/enums.cpython-310.pyc +0 -0
  7. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/global_vars.cpython-310.pyc +0 -0
  8. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/initialize.cpython-310.pyc +0 -0
  9. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/memory.cpython-310.pyc +0 -0
  10. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/microbatches.cpython-310.pyc +0 -0
  11. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/optimizer_param_scheduler.cpython-310.pyc +0 -0
  12. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/profiler.cpython-310.pyc +0 -0
  13. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/timers.cpython-310.pyc +0 -0
  14. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/training.cpython-310.pyc +0 -0
  15. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/utils.cpython-310.pyc +0 -0
  16. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/__init__.cpython-310.pyc +0 -0
  17. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/enums.cpython-310.pyc +0 -0
  18. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/model_parallel_config.cpython-310.pyc +0 -0
  19. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/parallel_state.cpython-310.pyc +0 -0
  20. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/utils.cpython-310.pyc +0 -0
  21. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/models/__init__.py +0 -0
  22. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/models/gpt/__init__.py +1 -0
  23. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/models/gpt/gpt_embedding.py +114 -0
  24. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/models/gpt/gpt_model.py +251 -0
  25. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/__init__.py +1 -0
  26. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/__pycache__/__init__.cpython-310.pyc +0 -0
  27. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/__pycache__/p2p_communication.cpython-310.pyc +0 -0
  28. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/__pycache__/schedules.cpython-310.pyc +0 -0
  29. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/p2p_communication.py +544 -0
  30. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/schedules.py +1185 -0
  31. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/sequence_parallel/__init__.py +1 -0
  32. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/sequence_parallel/__pycache__/__init__.cpython-310.pyc +0 -0
  33. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/sequence_parallel/__pycache__/cross_entropy.cpython-310.pyc +0 -0
  34. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/sequence_parallel/cross_entropy.py +56 -0
  35. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/__pycache__/__init__.cpython-310.pyc +0 -0
  36. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/__pycache__/transformer_config.cpython-310.pyc +0 -0
  37. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/enums.py +25 -0
  38. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/module.py +118 -0
  39. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/transformer_block.py +222 -0
  40. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/utils.py +41 -0
  41. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/Makefile +9 -0
  42. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__init__.py +1 -0
  43. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/__init__.cpython-310.pyc +0 -0
  44. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/autoaugment.cpython-310.pyc +0 -0
  45. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/blendable_dataset.cpython-310.pyc +0 -0
  46. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/data_samplers.cpython-310.pyc +0 -0
  47. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/dataset_utils.cpython-310.pyc +0 -0
  48. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/gpt_dataset.cpython-310.pyc +0 -0
  49. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/image_folder.cpython-310.pyc +0 -0
  50. docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/indexed_dataset.cpython-310.pyc +0 -0
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/examples_deepspeed/offload_pp/twin-offload.png ADDED

Git LFS Details

  • SHA256: 228aea9883ac07fb46617338279b2a328ce12c2652e9d5f499d1aa1e8b7b8ef9
  • Pointer size: 130 Bytes
  • Size of remote file: 59.9 kB
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (890 Bytes)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/arguments.cpython-310.pyc ADDED
Binary file (55.5 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/checkpointing.cpython-310.pyc ADDED
Binary file (17.9 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/dist_signal_handler.cpython-310.pyc ADDED
Binary file (2.81 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/enums.cpython-310.pyc ADDED
Binary file (933 Bytes)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/global_vars.cpython-310.pyc ADDED
Binary file (6.4 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/initialize.cpython-310.pyc ADDED
Binary file (10.4 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/memory.cpython-310.pyc ADDED
Binary file (4.65 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/microbatches.cpython-310.pyc ADDED
Binary file (4.84 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/optimizer_param_scheduler.cpython-310.pyc ADDED
Binary file (5.84 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/profiler.cpython-310.pyc ADDED
Binary file (3.03 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/timers.cpython-310.pyc ADDED
Binary file (8.51 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/training.cpython-310.pyc ADDED
Binary file (39.1 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/__pycache__/utils.cpython-310.pyc ADDED
Binary file (11.8 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (459 Bytes)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/enums.cpython-310.pyc ADDED
Binary file (479 Bytes)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/model_parallel_config.cpython-310.pyc ADDED
Binary file (8.06 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/parallel_state.cpython-310.pyc ADDED
Binary file (20.9 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/__pycache__/utils.cpython-310.pyc ADDED
Binary file (6.55 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/models/__init__.py ADDED
File without changes
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/models/gpt/__init__.py ADDED
@@ -0,0 +1 @@
+ from .gpt_model import GPTModel
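The package __init__ above only re-exports GPTModel, so the shorter package-level import resolves to the same class as the full module path. A minimal illustrative sketch (not part of the commit):

# Both names refer to the same class; the first import goes through this __init__.py.
from megatron.core.models.gpt import GPTModel
from megatron.core.models.gpt.gpt_model import GPTModel as GPTModelDirect
assert GPTModel is GPTModelDirect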
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/models/gpt/gpt_embedding.py ADDED
@@ -0,0 +1,114 @@
+ # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
+
+ import torch
+
+ from megatron.core import tensor_parallel
+
+ from megatron.core.transformer.module import MegatronModule
+ from megatron.core.transformer.transformer_config import TransformerConfig
+
+
+ class GPTEmbedding(MegatronModule):
+     """Language model embeddings.
+
+     Arguments:
+         config (TransformerConfig): config object with all necessary configs for TransformerBlock
+         vocab_size (int): vocabulary size
+         max_sequence_length (int): maximum size of sequence. This
+             is used for positional embedding
+         embedding_dropout_prob (float): dropout probability for embeddings
+     """
+
+     def __init__(self, config: TransformerConfig, vocab_size: int, max_sequence_length: int):
+         super().__init__(config=config)
+
+         self.config: TransformerConfig = config
+         self.vocab_size: int = vocab_size
+         self.max_sequence_length: int = max_sequence_length
+
+         # Word embeddings (parallel).
+         self.word_embeddings = tensor_parallel.VocabParallelEmbedding(
+             num_embeddings=self.vocab_size,
+             embedding_dim=self.config.hidden_size,
+             init_method=self.config.init_method,
+             config=self.config
+         )
+         # @jcasper are these keys needed?
+         self._word_embeddings_key = 'word_embeddings'
+
+         # Position embedding (serial).
+         self.position_embeddings = torch.nn.Embedding(self.max_sequence_length, self.config.hidden_size)
+         self._position_embeddings_key = 'position_embeddings'
+
+         # Initialize the position embeddings.
+         if self.config.perform_initialization:
+             self.config.init_method(self.position_embeddings.weight)
+
+         # Embeddings dropout
+         self.embedding_dropout = torch.nn.Dropout(self.config.hidden_dropout)
+
+     def zero_parameters(self):
+         """Zero out all parameters in embedding."""
+         self.word_embeddings.weight.data.fill_(0)
+         self.word_embeddings.weight.shared = True
+         self.position_embeddings.weight.data.fill_(0)
+         self.position_embeddings.weight.shared = True
+
+     def forward(self, input_ids, position_ids):
+         # Embeddings.
+         words_embeddings = self.word_embeddings(input_ids)
+         position_embeddings = self.position_embeddings(position_ids)
+         embeddings = words_embeddings + position_embeddings
+
+         # Data format change to avoid explicit transposes: [b s h] --> [s b h].
+         embeddings = embeddings.transpose(0, 1).contiguous()
+
+         # If the input flag for fp32 residual connection is set, convert for float.
+         if self.config.fp32_residual_connection:
+             embeddings = embeddings.float()
+
+         # Dropout.
+         if self.config.sequence_parallel:
+             embeddings = tensor_parallel.scatter_to_sequence_parallel_region(embeddings)
+             with tensor_parallel.get_cuda_rng_tracker().fork():
+                 embeddings = self.embedding_dropout(embeddings)
+         else:
+             embeddings = self.embedding_dropout(embeddings)
+
+         return embeddings
+
+     def state_dict_for_save_checkpoint(self, prefix='', keep_vars=False):
+         """For easy load."""
+
+         state_dict_ = {}
+         state_dict_[self._word_embeddings_key] = self.word_embeddings.state_dict(prefix=prefix, keep_vars=keep_vars)
+         state_dict_[self._position_embeddings_key] = self.position_embeddings.state_dict(
+             prefix=prefix, keep_vars=keep_vars
+         )
+
+         return state_dict_
+
+     def load_state_dict(self, state_dict, strict=True):
+         """Customized load."""
+
+         # Word embedding.
+         if self._word_embeddings_key in state_dict:
+             state_dict_ = state_dict[self._word_embeddings_key]
+         else:
+             # for backward compatibility.
+             state_dict_ = {}
+             for key in state_dict.keys():
+                 if 'word_embeddings' in key:
+                     state_dict_[key.split('word_embeddings.')[1]] = state_dict[key]
+         self.word_embeddings.load_state_dict(state_dict_, strict=strict)
+
+         # Position embedding.
+         if self._position_embeddings_key in state_dict:
+             state_dict_ = state_dict[self._position_embeddings_key]
+         else:
+             # for backward compatibility.
+             state_dict_ = {}
+             for key in state_dict.keys():
+                 if 'position_embeddings' in key:
+                     state_dict_[key.split('position_embeddings.')[1]] = state_dict[key]
+         self.position_embeddings.load_state_dict(state_dict_, strict=strict)
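For orientation, a minimal usage sketch of the GPTEmbedding module added above (not part of the commit). It assumes Megatron's tensor-parallel state is already initialized and that TransformerConfig accepts the fields shown; all sizes are placeholders.

# Hedged sketch only: config fields and sizes below are illustrative assumptions.
import torch
from megatron.core.transformer.transformer_config import TransformerConfig
from megatron.core.models.gpt.gpt_embedding import GPTEmbedding

config = TransformerConfig(num_layers=2, hidden_size=512, num_attention_heads=8)  # assumed minimal config
embedding = GPTEmbedding(config=config, vocab_size=32000, max_sequence_length=2048)

input_ids = torch.randint(0, 32000, (4, 2048))                 # [b, s]
position_ids = torch.arange(2048).unsqueeze(0).expand(4, -1)   # [b, s]
hidden = embedding(input_ids, position_ids)                    # [s, b, h] after the internal transpose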
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/models/gpt/gpt_model.py ADDED
@@ -0,0 +1,251 @@
+ # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
+
+ import logging  # needed by the warning in initialize_last_stage_with_word_embeddings()
+
+ import torch
+ from torch import Tensor
+
+ from megatron.core import parallel_state, tensor_parallel
+
+ from megatron.core.transformer.module import MegatronModule
+ from megatron.core.transformer.transformer_config import TransformerConfig
+ from megatron.core.transformer.transformer_block import TransformerBlock
+ from megatron.core.transformer.enums import AttnMaskType, ModelType
+ from megatron.core.models.gpt.gpt_embedding import GPTEmbedding
+
+
+ class GPTModel(MegatronModule):
+     """Transformer language model.
+
+     Arguments:
+         config (TransformerConfig): transformer config
+
+         vocab_size (int): vocabulary size
+
+         max_sequence_length (int): maximum size of sequence. This is used for positional embedding
+
+         pre_process (bool): Include embedding layer (used with pipeline parallelism)
+         post_process (bool): Include an output layer (used with pipeline parallelism)
+
+         parallel_output (bool): Do not gather the outputs, keep them split across tensor parallel ranks
+
+         share_embeddings_and_output_weights (bool): When True, input embeddings and output logit weights are
+             shared. Defaults to False.
+
+     """
+
+     def __init__(
+         self,
+         config: TransformerConfig,
+         vocab_size: int,
+         max_sequence_length: int,
+         pre_process: bool = True,
+         post_process: bool = True,
+         fp16_lm_cross_entropy: bool = False,
+         parallel_output: bool = True,
+         share_embeddings_and_output_weights: bool = False,
+     ):
+         super(GPTModel, self).__init__(config=config)
+
+         self.config: TransformerConfig = config
+         self.vocab_size = vocab_size
+         self.max_sequence_length = max_sequence_length
+         self.pre_process = pre_process
+         self.post_process = post_process
+         self.fp16_lm_cross_entropy = fp16_lm_cross_entropy
+         self.parallel_output = parallel_output
+         self.share_embeddings_and_output_weights = share_embeddings_and_output_weights
+
+         # megatron core pipelining currently depends on model type
+         self.model_type = ModelType.encoder_or_decoder
+
+         # Embeddings.
+         if self.pre_process:
+             self.embedding = GPTEmbedding(
+                 config=self.config, vocab_size=self.vocab_size, max_sequence_length=self.max_sequence_length,
+             )
+
+         # Transformer.
+         self.decoder = TransformerBlock(
+             config=self.config,
+             self_attn_mask_type=AttnMaskType.causal,
+             pre_process=self.pre_process,
+             post_process=self.post_process,
+         )
+
+         # Output
+         if post_process:
+             self.output_layer = tensor_parallel.ColumnParallelLinear(
+                 config.hidden_size,
+                 self.vocab_size,
+                 config=config,
+                 init_method=config.init_method,
+                 bias=False,
+                 skip_bias_add=False,
+                 gather_output=not self.parallel_output,
+                 skip_weight_param_allocation=self.pre_process and self.share_embeddings_and_output_weights)
+
+         if self.share_embeddings_and_output_weights and (self.pre_process or self.post_process):
+             self.initialize_last_stage_with_word_embeddings()
+
+     def set_input_tensor(self, input_tensor):
+         """ See megatron.model.transformer.set_input_tensor()"""
+
+         # This is usually handled in schedules.py but some inference code still
+         # gives us non-lists or None
+         if not isinstance(input_tensor, list):
+             input_tensor = [input_tensor]
+
+         assert len(input_tensor) == 1, 'input_tensor should only be length 1 for gpt'
+         self.decoder.set_input_tensor(input_tensor[0])
+
+     def forward(
+         self,
+         input_ids: Tensor,
+         position_ids: Tensor,
+         attention_mask: Tensor,
+         labels: Tensor = None,
+         inference_params=None,
+     ):
+
+         # Encoder embedding.
+         if self.pre_process:
+             decoder_input = self.embedding(input_ids=input_ids, position_ids=position_ids)
+         else:
+             # intermediate stage of pipeline
+             # encoder will get hidden_states from encoder.input_tensor
+             decoder_input = None
+
+         # Run encoder.
+         hidden_states = self.decoder(
+             hidden_states=decoder_input, attention_mask=attention_mask, inference_params=inference_params
+         )
+
+         if not self.post_process:
+             return hidden_states
+
+         # logits and loss
+         output_weight = None
+         if self.share_embeddings_and_output_weights:
+             output_weight = self.shared_embedding_or_output_weight()
+         logits, _ = self.output_layer(hidden_states, weight=output_weight)
+
+         if labels is None:
+             # [s b h] => [b s h]
+             return logits.transpose(0, 1).contiguous()
+
+         # [b s] => [s b]
+         labels = labels.transpose(0, 1).contiguous()
+         loss = tensor_parallel.vocab_parallel_cross_entropy(logits.float(), labels)
+
+         # [s b] => [b, s]
+         loss = loss.transpose(0, 1).contiguous()
+         return loss
+
+     def shared_embedding_or_output_weight(self):
+         if self.pre_process:
+             return self.embedding.word_embeddings.weight
+         elif self.post_process:
+             return self.output_layer.weight
+         return None
+
+     def initialize_last_stage_with_word_embeddings(self):
+
+         # This function just initializes the word embeddings in the final stage
+         # when we are using pipeline parallelism and sharing word
+         # embeddings. Nothing to do if we aren't sharing weights or aren't using
+         # pipeline parallelism.
+         if not self.share_embeddings_and_output_weights or (self.pre_process and self.post_process):
+             return
+
+         if self.post_process and not self.pre_process:
+             assert not parallel_state.is_pipeline_first_stage()
+             # set word_embeddings weights to 0 here, then copy first
+             # stage's weights using all_reduce below.
+             self.output_layer.weight.data.fill_(0)
+             self.output_layer.weight.shared = True
+
+         # Parameters are shared between the word embeddings layers, and the
+         # heads at the end of the model. In a pipelined setup with more than
+         # one stage, the initial embedding layer and the head are on different
+         # workers, so we do the following:
+         # 1. Create a second copy of word_embeddings on the last stage, with
+         #    initial parameters of 0.0.
+         # 2. Do an all-reduce between the first and last stage to ensure that
+         #    the two copies of word_embeddings start off with the same
+         #    parameter values.
+         # 3. In the training loop, before an all-reduce between the grads of
+         #    the two word_embeddings layers to ensure that every applied weight
+         #    update is the same on both stages.
+
+         # Ensure that first and last stages have the same initial parameter
+         # values.
+         if torch.distributed.is_initialized():
+             if parallel_state.is_rank_in_embedding_group():
+                 weight = self.shared_embedding_or_output_weight()
+                 torch.distributed.all_reduce(weight.data, group=parallel_state.get_embedding_group())
+
+         elif not getattr(GPTModel, "embedding_warning_printed", False):
+             logging.getLogger(__name__).warning(
+                 "Distributed processes aren't initialized, so the output layer "
+                 "is not initialized with weights from the word embeddings. "
+                 "If you are just manipulating a model this is fine, but "
+                 "this needs to be handled manually. If you are training "
+                 "something is definitely wrong."
+             )
+             GPTModel.embedding_warning_printed = True
+
+     # TODO: add distributed checkpointing
+     def state_dict_for_save_checkpoint(self, prefix='', keep_vars=False):
+         pass
+         # """For easy load."""
+
+         # state_dict_ = {}
+         # if self.pre_process:
+         #     state_dict_[self._embedding_key] = self.embedding.state_dict_for_save_checkpoint(
+         #         prefix=prefix, keep_vars=keep_vars
+         #     )
+         # state_dict_[self._encoder_key] = self.encoder.state_dict_for_save_checkpoint(
+         #     prefix=prefix, keep_vars=keep_vars
+         # )
+
+         # return state_dict_
+
+     # TODO: add distributed checkpointing
+     def load_state_dict(self, state_dict, strict=True):
+         pass
+         # """Customized load."""
+
+         # # Embedding.
+         # if self.pre_process:
+         #     if self._embedding_key in state_dict:
+         #         state_dict_ = state_dict[self._embedding_key]
+         #     else:
+         #         # for backward compatibility.
+         #         state_dict_ = {}
+         #         for key in state_dict.keys():
+         #             if '_embeddings' in key:
+         #                 state_dict_[key] = state_dict[key]
+         #     self.embedding.load_state_dict(state_dict_, strict=strict)
+
+         # # Encoder.
+         # if self._encoder_key in state_dict:
+         #     state_dict_ = state_dict[self._encoder_key]
+         # # For backward compatibility.
+         # elif 'transformer' in state_dict:
+         #     state_dict_ = state_dict['transformer']
+         # else:
+         #     # For backward compatibility.
+         #     state_dict_ = {}
+         #     for key in state_dict.keys():
+         #         if 'transformer.' in key:
+         #             state_dict_[key.split('transformer.')[1]] = state_dict[key]
+
+         # # For backward compatibility.
+         # state_dict_self_attention = {}
+         # for key in state_dict_.keys():
+         #     if '.attention.' in key:
+         #         state_dict_self_attention[key.replace(".attention.", ".self_attention.")] = state_dict_[key]
+         #     else:
+         #         state_dict_self_attention[key] = state_dict_[key]
+         # state_dict_ = state_dict_self_attention
+
+         # self.encoder.load_state_dict(state_dict_, strict=strict)
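A hedged construction sketch for the GPTModel class above (not part of the commit). It assumes parallel_state has already been initialized and that `config` is a valid TransformerConfig; pre_process/post_process follow the pipeline-stage convention used by the constructor.

# Illustrative only: the embedding lives on the first stage, the output layer and loss on the last.
from megatron.core import parallel_state
from megatron.core.models.gpt import GPTModel

model = GPTModel(
    config=config,                                            # assumed TransformerConfig
    vocab_size=32000,
    max_sequence_length=2048,
    pre_process=parallel_state.is_pipeline_first_stage(),
    post_process=parallel_state.is_pipeline_last_stage(),
    share_embeddings_and_output_weights=True,
)

# On the last stage, forward() returns a [b, s] per-token loss when labels are given,
# otherwise [b, s, vocab] logits; intermediate stages return the hidden states instead.
out = model(input_ids, position_ids, attention_mask, labels=labels)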
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/__init__.py ADDED
@@ -0,0 +1 @@
+ from .schedules import get_forward_backward_func
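get_forward_backward_func(), re-exported here, picks between the no-pipelining, 1F1B, and interleaved schedules at run time. A hedged sketch of the call pattern documented in the schedules.py docstring further below (all argument values are placeholders):

from megatron.core.pipeline_parallel import get_forward_backward_func

forward_backward_func = get_forward_backward_func()
losses_reduced = forward_backward_func(
    forward_step_func=forward_step,   # user-supplied, see the docstring example in schedules.py
    data_iterator=data_iterator,
    model=model,
    num_microbatches=8,
    seq_length=2048,
    micro_batch_size=4,
    forward_only=False,
)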
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (262 Bytes)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/__pycache__/p2p_communication.cpython-310.pyc ADDED
Binary file (11.1 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/__pycache__/schedules.cpython-310.pyc ADDED
Binary file (23.4 kB)
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/p2p_communication.py ADDED
@@ -0,0 +1,544 @@
+ # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+
+ from functools import reduce
+ import operator
+ from typing import Optional, List, Union, Callable, Tuple
+
+ import torch
+
+ from megatron import core
+ from megatron.core.parallel_state import (
+     get_pipeline_model_parallel_group,
+     get_pipeline_model_parallel_rank,
+     get_pipeline_model_parallel_prev_rank,
+     get_pipeline_model_parallel_next_rank,
+ )
+
+ from megatron.core import ModelParallelConfig
+ from deepspeed.accelerator import get_accelerator
+
+ # Types
+ Shape = Union[List[int], torch.Size]
+
+ def _communicate_shapes(tensor_send_next, tensor_send_prev,
+                         recv_prev, recv_next, config):
+     """Communicate tensor shapes between stages. Used to communicate
+     tensor shapes before the actual tensor communication happens.
+     This is required when the sequence lengths across micro batches
+     are not uniform.
+
+     Takes the following arguments:
+         tensor_send_next: tensor to send to next rank (no tensor sent if
+                           set to None).
+         tensor_send_prev: tensor to send to prev rank (no tensor sent if
+                           set to None).
+         recv_prev: boolean for whether tensor should be received from
+                    previous rank.
+         recv_next: boolean for whether tensor should be received from
+                    next rank.
+     Returns:
+         (recv_prev_shape, recv_next_shape)
+     """
+
+     recv_prev_shape_tensor = None
+     recv_next_shape_tensor = None
+     send_prev_shape_tensor = None
+     send_next_shape_tensor = None
+     if recv_prev:
+         recv_prev_shape_tensor = torch.empty((3),
+                                              device=get_accelerator().current_device(),
+                                              dtype=torch.int64)
+     if recv_next:
+         recv_next_shape_tensor = torch.empty((3),
+                                              device=get_accelerator().current_device(),
+                                              dtype=torch.int64)
+     if tensor_send_prev is not None:
+         send_prev_shape_tensor = torch.tensor(tensor_send_prev.size(),
+                                               device=get_accelerator().current_device(),
+                                               dtype=torch.int64)
+     if tensor_send_next is not None:
+         send_next_shape_tensor = torch.tensor(tensor_send_next.size(),
+                                               device=get_accelerator().current_device(),
+                                               dtype=torch.int64)
+
+     if config.use_ring_exchange_p2p:
+         torch.distributed.ring_exchange(tensor_send_prev=send_prev_shape_tensor,
+                                         tensor_recv_prev=recv_prev_shape_tensor,
+                                         tensor_send_next=send_next_shape_tensor,
+                                         tensor_recv_next=recv_next_shape_tensor,
+                                         group=get_pipeline_model_parallel_group())
+     else:
+         ops = []
+         if send_prev_shape_tensor is not None:
+             send_prev_op = torch.distributed.P2POp(
+                 torch.distributed.isend, send_prev_shape_tensor,
+                 get_pipeline_model_parallel_prev_rank())
+             ops.append(send_prev_op)
+         if recv_prev_shape_tensor is not None:
+             recv_prev_op = torch.distributed.P2POp(
+                 torch.distributed.irecv, recv_prev_shape_tensor,
+                 get_pipeline_model_parallel_prev_rank())
+             ops.append(recv_prev_op)
+         if send_next_shape_tensor is not None:
+             send_next_op = torch.distributed.P2POp(
+                 torch.distributed.isend, send_next_shape_tensor,
+                 get_pipeline_model_parallel_next_rank())
+             ops.append(send_next_op)
+         if recv_next_shape_tensor is not None:
+             recv_next_op = torch.distributed.P2POp(
+                 torch.distributed.irecv, recv_next_shape_tensor,
+                 get_pipeline_model_parallel_next_rank())
+             ops.append(recv_next_op)
+         if len(ops) > 0:
+             reqs = torch.distributed.batch_isend_irecv(ops)
+             for req in reqs:
+                 req.wait()
+
+         # To protect against race condition when using batch_isend_irecv().
+         # should take this out once the bug with batch_isend_irecv is resolved.
+         get_accelerator().synchronize()
+
+     recv_prev_shape = [0, 0, 0]
+     if recv_prev_shape_tensor is not None:
+         recv_prev_shape = recv_prev_shape_tensor.tolist()
+
+     recv_next_shape = [0, 0, 0]
+     if recv_next_shape_tensor is not None:
+         recv_next_shape = recv_next_shape_tensor.tolist()
+
+     return recv_prev_shape, recv_next_shape
+
+ def _batched_p2p_ops(*,
+                      tensor_send_prev: Optional[torch.Tensor],
+                      tensor_recv_prev: Optional[torch.Tensor],
+                      tensor_send_next: Optional[torch.Tensor],
+                      tensor_recv_next: Optional[torch.Tensor],
+                      group: torch.distributed.ProcessGroup):
+     ops = []
+     if tensor_send_prev is not None:
+         send_prev_op = torch.distributed.P2POp(
+             torch.distributed.isend, tensor_send_prev,
+             get_pipeline_model_parallel_prev_rank(),
+             group)
+         ops.append(send_prev_op)
+     if tensor_recv_prev is not None:
+         recv_prev_op = torch.distributed.P2POp(
+             torch.distributed.irecv, tensor_recv_prev,
+             get_pipeline_model_parallel_prev_rank(),
+             group)
+         ops.append(recv_prev_op)
+     if tensor_send_next is not None:
+         send_next_op = torch.distributed.P2POp(
+             torch.distributed.isend, tensor_send_next,
+             get_pipeline_model_parallel_next_rank(),
+             group)
+         ops.append(send_next_op)
+     if tensor_recv_next is not None:
+         recv_next_op = torch.distributed.P2POp(
+             torch.distributed.irecv, tensor_recv_next,
+             get_pipeline_model_parallel_next_rank(),
+             group)
+         ops.append(recv_next_op)
+     if len(ops) > 0:
+         reqs = torch.distributed.batch_isend_irecv(ops)
+     else:
+         reqs = []
+     return reqs
+
+ def _p2p_ops(*,
+              tensor_send_prev: Optional[torch.Tensor],
+              tensor_recv_prev: Optional[torch.Tensor],
+              tensor_send_next: Optional[torch.Tensor],
+              tensor_recv_next: Optional[torch.Tensor],
+              group: torch.distributed.ProcessGroup):
+     reqs = []
+     rank = get_pipeline_model_parallel_rank()
+     if get_pipeline_model_parallel_rank() % 2 == 0:
+         if tensor_send_next is not None:
+             send_next_req = torch.distributed.isend(
+                 tensor=tensor_send_next,
+                 dst=get_pipeline_model_parallel_next_rank(),
+                 group=group,
+             )
+             reqs.append(send_next_req)
+
+         if tensor_recv_prev is not None:
+             recv_prev_req = torch.distributed.irecv(
+                 tensor=tensor_recv_prev,
+                 src=get_pipeline_model_parallel_prev_rank(),
+                 group=group,
+             )
+             reqs.append(recv_prev_req)
+
+         if tensor_send_prev is not None:
+             send_prev_req = torch.distributed.isend(
+                 tensor=tensor_send_prev,
+                 dst=get_pipeline_model_parallel_prev_rank(),
+                 group=group,
+             )
+             reqs.append(send_prev_req)
+
+         if tensor_recv_next is not None:
+             recv_next_req = torch.distributed.irecv(
+                 tensor=tensor_recv_next,
+                 src=get_pipeline_model_parallel_next_rank(),
+                 group=group,
+             )
+             reqs.append(recv_next_req)
+
+     else:
+         if tensor_recv_prev is not None:
+             recv_prev_req = torch.distributed.irecv(
+                 tensor=tensor_recv_prev,
+                 src=get_pipeline_model_parallel_prev_rank(),
+                 group=group,
+             )
+             reqs.append(recv_prev_req)
+
+         if tensor_send_next is not None:
+             send_next_req = torch.distributed.isend(
+                 tensor=tensor_send_next,
+                 dst=get_pipeline_model_parallel_next_rank(),
+                 group=group,
+             )
+             reqs.append(send_next_req)
+
+         if tensor_recv_next is not None:
+             recv_next_req = torch.distributed.irecv(
+                 tensor=tensor_recv_next,
+                 src=get_pipeline_model_parallel_next_rank(),
+                 group=group,
+             )
+             reqs.append(recv_next_req)
+
+         if tensor_send_prev is not None:
+             send_prev_req = torch.distributed.isend(
+                 tensor=tensor_send_prev,
+                 dst=get_pipeline_model_parallel_prev_rank(),
+                 group=group,
+             )
+             reqs.append(send_prev_req)
+     return reqs
+
+ def _communicate(*, tensor_send_next: Optional[torch.Tensor],
+                  tensor_send_prev: Optional[torch.Tensor],
+                  recv_prev: bool,
+                  recv_next: bool,
+                  tensor_shape: Shape,
+                  config: ModelParallelConfig,
+                  wait_on_reqs: bool = True) -> Tuple[torch.Tensor, torch.Tensor]:
+     """Communicate tensors between stages. Used as helper method in other
+     communication methods that are used in megatron/schedules.py.
+
+     Arguments:
+         tensor_send_next (torch.Tensor, optional):
+             Tensor to send to next rank (no tensor sent if None)
+
+         tensor_send_prev (torch.Tensor, optional):
+             Tensor to send to prev rank (no tensor sent if None)
+
+         recv_prev (boolean, required):
+             whether tensor should be received from previous rank.
+
+         recv_next (boolean, required):
+             whether tensor should be received from next rank.
+
+         tensor_shape (List[int] or torch.Size, required):
+             shape of tensor to receive (this method assumes that all
+             tensors sent and received in a single function call are
+             the same shape).
+
+         wait_on_reqs (boolean, optional, default=False):
+             For non-batched p2p communication, wait on each request
+             before returning.
+
+     Returns:
+         tuple containing
+
+         - tensor_recv_prev: torch.Tensor if recv_prev is True, None otherwise.
+         - tensor_recv_next: torch.Tensor if recv_next is True, None otherwise.
+
+     """
+
+     # Create placeholder tensors for receive in forward and backward directions
+     # if needed.
+     tensor_recv_prev = None
+     tensor_recv_next = None
+
+     if not config.variable_seq_lengths:
+         recv_prev_shape = tensor_shape
+         recv_next_shape = tensor_shape
+     else:
+         recv_prev_shape, recv_next_shape = \
+             _communicate_shapes(tensor_send_next, tensor_send_prev,
+                                 recv_prev, recv_next, config)
+
+     if recv_prev:
+         if config.pipeline_dtype is None:
+             raise RuntimeError("pipeline_dtype must be provided if recv_prev is True")
+         if tensor_shape is None:
+             raise RuntimeError(
+                 "tensor_shape must be specified if recv_prev is True. "
+                 "Common tensor_shape is (seq_length, micro_batch_size, hidden_size)"
+             )
+         tensor_recv_prev = torch.empty(recv_prev_shape,
+                                        requires_grad=True,
+                                        device=get_accelerator().current_device(),
+                                        dtype=config.pipeline_dtype)
+     if recv_next:
+         if config.pipeline_dtype is None:
+             raise RuntimeError("dtype must be provided if recv_next is True")
+         if tensor_shape is None:
+             raise RuntimeError(
+                 "tensor_shape must be specified if recv_next is True. "
+                 "Common tensor_shape is (seq_length, micro_batch_size, hidden_size)"
+             )
+         tensor_recv_next = torch.empty(recv_next_shape,
+                                        requires_grad=True,
+                                        device=get_accelerator().current_device(),
+                                        dtype=config.pipeline_dtype)
+
+     # Send tensors in both the forward and backward directions as appropriate.
+     if config.use_ring_exchange_p2p:
+         def _ring_exchange_wrapper(**kwargs):
+             torch.distributed.ring_exchange(**kwargs)
+             return []
+         p2p_func = _ring_exchange_wrapper
+     elif config.batch_p2p_comm:
+         assert wait_on_reqs
+         p2p_func = _batched_p2p_ops
+     else:
+         p2p_func = _p2p_ops
+
+     reqs = p2p_func(tensor_send_prev=tensor_send_prev,
+                     tensor_recv_prev=tensor_recv_prev,
+                     tensor_send_next=tensor_send_next,
+                     tensor_recv_next=tensor_recv_next,
+                     group=get_pipeline_model_parallel_group())
+
+     if wait_on_reqs and len(reqs) > 0:
+         for req in reqs:
+             req.wait()
+         reqs = None
+
+     if config.batch_p2p_comm and config.batch_p2p_sync:
+         # To protect against race condition when using batch_isend_irecv().
+         # User should assert that we have a modern enough PyTorch to not need this
+         get_accelerator().synchronize()
+
+     return tensor_recv_prev, tensor_recv_next, reqs
+
+
+ def recv_forward(tensor_shape: Shape,
+                  config: ModelParallelConfig) -> torch.Tensor:
+     """ Receive tensor from previous rank in pipeline (forward receive).
+
+
+     See _communicate for argument details.
+     """
+
+     if core.parallel_state.is_pipeline_first_stage():
+         input_tensor = None
+     else:
+         if config.timers is not None:
+             config.timers('forward-recv', log_level=2).start()
+         input_tensor, _, _ = _communicate(
+             tensor_send_next=None,
+             tensor_send_prev=None,
+             recv_prev=True,
+             recv_next=False,
+             tensor_shape=tensor_shape,
+             config=config)
+         if config.timers is not None:
+             config.timers('forward-recv').stop()
+     return input_tensor
+
+
+ def recv_backward(tensor_shape: Shape,
+                   config: ModelParallelConfig) -> torch.Tensor:
+     """Receive tensor from next rank in pipeline (backward receive).
+
+     See _communicate for argument details.
+     """
+     if core.parallel_state.is_pipeline_last_stage():
+         output_tensor_grad = None
+     else:
+         if config.timers is not None:
+             config.timers('backward-recv', log_level=2).start()
+         _, output_tensor_grad, _ = _communicate(
+             tensor_send_next=None,
+             tensor_send_prev=None,
+             recv_prev=False,
+             recv_next=True,
+             tensor_shape=tensor_shape,
+             config=config)
+         if config.timers is not None:
+             config.timers('backward-recv').stop()
+     return output_tensor_grad
+
+
+ def send_forward(output_tensor: torch.Tensor,
+                  config: ModelParallelConfig) -> None:
+     """Send tensor to next rank in pipeline (forward send).
+
+     See _communicate for argument details.
+     """
+
+     if not core.parallel_state.is_pipeline_last_stage():
+         if config.timers is not None:
+             config.timers('forward-send', log_level=2).start()
+         _communicate(
+             tensor_send_next=output_tensor,
+             tensor_send_prev=None,
+             recv_prev=False,
+             recv_next=False,
+             tensor_shape=None,
+             config=config)
+         if config.timers is not None:
+             config.timers('forward-send').stop()
+
+
+ def send_backward(input_tensor_grad: torch.Tensor,
+                   config: ModelParallelConfig) -> None:
+     """Send tensor to previous rank in pipeline (backward send).
+
+     See _communicate for argument details.
+     """
+     if not core.parallel_state.is_pipeline_first_stage():
+         if config.timers is not None:
+             config.timers('backward-send', log_level=2).start()
+         _communicate(
+             tensor_send_next=None,
+             tensor_send_prev=input_tensor_grad,
+             recv_prev=False,
+             recv_next=False,
+             tensor_shape=None,
+             config=config)
+         if config.timers is not None:
+             config.timers('backward-send').stop()
+
+
+ def send_forward_recv_backward(output_tensor: torch.Tensor,
+                                tensor_shape: Shape,
+                                config: ModelParallelConfig) -> torch.Tensor:
+     """Batched send and recv with next rank in pipeline.
+
+     See _communicate for argument details.
+     """
+     if core.parallel_state.is_pipeline_last_stage():
+         output_tensor_grad = None
+     else:
+         if config.timers is not None:
+             config.timers('forward-send-backward-recv', log_level=2).start()
+         _, output_tensor_grad, _ = _communicate(
+             tensor_send_next=output_tensor,
+             tensor_send_prev=None,
+             recv_prev=False,
+             recv_next=True,
+             tensor_shape=tensor_shape,
+             config=config)
+         if config.timers is not None:
+             config.timers('forward-send-backward-recv').stop()
+     return output_tensor_grad
+
+
+ def send_backward_recv_forward(input_tensor_grad: torch.Tensor,
+                                tensor_shape: Shape,
+                                config: ModelParallelConfig) -> torch.Tensor:
+     """Batched send and recv with previous rank in pipeline.
+
+     See _communicate for argument details.
+     """
+     if core.parallel_state.is_pipeline_first_stage():
+         input_tensor = None
+     else:
+         if config.timers is not None:
+             config.timers('backward-send-forward-recv', log_level=2).start()
+         input_tensor, _, _ = _communicate(
+             tensor_send_next=None,
+             tensor_send_prev=input_tensor_grad,
+             recv_prev=True,
+             recv_next=False,
+             tensor_shape=tensor_shape,
+             config=config)
+         if config.timers is not None:
+             config.timers('backward-send-forward-recv').stop()
+     return input_tensor
+
+
+ def send_forward_recv_forward(output_tensor: torch.Tensor,
+                               recv_prev: bool,
+                               tensor_shape: Shape,
+                               config: ModelParallelConfig,
+                               overlap_p2p_comm: bool = False) -> torch.Tensor:
+     """Batched recv from previous rank and send to next rank in pipeline.
+
+     See _communicate for argument details.
+     """
+     if config.timers is not None:
+         config.timers('forward-send-forward-recv', log_level=2).start()
+     input_tensor, _, wait_handles = _communicate(
+         tensor_send_next=output_tensor,
+         tensor_send_prev=None,
+         recv_prev=recv_prev,
+         recv_next=False,
+         tensor_shape=tensor_shape,
+         wait_on_reqs=(not overlap_p2p_comm),
+         config=config)
+     if config.timers is not None:
+         config.timers('forward-send-forward-recv').stop()
+     if overlap_p2p_comm:
+         return input_tensor, wait_handles
+     return input_tensor
+
+
+ def send_backward_recv_backward(input_tensor_grad: torch.Tensor,
+                                 recv_next: bool,
+                                 tensor_shape: Shape,
+                                 config: ModelParallelConfig,
+                                 overlap_p2p_comm: bool = False) -> torch.Tensor:
+     """Batched recv from next rank and send to previous rank in pipeline.
+
+     See _communicate for argument details.
+     """
+     if config.timers is not None:
+         config.timers('backward-send-backward-recv', log_level=2).start()
+     _, output_tensor_grad, wait_handles = _communicate(
+         tensor_send_next=None,
+         tensor_send_prev=input_tensor_grad,
+         recv_prev=False,
+         recv_next=recv_next,
+         tensor_shape=tensor_shape,
+         wait_on_reqs=(not overlap_p2p_comm),
+         config=config)
+     if config.timers is not None:
+         config.timers('backward-send-backward-recv').stop()
+     if overlap_p2p_comm:
+         return output_tensor_grad, wait_handles
+     return output_tensor_grad
+
+
+ def send_forward_backward_recv_forward_backward(
+         output_tensor: torch.Tensor,
+         input_tensor_grad: torch.Tensor,
+         recv_prev: bool,
+         recv_next: bool,
+         tensor_shape: Shape,
+         config: ModelParallelConfig) -> torch.Tensor:
+     """Batched send and recv with previous and next ranks in pipeline.
+
+     See _communicate for argument details.
+     """
+     if config.timers is not None:
+         config.timers('forward-backward-send-forward-backward-recv',
+                       log_level=2).start()
+     input_tensor, output_tensor_grad, _ = _communicate(
+         tensor_send_next=output_tensor,
+         tensor_send_prev=input_tensor_grad,
+         recv_prev=recv_prev,
+         recv_next=recv_next,
+         tensor_shape=tensor_shape,
+         config=config)
+     if config.timers is not None:
+         config.timers('forward-backward-send-forward-backward-recv').stop()
+     return input_tensor, output_tensor_grad
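To illustrate how the helpers above pair up across adjacent pipeline stages, a hedged sketch (not part of the commit); it assumes Megatron's parallel_state is initialized and `config` is a ModelParallelConfig with pipeline_dtype set.

from megatron.core.pipeline_parallel import p2p_communication

# tensor_shape is (seq_length, micro_batch_size, hidden_size); values are placeholders.
tensor_shape = (2048, 4, 4096)

# Stage i (not the last stage): push activations downstream.
p2p_communication.send_forward(output_tensor, config)

# Stage i+1 (not the first stage): the matching receive allocates an empty buffer of
# tensor_shape / config.pipeline_dtype and waits on the P2P request before returning it.
input_tensor = p2p_communication.recv_forward(tensor_shape, config)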
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/pipeline_parallel/schedules.py ADDED
@@ -0,0 +1,1185 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2
+
3
+ import contextlib
4
+ from typing import Callable, Iterator, List, Optional, Union
5
+
6
+ import torch
7
+ from torch.autograd.variable import Variable
8
+ from torch.nn.parallel.distributed import DistributedDataParallel as torchDDP
9
+
10
+ from megatron import get_args
11
+ from megatron import core
12
+ from megatron.core import parallel_state
13
+ from megatron.core.pipeline_parallel import p2p_communication
14
+ from megatron.core.enums import ModelType
15
+ from megatron.core.utils import get_attr_wrapped_model, get_model_type, get_model_config
16
+
17
+ from megatron.utils import unwrap_model
18
+ from megatron.model import DistributedDataParallel as LocalDDP
19
+ from megatron.model import Float16Module
20
+
21
+ # Types
22
+ Shape = Union[List[int], torch.Size]
23
+
24
+ def get_forward_backward_func():
25
+ """Retrieves the appropriate forward_backward function given the
26
+ configuration of parallel_state.
27
+
28
+ Returns a function that will perform all of the forward and
29
+ backward passes of the model given the pipeline model parallel
30
+ world size and virtual pipeline model parallel world size in the
31
+ global parallel_state.
32
+
33
+ Note that if using sequence parallelism, the sequence length component of
34
+ the tensor shape is updated to original_sequence_length /
35
+ tensor_model_parallel_world_size.
36
+
37
+ The function returned takes the following arguments:
38
+
39
+ forward_step_func (required): A function that takes a data
40
+ iterator and a model as its arguments and return the model's
41
+ forward output and the loss function. The loss function should
42
+ take one torch.Tensor and return a torch.Tensor of loss and a
43
+ dictionary of string -> torch.Tensor.
44
+
45
+ A third argument, checkpoint_activations_microbatch, indicates
46
+ that the activations for this microbatch should be
47
+ checkpointed. A None value for this argument indicates that
48
+ the default from the configuration should be used. This is
49
+ used when the
50
+ num_microbatches_with_partial_activation_checkpoints is used.
51
+
52
+ For example:
53
+
54
+ def loss_func(loss_mask, output_tensor):
55
+ losses = output_tensor.float()
56
+ loss_mask = loss_mask.view(-1).float()
57
+ loss = torch.sum(losses.view(-1) * loss_mask) / loss_mask.sum()
58
+
59
+ # Reduce loss for logging.
60
+ averaged_loss = average_losses_across_data_parallel_group([loss])
61
+
62
+ return loss, {'lm loss': averaged_loss[0]}
63
+
64
+ def forward_step(data_iterator, model):
65
+ data, loss_mask = next(data_iterator)
66
+ output = model(data)
67
+ return output, partial(loss_func, loss_mask)
68
+
69
+
70
+ forward_backward_func(forward_step_func=forward_step, ...)
71
+
72
+
73
+ data_iterator (required): an iterator over the data, will be
74
+ passed as is to forward_step_func. Expected to be a list of
75
+ iterators in the case of interleaved pipeline parallelism.
76
+
77
+ model (required): the actual model. Expected to be a list of modules in the case of interleaved
78
+ pipeline parallelism. Must be a (potentially wrapped) megatron.core.models.MegatronModule.
79
+
80
+ num_microbatches (int, required):
81
+ The number of microbatches to go through
82
+
83
+ seq_length (int, required): Sequence length of the current global batch. If this is a dual-stack
84
+ transformer, this is the encoder's sequence length. This is ignored if variable_seq_lengths
85
+ in the config is True. Otherwise, each microbatch in the current global batch size must use
86
+ this sequence length.
87
+
88
+ micro_batch_size (int, required): The number of sequences in a microbatch.
89
+
90
+ decoder_seq_length (int, optional): The sequence length for the decoder in a dual-stack
91
+ transformer. This is ignored for a single-stack transformer.
92
+
93
+ forward_only (optional, default = False): Perform only the forward step
94
+
95
+ collect_non_loss_data (optional, bool, default=False): TODO
96
+
97
+ """
98
+ pipeline_model_parallel_size = parallel_state.get_pipeline_model_parallel_world_size()
99
+ if pipeline_model_parallel_size > 1:
100
+ if parallel_state.get_virtual_pipeline_model_parallel_world_size() is not None:
101
+ forward_backward_func = forward_backward_pipelining_with_interleaving
102
+ else:
103
+ forward_backward_func = forward_backward_pipelining_without_interleaving
104
+ else:
105
+ forward_backward_func = forward_backward_no_pipelining
106
+ return forward_backward_func
107
+
108
+ def deallocate_output_tensor(out, deallocate_pipeline_outputs=False):
109
+ '''Pseudo-deallocate (i.e., set to scalar) the output tensor's '.data' field.
110
+
111
+ This method should be called right after the output tensor has been
112
+ sent to the next pipeline stage. At this point, the output tensor is
113
+ only useful for its '.grad_fn' field, and not its '.data'.
114
+ '''
115
+ if (out is None) or (not deallocate_pipeline_outputs):
116
+ return
117
+ assert isinstance(out, torch.Tensor), \
118
+ "expected Tensor, found %s." % type(out).__name__
119
+ assert out._base is None, \
120
+ "counter-productive to free a view of another tensor."
121
+ out.data = torch.empty(
122
+ (1,),
123
+ device = out.device,
124
+ dtype = out.dtype,
125
+ )
126
+
127
+ def custom_backward(output, grad_output):
128
+ '''Directly call C++ autograd engine.
129
+
130
+ To make the 'deallocate_output_tensor' (above) optimization work, the C++
131
+ autograd engine must be called directly, bypassing Pytorch's
132
+ torch.autograd.backward. Pytorch's 'backward' checks that the output and
133
+ grad have the same shape, while C++'s 'backward' does not.
134
+ '''
135
+
136
+ assert output.numel() == 1, \
137
+ "output should be pseudo-'freed' in schedule, to optimize memory"
138
+ assert isinstance(output, torch.Tensor), \
139
+ "output == '%s'." % type(output).__name__
140
+ assert isinstance(grad_output, (torch.Tensor, type(None))), \
141
+ "grad_output == '%s'." % type(grad_output).__name__
142
+
143
+ # Handle scalar output
144
+ if grad_output is None:
145
+ assert output.numel() == 1, "implicit grad requires scalar output."
146
+ grad_output = torch.ones_like(
147
+ output,
148
+ memory_format = torch.preserve_format,
149
+ )
150
+
151
+ # Call c++ engine [ see torch/csrc/autograd/python_engine.cpp ]
152
+ Variable._execution_engine.run_backward(
153
+ tensors = (output,),
154
+ grad_tensors = (grad_output,),
155
+ keep_graph = False,
156
+ create_graph = False,
157
+ inputs = tuple(),
158
+ allow_unreachable=True,
159
+ accumulate_grad=True,
160
+ )
161
+
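The interplay between deallocate_output_tensor() and custom_backward() can be seen in isolation on a toy graph. The sketch below is not part of the schedule; it only demonstrates, on a plain PyTorch tensor, that the grad_fn survives the pseudo-free and that the C++ engine accepts a grad whose shape no longer matches the shrunken '.data'.

```python
# Toy demonstration of the pseudo-free + direct-engine-backward pattern above.
import torch
from torch.autograd import Variable

x = torch.randn(4, 3, requires_grad=True)
out = (x * 2).sum(dim=1)            # stand-in for a pipeline stage's output
grad_out = torch.ones_like(out)     # stand-in for the grad received from downstream

# Pseudo-free the activation storage while keeping the autograd graph alive.
out.data = torch.empty((1,), device=out.device, dtype=out.dtype)

# torch.autograd.backward(out, grad_out) would now fail its shape check,
# so drive the C++ engine directly, exactly as custom_backward() does.
Variable._execution_engine.run_backward(
    tensors=(out,),
    grad_tensors=(grad_out,),
    keep_graph=False,
    create_graph=False,
    inputs=tuple(),
    allow_unreachable=True,
    accumulate_grad=True,
)
print(x.grad)   # gradients still reach the leaf through the retained grad_fn
```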
162
+
163
+
164
+
165
+
166
+ def forward_step(forward_step_func,
167
+ data_iterator,
168
+ model,
169
+ num_microbatches,
170
+ input_tensor,
171
+ forward_data_store,
172
+ config,
173
+ collect_non_loss_data=False,
174
+ checkpoint_activations_microbatch=None):
175
+ """Forward step for passed-in model.
176
+
177
+ If first stage, input tensor is obtained from data_iterator, otherwise
178
+ passed-in input_tensor is used.
179
+
180
+ Returns output tensor."""
181
+ args = get_args()
182
+ if config.timers is not None:
183
+ config.timers('forward-compute', log_level=2).start()
184
+
185
+ unwrap_output_tensor = False
186
+ if not isinstance(input_tensor, list):
187
+ input_tensor = [input_tensor]
188
+ unwrap_output_tensor = True
189
+
190
+ set_input_tensor = get_attr_wrapped_model(model, "set_input_tensor")
191
+ set_input_tensor(input_tensor)
192
+
193
+ if config.enable_autocast:
194
+ context_manager = torch.autocast("cuda", dtype=config.autocast_dtype)
195
+ else:
196
+ context_manager = contextlib.nullcontext()
197
+ with context_manager:
198
+ if checkpoint_activations_microbatch is None:
199
+ output_tensor, loss_func = forward_step_func(data_iterator, model)
200
+ else:
201
+ output_tensor, loss_func = forward_step_func(data_iterator, model, checkpoint_activations_microbatch)
202
+
203
+ if parallel_state.is_pipeline_last_stage():
204
+ if not collect_non_loss_data:
205
+ output_tensor = loss_func(output_tensor)
206
+ loss, loss_reduced = output_tensor
207
+ if not args.no_pipeline_parallel:
208
+ output_tensor = loss / num_microbatches
209
+ else:
210
+ output_tensor = loss
211
+ forward_data_store.append(loss_reduced)
212
+ else:
213
+ data = loss_func(output_tensor, non_loss_data=True)
214
+ forward_data_store.append(data)
215
+
216
+ if config.timers is not None:
217
+ config.timers('forward-compute').stop()
218
+
219
+ # If T5 model (or other model with encoder and decoder)
220
+ # and in decoder stack, then send encoder_hidden_state
221
+ # downstream as well.
222
+ model_type = get_model_type(model)
223
+ if parallel_state.is_pipeline_stage_after_split() and \
224
+ model_type == ModelType.encoder_and_decoder:
225
+ return [output_tensor, input_tensor[-1]]
226
+ if unwrap_output_tensor:
227
+ return output_tensor
228
+ return [output_tensor]
229
+
230
+
231
+ def backward_step(input_tensor, output_tensor, output_tensor_grad, model_type, config, model=None):
232
+ """Backward step through passed-in output tensor.
233
+
234
+ If last stage, output_tensor_grad is None, otherwise gradient of loss
235
+ with respect to stage's output tensor.
236
+
237
+ Returns gradient of loss with respect to input tensor (None if first
238
+ stage)."""
239
+
240
+ # NOTE: This code currently can handle at most one skip connection. It
241
+ # needs to be modified slightly to support arbitrary numbers of skip
242
+ # connections.
243
+ args = get_args()
244
+ if args.deepspeed:
245
+ assert model is not None
246
+
247
+ if config.timers is not None:
248
+ config.timers('backward-compute', log_level=2).start()
249
+
250
+ # Retain the grad on the input_tensor.
251
+ unwrap_input_tensor_grad = False
252
+ if not isinstance(input_tensor, list):
253
+ input_tensor = [input_tensor]
254
+ unwrap_input_tensor_grad = True
255
+ for x in input_tensor:
256
+ if x is not None:
257
+ x.retain_grad()
258
+
259
+ if not isinstance(output_tensor, list):
260
+ output_tensor = [output_tensor]
261
+ if not isinstance(output_tensor_grad, list):
262
+ output_tensor_grad = [output_tensor_grad]
263
+
264
+ # Backward pass.
265
+ if args.deepspeed:
266
+ model.backward(output_tensor[0])
267
+ else:
268
+ if output_tensor_grad[0] is None and config.grad_scale_func is not None:
269
+ output_tensor[0] = config.grad_scale_func(output_tensor[0])
270
+
271
+ if config.deallocate_pipeline_outputs:
272
+ custom_backward(output_tensor[0], output_tensor_grad[0])
273
+ else:
274
+ torch.autograd.backward(output_tensor[0], grad_tensors=output_tensor_grad[0])
275
+
276
+ # Collect the grad of the input_tensor.
277
+ input_tensor_grad = [None]
278
+ if input_tensor is not None:
279
+ input_tensor_grad = []
280
+ for x in input_tensor:
281
+ if x is None:
282
+ input_tensor_grad.append(None)
283
+ else:
284
+ input_tensor_grad.append(x.grad)
285
+
286
+ # Handle single skip connection if it exists (encoder_hidden_state in
287
+ # model with encoder and decoder).
288
+ if parallel_state.get_pipeline_model_parallel_world_size() > 1 and \
289
+ parallel_state.is_pipeline_stage_after_split() and \
290
+ model_type == ModelType.encoder_and_decoder:
291
+ if output_tensor_grad[1] is not None:
292
+ input_tensor_grad[-1].add_(output_tensor_grad[1])
293
+ if unwrap_input_tensor_grad:
294
+ input_tensor_grad = input_tensor_grad[0]
295
+
296
+ if config.timers is not None:
297
+ config.timers('backward-compute').stop()
298
+
299
+ return input_tensor_grad
300
+
301
+
302
+ def forward_backward_no_pipelining(*,
303
+ forward_step_func,
304
+ data_iterator: Union[Iterator, List[Iterator]],
305
+ model: Union[torch.nn.Module, List[torch.nn.Module]],
306
+ num_microbatches: int,
307
+ seq_length: int, # unused
308
+ micro_batch_size: int, # unused
309
+ decoder_seq_length: int = None, # unused
310
+ forward_only: bool = False,
311
+ collect_non_loss_data: bool = False,
312
+ ):
313
+ """Run forward and backward passes with no pipeline parallelism
314
+ (no inter-stage communication).
315
+
316
+ Returns the list of per-microbatch loss_func outputs (forward_data_store).
317
+
318
+
319
+ See get_forward_backward_func() for argument details
320
+ """
321
+
322
+ if isinstance(model, list):
323
+ assert len(model) == 1, \
324
+ "non-pipeline-parallel schedule does not support model chunking"
325
+ model = model[0]
326
+ if isinstance(data_iterator, list):
327
+ assert len(data_iterator) == 1, \
328
+ "non-pipeline-parallel schedule does not support model chunking"
329
+ data_iterator = data_iterator[0]
330
+
331
+ config = get_model_config(model)
332
+
333
+ no_sync_func = config.no_sync_func
334
+ if no_sync_func is None and isinstance(model, torchDDP):
335
+ no_sync_func = model.no_sync
336
+ if no_sync_func is None:
337
+ no_sync_func = contextlib.nullcontext
338
+
339
+ args = get_args()
340
+ if args.deepspeed:
341
+ model.set_gradient_accumulation_boundary(False)
342
+
343
+ model_type = get_model_type(model)
344
+
345
+ forward_data_store = []
346
+ input_tensor, output_tensor_grad = None, None
347
+ with no_sync_func():
348
+ for i in range(num_microbatches - 1):
349
+ output_tensor = forward_step(forward_step_func, data_iterator, model, num_microbatches,
350
+ input_tensor, forward_data_store, config, collect_non_loss_data)
351
+ if not forward_only:
352
+ backward_step(input_tensor, output_tensor, output_tensor_grad, model_type, config, model)
353
+ if args.deepspeed:
354
+ model.set_gradient_accumulation_boundary(True)
355
+
356
+ # Run computation for last microbatch out of context handler (want to
357
+ # synchronize gradients).
358
+ output_tensor = forward_step(forward_step_func, data_iterator, model, num_microbatches,
359
+ input_tensor, forward_data_store, config, collect_non_loss_data)
360
+
361
+ if not forward_only:
362
+ backward_step(input_tensor, output_tensor, output_tensor_grad, model_type, config, model)
363
+
364
+ return forward_data_store
365
+
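The structure above, running all but the last microbatch inside no_sync_func() and the final one outside it so gradients are reduced exactly once, is the standard DDP gradient-accumulation pattern. A hedged, stand-alone sketch of the same idea with plain torch DistributedDataParallel (placeholder model and data; a process group is assumed to be initialized):

```python
# Hedged sketch of the accumulate-then-sync pattern used above, with plain DDP.
def run_global_batch(ddp_model, microbatches, loss_fn):
    num_microbatches = len(microbatches)
    with ddp_model.no_sync():                       # gradient all-reduce skipped
        for batch in microbatches[:-1]:
            loss = loss_fn(ddp_model(batch)) / num_microbatches
            loss.backward()
    # Last microbatch runs outside no_sync(), triggering the all-reduce.
    loss = loss_fn(ddp_model(microbatches[-1])) / num_microbatches
    loss.backward()
```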
366
+
367
+ def forward_backward_pipelining_with_interleaving(*,
368
+ forward_step_func,
369
+ data_iterator: Union[Iterator, List[Iterator]],
370
+ model: Union[torch.nn.Module, List[torch.nn.Module]],
371
+ num_microbatches: int,
372
+ seq_length: int,
373
+ micro_batch_size: int,
374
+ decoder_seq_length: int = None,
375
+ forward_only: bool = False,
376
+ collect_non_loss_data: bool = False,
377
+ ):
378
+ """Run interleaved 1F1B schedule (model split into model chunks), with
379
+ communication between pipeline stages as needed.
380
+
381
+ Returns a list with the per-microbatch loss_func outputs if on the last pipeline stage, an empty list otherwise."""
382
+ assert isinstance(model, list), \
383
+ "interleaved pipeline parallelism expected model chunking"
384
+ assert all(isinstance(chunk, torch.nn.Module) for chunk in model), \
385
+ "invalid model chunking"
386
+ assert isinstance(data_iterator, list), \
387
+ "interleaved pipeline parallelism expected each model chunk to have a data iterator"
388
+
389
+ config = get_model_config(model[0])
390
+ if config.overlap_p2p_comm and config.batch_p2p_comm:
391
+ raise ValueError("Can not use both overlap_p2p_comm and batch_p2p_comm")
392
+
393
+ # Disable async grad reductions
394
+ no_sync_func = config.no_sync_func
395
+ if no_sync_func is None and all(isinstance(chunk, torchDDP) for chunk in model):
396
+ def multi_no_sync():
397
+ stack = contextlib.ExitStack()
398
+ for chunk in model:
399
+ stack.enter_context(chunk.no_sync())
400
+ return stack
401
+ no_sync_func = multi_no_sync
402
+ if no_sync_func is None:
403
+ no_sync_func = contextlib.nullcontext
404
+ no_sync_context = None
405
+ def disable_grad_sync():
406
+ """Disable asynchronous grad reductions"""
407
+ nonlocal no_sync_context
408
+ if no_sync_context is None:
409
+ no_sync_context = no_sync_func()
410
+ no_sync_context.__enter__()
411
+ def enable_grad_sync():
412
+ """Enable asynchronous grad reductions"""
413
+ nonlocal no_sync_context
414
+ if no_sync_context is not None:
415
+ no_sync_context.__exit__(None, None, None)
416
+ no_sync_context = None
417
+ disable_grad_sync()
418
+
419
+ # Model chunk IDs with synchronized grads
420
+ synchronized_model_chunks = set()
421
+
422
+ input_tensors = [[] for _ in range(len(model))]
423
+ output_tensors = [[] for _ in range(len(model))]
424
+ forward_data_store = []
425
+ if not forward_only:
426
+ output_tensor_grads = [[] for _ in range(len(model))]
427
+
428
+ pipeline_parallel_size = parallel_state.get_pipeline_model_parallel_world_size()
429
+ pipeline_parallel_rank = parallel_state.get_pipeline_model_parallel_rank()
430
+
431
+ if num_microbatches % pipeline_parallel_size != 0:
432
+ msg = f'number of microbatches ({num_microbatches}) is not divisible by '
433
+ msg += f'pipeline-model-parallel-size ({pipeline_parallel_size}) '
434
+ msg += 'when using interleaved schedule'
435
+ raise RuntimeError(msg)
436
+
437
+ model_type = get_model_type(model[0])
438
+ if model_type == ModelType.encoder_and_decoder:
439
+ raise RuntimeError("Interleaving is not supported with an encoder and decoder model.")
440
+
441
+ if decoder_seq_length is not None and decoder_seq_length != seq_length:
442
+ raise RuntimeError("Interleaving is not supported with a different decoder sequence length.")
443
+
444
+ tensor_shape = [seq_length, micro_batch_size, config.hidden_size]
445
+ if config.sequence_parallel:
446
+ tensor_shape[0] = tensor_shape[0] // parallel_state.get_tensor_model_parallel_world_size()
447
+
448
+ # Compute number of warmup and remaining microbatches.
449
+ num_model_chunks = len(model)
450
+ total_num_microbatches = num_microbatches * num_model_chunks
451
+ all_warmup_microbatches = False
452
+ if forward_only:
453
+ num_warmup_microbatches = total_num_microbatches
454
+ else:
455
+ # Run all forward passes and then all backward passes if number of
456
+ # microbatches is just the number of pipeline stages.
457
+ # Otherwise, perform (num_model_chunks-1)*pipeline_parallel_size warmup
458
+ # microbatches on all workers, plus a per-rank number that depends on
459
+ # stage ID (more forward passes for earlier stages; later stages can
460
+ # immediately start with 1F1B).
461
+ if num_microbatches == pipeline_parallel_size:
462
+ num_warmup_microbatches = total_num_microbatches
463
+ all_warmup_microbatches = True
464
+ else:
465
+ num_warmup_microbatches = (pipeline_parallel_size - pipeline_parallel_rank - 1) * 2
466
+ num_warmup_microbatches += (num_model_chunks - 1) * pipeline_parallel_size
467
+ num_warmup_microbatches = min(num_warmup_microbatches, total_num_microbatches)
468
+ num_microbatches_remaining = total_num_microbatches - num_warmup_microbatches
469
+
470
+ # Checkpoint the activations of partial Transformer layers in a number of micro-batches
471
+ # within the maximum outstanding micro-batch backpropagations.
472
+ # Micro-batches with the ids less than 'num_microbatches_with_partial_activation_checkpoints'
473
+ # checkpoint partial Transformer layers (or skip checkpointing) and
474
+ # the rest of micro-batches within a window of micro-batches checkpoint
475
+ # all Transformer layers. The window of micro-batches is set by the maximum
476
+ # outstanding backpropagations and becomes smaller at later pipeline stages.
477
+ # Please refer to Appendix C in https://arxiv.org/pdf/2205.05198.pdf
478
+ max_outstanding_backprops = None
479
+ if config.num_microbatches_with_partial_activation_checkpoints is not None:
480
+ max_outstanding_backprops = num_warmup_microbatches + 1
481
+
482
+ # Synchronize params for first two model chunks
483
+ if config.param_sync_func is not None:
484
+ config.param_sync_func(model[0].parameters())
485
+ config.param_sync_func(model[1].parameters())
486
+
487
+ def get_model_chunk_id(microbatch_id, forward):
488
+ """Helper method to get the model chunk ID given the iteration number."""
489
+ microbatch_id_in_group = microbatch_id % (pipeline_parallel_size * num_model_chunks)
490
+ model_chunk_id = microbatch_id_in_group // pipeline_parallel_size
491
+ if not forward:
492
+ model_chunk_id = (num_model_chunks - model_chunk_id - 1)
493
+ return model_chunk_id
494
+
495
+ def is_first_microbatch_for_model_chunk(microbatch_id: int) -> bool:
496
+ """Check if an iteration is the first for a model chunk."""
497
+ microbatch_group_size = pipeline_parallel_size * num_model_chunks
498
+ num_microbatch_groups = total_num_microbatches // microbatch_group_size
499
+ microbatch_group_id = microbatch_id // microbatch_group_size
500
+ microbatch_id_in_group = microbatch_id % microbatch_group_size
501
+ if microbatch_group_id == 0:
502
+ return microbatch_id_in_group % pipeline_parallel_size == 0
503
+ else:
504
+ return False
505
+
506
+ def is_last_microbatch_for_model_chunk(microbatch_id: int) -> bool:
507
+ """Check if an iteration is the last for a model chunk."""
508
+ microbatch_group_size = pipeline_parallel_size * num_model_chunks
509
+ num_microbatch_groups = total_num_microbatches // microbatch_group_size
510
+ microbatch_group_id = microbatch_id // microbatch_group_size
511
+ microbatch_id_in_group = microbatch_id % microbatch_group_size
512
+ if microbatch_group_id == num_microbatch_groups - 1:
513
+ return microbatch_id_in_group % pipeline_parallel_size == pipeline_parallel_size - 1
514
+ else:
515
+ return False
516
+
517
+
518
+ def forward_step_helper(microbatch_id, checkpoint_activations_microbatch):
519
+ """Helper method to run forward step with model split into chunks
520
+ (run set_virtual_pipeline_model_parallel_rank() before calling
521
+ forward_step())."""
522
+ model_chunk_id = get_model_chunk_id(microbatch_id, forward=True)
523
+ parallel_state.set_virtual_pipeline_model_parallel_rank(model_chunk_id)
524
+
525
+ # launch param synchronization for next model chunk
526
+ # Note: Asynchronous communication tends to slow down compute.
527
+ # To reduce idling from mismatched microbatch times, we launch
528
+ # asynchronous communication at the same time across the
529
+ # pipeline-parallel group.
530
+ if config.param_sync_func is not None:
531
+ param_sync_microbatch_id = microbatch_id + pipeline_parallel_rank
532
+ if param_sync_microbatch_id < num_microbatches and is_first_microbatch_for_model_chunk(param_sync_microbatch_id):
533
+ param_sync_chunk_id = get_model_chunk_id(param_sync_microbatch_id, forward=True) + 1
534
+ if 1 < param_sync_chunk_id < num_model_chunks:
535
+ config.param_sync_func(model[param_sync_chunk_id].parameters())
536
+
537
+ # forward step
538
+ if parallel_state.is_pipeline_first_stage():
539
+ if len(input_tensors[model_chunk_id]) == \
540
+ len(output_tensors[model_chunk_id]):
541
+ input_tensors[model_chunk_id].append(None)
542
+ input_tensor = input_tensors[model_chunk_id][-1]
543
+ output_tensor = forward_step(forward_step_func,
544
+ data_iterator[model_chunk_id],
545
+ model[model_chunk_id],
546
+ num_microbatches,
547
+ input_tensor,
548
+ forward_data_store,
549
+ config,
550
+ collect_non_loss_data,
551
+ checkpoint_activations_microbatch)
552
+ output_tensors[model_chunk_id].append(output_tensor)
553
+
554
+ # if forward-only, no need to save tensors for a backward pass
555
+ if forward_only:
556
+ input_tensors[model_chunk_id].pop()
557
+ output_tensors[model_chunk_id].pop()
558
+
559
+ return output_tensor
560
+
561
+ def backward_step_helper(microbatch_id):
562
+ """Helper method to run backward step with model split into chunks
563
+ (run set_virtual_pipeline_model_parallel_rank() before calling
564
+ backward_step())."""
565
+ model_chunk_id = get_model_chunk_id(microbatch_id, forward=False)
566
+ parallel_state.set_virtual_pipeline_model_parallel_rank(model_chunk_id)
567
+
568
+ # launch grad synchronization (default)
569
+ if config.grad_sync_func is None and is_last_microbatch_for_model_chunk(microbatch_id):
570
+ enable_grad_sync()
571
+ synchronized_model_chunks.add(model_chunk_id)
572
+
573
+ if parallel_state.is_pipeline_last_stage():
574
+ if len(output_tensor_grads[model_chunk_id]) == 0:
575
+ output_tensor_grads[model_chunk_id].append(None)
576
+ input_tensor = input_tensors[model_chunk_id].pop(0)
577
+ output_tensor = output_tensors[model_chunk_id].pop(0)
578
+ output_tensor_grad = output_tensor_grads[model_chunk_id].pop(0)
579
+ input_tensor_grad = \
580
+ backward_step(input_tensor, output_tensor, output_tensor_grad, model_type, config)
581
+
582
+ # launch grad synchronization (custom grad sync)
583
+ # Note: Asynchronous communication tends to slow down compute.
584
+ # To reduce idling from mismatched microbatch times, we launch
585
+ # asynchronous communication at the same time across the
586
+ # pipeline-parallel group.
587
+ if config.grad_sync_func is not None:
588
+ grad_sync_microbatch_id = microbatch_id - pipeline_parallel_rank
589
+ if grad_sync_microbatch_id >= 0 and is_last_microbatch_for_model_chunk(grad_sync_microbatch_id):
590
+ grad_sync_chunk_id = get_model_chunk_id(grad_sync_microbatch_id, forward=False)
591
+ enable_grad_sync()
592
+ config.grad_sync_func(model[grad_sync_chunk_id].parameters())
593
+ synchronized_model_chunks.add(grad_sync_chunk_id)
594
+ disable_grad_sync()
595
+
596
+ return input_tensor_grad
597
+
598
+ # Run warmup forward passes.
599
+ parallel_state.set_virtual_pipeline_model_parallel_rank(0)
600
+ input_tensors[0].append(
601
+ p2p_communication.recv_forward(tensor_shape, config))
602
+
603
+ fwd_wait_handles = None
604
+ bwd_wait_handles = None
605
+
606
+ for k in range(num_warmup_microbatches):
607
+
608
+ if fwd_wait_handles is not None:
609
+ for req in fwd_wait_handles:
610
+ req.wait()
611
+
612
+ # Decide whether to checkpoint all layers' activations of the current micro-batch
613
+ if max_outstanding_backprops is not None:
614
+ checkpoint_activations_microbatch = k % max_outstanding_backprops >= \
615
+ config.num_microbatches_with_partial_activation_checkpoints
616
+ else:
617
+ checkpoint_activations_microbatch = None
618
+
619
+ output_tensor = forward_step_helper(k, checkpoint_activations_microbatch)
620
+
621
+ # Determine if tensor should be received from previous stage.
622
+ next_forward_model_chunk_id = get_model_chunk_id(k+1, forward=True)
623
+ recv_prev = True
624
+ if parallel_state.is_pipeline_first_stage(ignore_virtual=True):
625
+ if next_forward_model_chunk_id == 0:
626
+ recv_prev = False
627
+ if k == (total_num_microbatches - 1):
628
+ recv_prev = False
629
+
630
+ # Don't send tensor downstream if on last stage.
631
+ if parallel_state.is_pipeline_last_stage():
632
+ output_tensor = None
633
+
634
+ # Send and receive tensors as appropriate (send tensors computed
635
+ # in this iteration; receive tensors for next iteration).
636
+ if not config.overlap_p2p_comm:
637
+ if k == (num_warmup_microbatches - 1) and not forward_only and \
638
+ not all_warmup_microbatches:
639
+ input_tensor_grad = None
640
+ recv_next = True
641
+ if parallel_state.is_pipeline_last_stage(ignore_virtual=True):
642
+ recv_next = False
643
+ input_tensor, output_tensor_grad = \
644
+ p2p_communication.send_forward_backward_recv_forward_backward(
645
+ output_tensor, input_tensor_grad,
646
+ recv_prev=recv_prev, recv_next=recv_next,
647
+ tensor_shape=tensor_shape, config=config)
648
+ output_tensor_grads[num_model_chunks-1].append(output_tensor_grad)
649
+ else:
650
+ input_tensor = \
651
+ p2p_communication.send_forward_recv_forward(
652
+ output_tensor, recv_prev=recv_prev,
653
+ tensor_shape=tensor_shape,
654
+ config=config)
655
+ input_tensors[next_forward_model_chunk_id].append(input_tensor)
656
+ else:
657
+ input_tensor, fwd_wait_handles = \
658
+ p2p_communication.send_forward_recv_forward(
659
+ output_tensor, recv_prev=recv_prev,
660
+ tensor_shape=tensor_shape, config=config,
661
+ overlap_p2p_comm=True)
662
+
663
+ if k == (num_warmup_microbatches - 1) and not forward_only and \
664
+ not all_warmup_microbatches:
665
+ input_tensor_grad = None
666
+ recv_next = True
667
+ if parallel_state.is_pipeline_last_stage(ignore_virtual=True):
668
+ recv_next = False
669
+
670
+ output_tensor_grad, bwd_wait_handles = p2p_communication.send_backward_recv_backward(
671
+ input_tensor_grad, recv_next=recv_next,
672
+ tensor_shape=tensor_shape,
673
+ config=config,
674
+ overlap_p2p_comm=True)
675
+
676
+ output_tensor_grads[num_model_chunks-1].append(output_tensor_grad)
677
+ input_tensors[next_forward_model_chunk_id].append(input_tensor)
678
+
679
+ deallocate_output_tensor(output_tensor, config.deallocate_pipeline_outputs)
680
+
681
+ # Run 1F1B in steady state.
682
+ for k in range(num_microbatches_remaining):
683
+ # Forward pass.
684
+ forward_k = k + num_warmup_microbatches
685
+
686
+ # Decide whether to checkpoint all layers' activations of the current micro-batch
687
+ if max_outstanding_backprops is not None:
688
+ checkpoint_activations_microbatch = (
689
+ forward_k % max_outstanding_backprops >= \
690
+ config.num_microbatches_with_partial_activation_checkpoints
691
+ )
692
+ else:
693
+ checkpoint_activations_microbatch = None
694
+
695
+ if config.overlap_p2p_comm:
696
+ if fwd_wait_handles is not None:
697
+ for req in fwd_wait_handles:
698
+ req.wait()
699
+
700
+ deallocate_output_tensor(output_tensor, config.deallocate_pipeline_outputs)
701
+
702
+ output_tensor = forward_step_helper(forward_k, checkpoint_activations_microbatch)
703
+
704
+ # Determine if current stage has anything to send in either direction,
705
+ # otherwise set tensor to None.
706
+ forward_model_chunk_id = get_model_chunk_id(forward_k, forward=True)
707
+ parallel_state.set_virtual_pipeline_model_parallel_rank(forward_model_chunk_id)
708
+
709
+ # Last virtual stage: no activation tensor to send
710
+ if parallel_state.is_pipeline_last_stage():
711
+ output_tensor = None
712
+
713
+ # Determine if peers are sending, and where in data structure to put
714
+ # received tensors.
715
+ recv_prev = True
716
+ if parallel_state.is_pipeline_first_stage(ignore_virtual=True):
717
+ # First stage is ahead of last stage by (pipeline_parallel_size - 1).
718
+ next_forward_model_chunk_id = get_model_chunk_id(
719
+ forward_k - (pipeline_parallel_size - 1), forward=True)
720
+ if next_forward_model_chunk_id == (num_model_chunks - 1):
721
+ recv_prev = False
722
+ next_forward_model_chunk_id += 1
723
+ else:
724
+ next_forward_model_chunk_id = get_model_chunk_id(forward_k + 1,
725
+ forward=True)
726
+
727
+ # If last iteration, don't receive; we already received one extra
728
+ # before the start of the for loop.
729
+ if k == (num_microbatches_remaining - 1):
730
+ recv_prev = False
731
+
732
+ # Send activation tensor to the next stage and receive activation tensor from the
733
+ # previous stage
734
+ input_tensor, fwd_wait_handles = \
735
+ p2p_communication.send_forward_recv_forward(
736
+ output_tensor, recv_prev=recv_prev,
737
+ tensor_shape=tensor_shape,
738
+ config=config,
741
+ overlap_p2p_comm=True)
742
+ # assert fwd_wait_handles is not None
743
+
744
+ if bwd_wait_handles is not None:
745
+ for req in bwd_wait_handles:
746
+ req.wait()
747
+
748
+ # Backward pass.
749
+ backward_k = k
750
+ input_tensor_grad = backward_step_helper(backward_k)
751
+
752
+ backward_model_chunk_id = get_model_chunk_id(backward_k, forward=False)
753
+ parallel_state.set_virtual_pipeline_model_parallel_rank(backward_model_chunk_id)
754
+
755
+ # First virtual stage: no activation gradient tensor to send
756
+ if parallel_state.is_pipeline_first_stage():
757
+ input_tensor_grad = None
758
+
759
+ # Determine if the current virtual stage has an activation gradient tensor to receive
760
+ recv_next = True
761
+ if parallel_state.is_pipeline_last_stage(ignore_virtual=True):
762
+ # Last stage is ahead of first stage by (pipeline_parallel_size - 1).
763
+ next_backward_model_chunk_id = get_model_chunk_id(
764
+ backward_k - (pipeline_parallel_size - 1), forward=False
765
+ )
766
+ if next_backward_model_chunk_id == 0:
767
+ recv_next = False
768
+ next_backward_model_chunk_id -= 1
769
+ else:
770
+ next_backward_model_chunk_id = get_model_chunk_id(
771
+ backward_k + 1, forward=False
772
+ )
773
+
774
+ output_tensor_grad, bwd_wait_handles = p2p_communication.send_backward_recv_backward(
775
+ input_tensor_grad, recv_next=recv_next,
776
+ tensor_shape=tensor_shape,
777
+ config=config,
778
+ overlap_p2p_comm=True)
779
+
780
+ else: # no p2p overlap
781
+ output_tensor = forward_step_helper(forward_k, checkpoint_activations_microbatch)
782
+
783
+ # Backward pass.
784
+ backward_k = k
785
+ input_tensor_grad = backward_step_helper(backward_k)
786
+
787
+ # Send output_tensor and input_tensor_grad, receive input_tensor
788
+ # and output_tensor_grad.
789
+
790
+ # Determine if current stage has anything to send in either direction,
791
+ # otherwise set tensor to None.
792
+ forward_model_chunk_id = get_model_chunk_id(forward_k, forward=True)
793
+ parallel_state.set_virtual_pipeline_model_parallel_rank(forward_model_chunk_id)
794
+ if parallel_state.is_pipeline_last_stage():
795
+ output_tensor = None
796
+
797
+ backward_model_chunk_id = get_model_chunk_id(backward_k, forward=False)
798
+ parallel_state.set_virtual_pipeline_model_parallel_rank(backward_model_chunk_id)
799
+ if parallel_state.is_pipeline_first_stage():
800
+ input_tensor_grad = None
801
+
802
+ # Determine if peers are sending, and where in data structure to put
803
+ # received tensors.
804
+ recv_prev = True
805
+ if parallel_state.is_pipeline_first_stage(ignore_virtual=True):
806
+ # First stage is ahead of last stage by (pipeline_parallel_size - 1).
807
+ next_forward_model_chunk_id = get_model_chunk_id(
808
+ forward_k - (pipeline_parallel_size - 1), forward=True)
809
+ if next_forward_model_chunk_id == (num_model_chunks - 1):
810
+ recv_prev = False
811
+ next_forward_model_chunk_id += 1
812
+ else:
813
+ next_forward_model_chunk_id = get_model_chunk_id(forward_k + 1,
814
+ forward=True)
815
+
816
+ recv_next = True
817
+ if parallel_state.is_pipeline_last_stage(ignore_virtual=True):
818
+ # Last stage is ahead of first stage by (pipeline_parallel_size - 1).
819
+ next_backward_model_chunk_id = get_model_chunk_id(
820
+ backward_k - (pipeline_parallel_size - 1), forward=False)
821
+ if next_backward_model_chunk_id == 0:
822
+ recv_next = False
823
+ next_backward_model_chunk_id -= 1
824
+ else:
825
+ next_backward_model_chunk_id = get_model_chunk_id(backward_k + 1,
826
+ forward=False)
827
+
828
+ # If last iteration, don't receive; we already received one extra
829
+ # before the start of the for loop.
830
+ if k == (num_microbatches_remaining - 1):
831
+ recv_prev = False
832
+
833
+ # Communicate tensors.
834
+ input_tensor, output_tensor_grad = \
835
+ p2p_communication.send_forward_backward_recv_forward_backward(
836
+ output_tensor, input_tensor_grad,
837
+ recv_prev=recv_prev, recv_next=recv_next,
838
+ tensor_shape=tensor_shape, config=config)
839
+ deallocate_output_tensor(output_tensor, config.deallocate_pipeline_outputs)
840
+
841
+ # Put input_tensor and output_tensor_grad in data structures in the
842
+ # right location.
843
+ if recv_prev:
844
+ input_tensors[next_forward_model_chunk_id].append(input_tensor)
845
+ if recv_next:
846
+ output_tensor_grads[next_backward_model_chunk_id].append(
847
+ output_tensor_grad)
848
+
849
+ deallocate_output_tensor(output_tensor, config.deallocate_pipeline_outputs)
850
+
851
+ # Run cooldown backward passes (flush out pipeline).
852
+ if not forward_only:
853
+ if config.overlap_p2p_comm and bwd_wait_handles is not None:
854
+ for wait_handle in bwd_wait_handles:
855
+ wait_handle.wait()
856
+
857
+ if all_warmup_microbatches:
858
+ output_tensor_grads[num_model_chunks-1].append(
859
+ p2p_communication.recv_backward(tensor_shape, config=config))
860
+ for k in range(num_microbatches_remaining, total_num_microbatches):
861
+ input_tensor_grad = backward_step_helper(k)
862
+ next_backward_model_chunk_id = get_model_chunk_id(k+1, forward=False)
863
+ recv_next = True
864
+ if parallel_state.is_pipeline_last_stage(ignore_virtual=True):
865
+ if next_backward_model_chunk_id == (num_model_chunks - 1):
866
+ recv_next = False
867
+ if k == (total_num_microbatches - 1):
868
+ recv_next = False
869
+ output_tensor_grads[next_backward_model_chunk_id].append(
870
+ p2p_communication.send_backward_recv_backward(
871
+ input_tensor_grad, recv_next=recv_next,
872
+ tensor_shape=tensor_shape, config=config))
873
+
874
+ # Launch any remaining grad reductions
875
+ enable_grad_sync()
876
+ if config.grad_sync_func is not None:
877
+ params = []
878
+ for model_chunk_id in range(num_model_chunks):
879
+ if model_chunk_id not in synchronized_model_chunks:
880
+ params.extend(model[model_chunk_id].parameters())
881
+ synchronized_model_chunks.add(model_chunk_id)
882
+ if params:
883
+ config.grad_sync_func(params)
884
+
885
+ return forward_data_store
886
+
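To make the warmup arithmetic and chunk-id mapping above concrete, here is a small stand-alone rehearsal with illustrative values (pipeline_parallel_size = 4, num_model_chunks = 2, num_microbatches = 8, so the partial-warmup branch applies); it re-implements only the pure arithmetic, not the communication:

```python
# Stand-alone rehearsal of the interleaved-schedule bookkeeping above.
pipeline_parallel_size, num_model_chunks, num_microbatches = 4, 2, 8
total_num_microbatches = num_microbatches * num_model_chunks   # 16

def get_model_chunk_id(microbatch_id, forward):
    microbatch_id_in_group = microbatch_id % (pipeline_parallel_size * num_model_chunks)
    model_chunk_id = microbatch_id_in_group // pipeline_parallel_size
    return model_chunk_id if forward else num_model_chunks - model_chunk_id - 1

for rank in range(pipeline_parallel_size):
    warmup = (pipeline_parallel_size - rank - 1) * 2 + (num_model_chunks - 1) * pipeline_parallel_size
    warmup = min(warmup, total_num_microbatches)
    print(f"rank {rank}: warmup={warmup}, steady 1F1B={total_num_microbatches - warmup}")
# rank 0: warmup=10, steady 1F1B=6  ...  rank 3: warmup=4, steady 1F1B=12

print([get_model_chunk_id(k, forward=True) for k in range(total_num_microbatches)])
# [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
```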
887
+ def get_tensor_shapes(*,
888
+ rank: int,
889
+ model_type: ModelType,
890
+ seq_length: int,
891
+ micro_batch_size: int,
892
+ decoder_seq_length: int,
893
+ config):
894
+ # Determine right tensor sizes (based on position of rank with respect to split
895
+ # rank) and model size.
896
+ # Send two tensors if model is T5 and rank is in decoder stage:
897
+ # first tensor is decoder (pre-transpose),
898
+ # second tensor is encoder (post-transpose).
899
+ # If model is T5 and rank is at the boundary:
900
+ # send one tensor (post-transpose from encoder).
901
+ # Otherwise, send one tensor (pre-transpose).
902
+ tensor_shapes = []
903
+
904
+ if config.sequence_parallel:
905
+ seq_length = seq_length // parallel_state.get_tensor_model_parallel_world_size()
906
+ decoder_seq_length = decoder_seq_length // parallel_state.get_tensor_model_parallel_world_size()
907
+
908
+ if model_type == ModelType.encoder_and_decoder:
909
+ if parallel_state.is_pipeline_stage_before_split(rank):
910
+ tensor_shapes.append((seq_length, micro_batch_size, config.hidden_size))
911
+ else:
912
+ tensor_shapes.append((decoder_seq_length, micro_batch_size, config.hidden_size))
913
+ tensor_shapes.append((seq_length, micro_batch_size, config.hidden_size))
914
+ else:
915
+ tensor_shapes.append((seq_length, micro_batch_size, config.hidden_size))
916
+ return tensor_shapes
917
+
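As a worked illustration of the shapes returned here (illustrative values: seq_length=512, decoder_seq_length=128, micro_batch_size=4, hidden_size=1024, no sequence parallelism): an encoder_or_decoder model always gets [(512, 4, 1024)]; for an encoder_and_decoder model, ranks before the pipeline split get [(512, 4, 1024)], while ranks at or after the split get [(128, 4, 1024), (512, 4, 1024)], i.e. the decoder activation plus the forwarded encoder hidden state. With sequence parallelism enabled, both sequence lengths are first divided by the tensor-model-parallel world size.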
918
+
919
+
920
+ def recv_forward(tensor_shapes, config):
921
+ input_tensors = []
922
+ for tensor_shape in tensor_shapes:
923
+ if tensor_shape is None:
924
+ input_tensors.append(None)
925
+ else:
926
+ input_tensors.append(p2p_communication.recv_forward(tensor_shape, config))
927
+ return input_tensors
928
+
929
+
930
+ def recv_backward(tensor_shapes, config):
931
+ output_tensor_grads = []
932
+ for tensor_shape in tensor_shapes:
933
+ if tensor_shape is None:
934
+ output_tensor_grads.append(None)
935
+ else:
936
+ output_tensor_grads.append(p2p_communication.recv_backward(tensor_shape, config))
937
+ return output_tensor_grads
938
+
939
+
940
+ def send_forward(output_tensors, tensor_shapes, config):
941
+ if not isinstance(output_tensors, list):
942
+ output_tensors = [output_tensors]
943
+ for (output_tensor, tensor_shape) in zip(output_tensors, tensor_shapes):
944
+ if tensor_shape is None:
945
+ continue
946
+ p2p_communication.send_forward(output_tensor, config)
947
+
948
+
949
+ def send_backward(input_tensor_grads, tensor_shapes, config):
950
+ if not isinstance(input_tensor_grads, list):
951
+ input_tensor_grads = [input_tensor_grads]
952
+ for (input_tensor_grad, tensor_shape) in zip(input_tensor_grads, tensor_shapes):
953
+ if tensor_shape is None:
954
+ continue
955
+ p2p_communication.send_backward(input_tensor_grad, config)
956
+
957
+
958
+ def send_forward_recv_backward(output_tensors, tensor_shapes, config):
959
+ if not isinstance(output_tensors, list):
960
+ output_tensors = [output_tensors]
961
+ output_tensor_grads = []
962
+ for (output_tensor, tensor_shape) in zip(output_tensors, tensor_shapes):
963
+ if tensor_shape is None:
964
+ output_tensor_grads.append(None)
965
+ continue
966
+ output_tensor_grad = p2p_communication.send_forward_recv_backward(
967
+ output_tensor, tensor_shape, config)
968
+ output_tensor_grads.append(output_tensor_grad)
969
+ return output_tensor_grads
970
+
971
+
972
+ def send_backward_recv_forward(input_tensor_grads, tensor_shapes, config):
973
+ if not isinstance(input_tensor_grads, list):
974
+ input_tensor_grads = [input_tensor_grads]
975
+ input_tensors = []
976
+ for (input_tensor_grad, tensor_shape) in zip(input_tensor_grads, tensor_shapes):
977
+ if tensor_shape is None:
978
+ input_tensors.append(None)
979
+ continue
980
+ input_tensor = p2p_communication.send_backward_recv_forward(
981
+ input_tensor_grad, tensor_shape, config)
982
+ input_tensors.append(input_tensor)
983
+ return input_tensors
984
+
985
+
986
+ def forward_backward_pipelining_without_interleaving(*,
987
+ forward_step_func,
988
+ data_iterator: Union[Iterator, List[Iterator]],
989
+ model: Union[torch.nn.Module, List[torch.nn.Module]],
990
+ num_microbatches: int,
991
+ seq_length: int,
992
+ micro_batch_size: int,
993
+ decoder_seq_length: int = None,
994
+ forward_only: bool = False,
995
+ collect_non_loss_data: bool = False,
996
+ ):
997
+ """Run non-interleaved 1F1B schedule, with communication between pipeline
998
+ stages.
999
+
1000
+ Returns a list with the per-microbatch loss_func outputs if on the last pipeline stage, an empty list otherwise."""
1001
+
1002
+ if isinstance(model, list):
1003
+ assert len(model) == 1, \
1004
+ "non-interleaved pipeline parallelism does not support model chunking"
1005
+ model = model[0]
1006
+ if isinstance(data_iterator, list):
1007
+ assert len(data_iterator) == 1, \
1008
+ "non-pipeline-parallel schedule does not support model chunking"
1009
+ data_iterator = data_iterator[0]
1010
+
1011
+ config = get_model_config(model)
1012
+ if config.overlap_p2p_comm:
1013
+ raise ValueError("Non-interleaved pipeline parallelism does not support overlapping p2p communication")
1014
+
1015
+ # Disable async grad reductions
1016
+ no_sync_func = config.no_sync_func
1017
+ if no_sync_func is None and isinstance(model, torchDDP):
1018
+ no_sync_func = model.no_sync
1019
+ if no_sync_func is None:
1020
+ no_sync_func = contextlib.nullcontext
1021
+ no_sync_context = None
1022
+ def disable_grad_sync():
1023
+ """Disable asynchronous grad reductions"""
1024
+ nonlocal no_sync_context
1025
+ if no_sync_context is None:
1026
+ no_sync_context = no_sync_func()
1027
+ no_sync_context.__enter__()
1028
+ def enable_grad_sync():
1029
+ """Enable asynchronous grad reductions"""
1030
+ nonlocal no_sync_context
1031
+ if no_sync_context is not None:
1032
+ no_sync_context.__exit__(None, None, None)
1033
+ no_sync_context = None
1034
+ disable_grad_sync()
1035
+
1036
+ # Compute number of warmup microbatches.
1037
+ num_warmup_microbatches = \
1038
+ (parallel_state.get_pipeline_model_parallel_world_size() -
1039
+ parallel_state.get_pipeline_model_parallel_rank() - 1)
1040
+ num_warmup_microbatches = min(
1041
+ num_warmup_microbatches,
1042
+ num_microbatches)
1043
+ num_microbatches_remaining = \
1044
+ num_microbatches - num_warmup_microbatches
1045
+
1046
+ # Checkpoint the activations of partial Transformer layers in a number of micro-batches
1047
+ # within the maximum outstanding micro-batch backpropagations.
1048
+ # Micro-batches with the ids less than 'num_microbatches_with_partial_activation_checkpoints'
1049
+ # checkpoint partial Transformer layers (or skip checkpointing) and
1050
+ # the rest of micro-batches within a window of micro-batches checkpoint
1051
+ # all Transformer layers. The window of micro-batches is set by the maximum
1052
+ # outstanding backpropagations and becomes smaller at later pipeline stages.
1053
+ # Please refer to Appendix C in https://arxiv.org/pdf/2205.05198.pdf
1054
+ max_outstanding_backprops = None
1055
+ if config.num_microbatches_with_partial_activation_checkpoints is not None:
1056
+ max_outstanding_backprops = num_warmup_microbatches + 1
1057
+
1058
+ model_type = get_model_type(model)
1059
+
1060
+ rank = parallel_state.get_pipeline_model_parallel_rank()
1061
+ recv_tensor_shapes = get_tensor_shapes(rank=rank-1,
1062
+ model_type=model_type,
1063
+ seq_length=seq_length,
1064
+ micro_batch_size=micro_batch_size,
1065
+ decoder_seq_length=decoder_seq_length,
1066
+ config=config)
1067
+ send_tensor_shapes = get_tensor_shapes(rank=rank,
1068
+ model_type=model_type,
1069
+ seq_length=seq_length,
1070
+ micro_batch_size=micro_batch_size,
1071
+ decoder_seq_length=decoder_seq_length,
1072
+ config=config)
1073
+
1074
+ # Input, output tensors only need to be saved when doing backward passes
1075
+ input_tensors = None
1076
+ output_tensors = None
1077
+ if not forward_only:
1078
+ input_tensors = []
1079
+ output_tensors = []
1080
+ forward_data_store = []
1081
+
1082
+ # Run warmup forward passes.
1083
+ for i in range(num_warmup_microbatches):
1084
+ # Decide whether to checkpoint all layers' activations of the current micro-batch
1085
+ if max_outstanding_backprops is not None:
1086
+ checkpoint_activations_microbatch = (
1087
+ i % max_outstanding_backprops >= config.num_microbatches_with_partial_activation_checkpoints
1088
+ )
1089
+ else:
1090
+ checkpoint_activations_microbatch = None
1091
+
1092
+ input_tensor = recv_forward(recv_tensor_shapes, config)
1093
+ output_tensor = forward_step(forward_step_func, data_iterator, model, num_microbatches,
1094
+ input_tensor, forward_data_store, config, collect_non_loss_data,
1095
+ checkpoint_activations_microbatch)
1096
+ send_forward(output_tensor, send_tensor_shapes, config)
1097
+
1098
+ if not forward_only:
1099
+ input_tensors.append(input_tensor)
1100
+ output_tensors.append(output_tensor)
1101
+ deallocate_output_tensor(output_tensor[0], config.deallocate_pipeline_outputs)
1102
+
1103
+ # Before running 1F1B, need to receive first forward tensor.
1104
+ # If all microbatches are run in warmup / cooldown phase, then no need to
1105
+ # receive this tensor here.
1106
+ if num_microbatches_remaining > 0:
1107
+ input_tensor = recv_forward(recv_tensor_shapes, config)
1108
+
1109
+ # Run 1F1B in steady state.
1110
+ for i in range(num_microbatches_remaining):
1111
+ last_iteration = (i == (num_microbatches_remaining - 1))
1112
+
1113
+ # Decide whether to checkpoint all layers' activations of the current micro-batch
1114
+ if max_outstanding_backprops is not None:
1115
+ checkpoint_activations_microbatch = (
1116
+ ((i+num_warmup_microbatches) % max_outstanding_backprops) >= \
1117
+ config.num_microbatches_with_partial_activation_checkpoints
1118
+ )
1119
+ else:
1120
+ checkpoint_activations_microbatch = None
1121
+
1122
+ output_tensor = forward_step(forward_step_func, data_iterator, model, num_microbatches,
1123
+ input_tensor, forward_data_store, config, collect_non_loss_data,
1124
+ checkpoint_activations_microbatch)
1125
+
1126
+ if forward_only:
1127
+ send_forward(output_tensor, send_tensor_shapes, config)
1128
+
1129
+ if not last_iteration:
1130
+ input_tensor = recv_forward(recv_tensor_shapes, config)
1131
+
1132
+ else:
1133
+ output_tensor_grad = \
1134
+ send_forward_recv_backward(output_tensor, send_tensor_shapes, config)
1135
+
1136
+ # Add input_tensor and output_tensor to end of list.
1137
+ input_tensors.append(input_tensor)
1138
+ output_tensors.append(output_tensor)
1139
+ deallocate_output_tensor(output_tensor[0], config.deallocate_pipeline_outputs)
1140
+
1141
+ # Pop input_tensor and output_tensor from the start of the list for
1142
+ # the backward pass.
1143
+ input_tensor = input_tensors.pop(0)
1144
+ output_tensor = output_tensors.pop(0)
1145
+
1146
+ input_tensor_grad = \
1147
+ backward_step(input_tensor, output_tensor, output_tensor_grad, model_type, config, model)
1148
+
1149
+ if last_iteration:
1150
+ input_tensor = None
1151
+ send_backward(input_tensor_grad, recv_tensor_shapes, config)
1152
+ else:
1153
+ input_tensor = \
1154
+ send_backward_recv_forward(input_tensor_grad, recv_tensor_shapes, config)
1155
+
1156
+ # Run cooldown backward passes.
1157
+ if not forward_only:
1158
+ for i in range(num_warmup_microbatches):
1159
+
1160
+ # Enable async grad reduction in the last backward pass
1161
+ # Note: If grad sync function is provided, only enable
1162
+ # async grad reduction in first pipeline stage. Other
1163
+ # pipeline stages do grad reduction during pipeline
1164
+ # bubble.
1165
+ if i == num_warmup_microbatches-1:
1166
+ if config.grad_sync_func is None or rank == 0:
1167
+ enable_grad_sync()
1168
+
1169
+ input_tensor = input_tensors.pop(0)
1170
+ output_tensor = output_tensors.pop(0)
1171
+
1172
+ output_tensor_grad = recv_backward(send_tensor_shapes, config)
1173
+
1174
+ input_tensor_grad = \
1175
+ backward_step(input_tensor, output_tensor, output_tensor_grad, model_type, config, model)
1176
+
1177
+ send_backward(input_tensor_grad, recv_tensor_shapes, config)
1178
+
1179
+ # Launch any remaining grad reductions
1180
+ if no_sync_context is not None:
1181
+ enable_grad_sync()
1182
+ if config.grad_sync_func is not None:
1183
+ config.grad_sync_func(model.parameters())
1184
+
1185
+ return forward_data_store
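For reference, the warmup/steady-state/cooldown split computed near the top of this function, worked through with illustrative values (pipeline world size 4, 8 microbatches):

```python
# Worked example of the non-interleaved 1F1B split above (illustrative values).
pipeline_parallel_size, num_microbatches = 4, 8
for rank in range(pipeline_parallel_size):
    warmup = min(pipeline_parallel_size - rank - 1, num_microbatches)
    print(f"rank {rank}: warmup={warmup}, 1F1B={num_microbatches - warmup}, cooldown={warmup}")
# rank 0: warmup=3, 1F1B=5, cooldown=3   ...   rank 3: warmup=0, 1F1B=8, cooldown=0
```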
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/sequence_parallel/__init__.py ADDED
@@ -0,0 +1 @@
 
 
1
+ from .cross_entropy import vocab_sequence_parallel_cross_entropy
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/sequence_parallel/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (278 Bytes). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/sequence_parallel/__pycache__/cross_entropy.cpython-310.pyc ADDED
Binary file (2.13 kB). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/sequence_parallel/cross_entropy.py ADDED
@@ -0,0 +1,56 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import torch
2
+ from packaging import version
3
+
4
+ from megatron.core.parallel_state import (
5
+ get_sequence_parallel_group,
6
+ get_sequence_parallel_rank,
7
+ get_sequence_parallel_world_size
8
+ )
9
+
10
+ class _VocabSequenceParallelCrossEntropy(torch.autograd.Function):
11
+
12
+ @staticmethod
13
+ def forward(ctx, vocab_seq_parallel_logits, target, label_smoothing=0.0):
14
+ # vocab_seq_parallel_logits: [S/P, B, V]
15
+ # target: [S/P, B]
16
+ # return: [S, B]
17
+
18
+ # Need softmax for backward
19
+ softmax = torch.nn.functional.softmax(vocab_seq_parallel_logits, dim=-1)
20
+ ctx.vocab_size = vocab_seq_parallel_logits.size(2)
21
+ loss = torch.nn.functional.nll_loss(softmax.log().view(-1, ctx.vocab_size), target.view(-1), reduction='none')
22
+
23
+ ctx.seqlen = vocab_seq_parallel_logits.size(0) * get_sequence_parallel_world_size()
24
+ batch_size = vocab_seq_parallel_logits.size(1)
25
+
26
+ loss_all = torch.empty(ctx.seqlen, batch_size, dtype=vocab_seq_parallel_logits.dtype, device=vocab_seq_parallel_logits.device)
27
+ if version.parse(torch.__version__) >= version.parse('1.13'):
28
+ torch.distributed.all_gather_into_tensor(loss_all, loss, group=get_sequence_parallel_group())
29
+ else:
30
+ torch.distributed._all_gather_base(loss_all, loss, group=get_sequence_parallel_group())
31
+
32
+ ctx.save_for_backward(softmax, target)
33
+
34
+ return loss_all
35
+
36
+ @staticmethod
37
+ def backward(ctx, grad_output):
38
+ softmax, target = ctx.saved_tensors
39
+
40
+ step_seqlen = ctx.seqlen // get_sequence_parallel_world_size()
41
+ sp_rank = get_sequence_parallel_rank()
42
+ grad_output_part = grad_output[step_seqlen*sp_rank:step_seqlen*(sp_rank + 1), :]
43
+
44
+ grad_input = softmax
45
+ grad_2d = grad_input.view(-1, ctx.vocab_size)
46
+ arange_1d = torch.arange(start=0, end=grad_2d.size()[0],
47
+ device=grad_2d.device)
48
+
49
+ grad_2d[arange_1d, target.view(-1)] -= 1
50
+ grad_input.mul_(grad_output_part.unsqueeze(dim=-1))
51
+
52
+ return grad_input, None, None
53
+
54
+
55
+ def vocab_sequence_parallel_cross_entropy(vocab_parallel_logits, target, label_smoothing=0.0):
56
+ return _VocabSequenceParallelCrossEntropy.apply(vocab_parallel_logits, target, label_smoothing)
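A hedged usage sketch for the helper above; it assumes megatron.core.parallel_state has been initialized with a sequence-parallel group, and uses toy shapes matching the comments in forward() (logits [S/P, B, V], target [S/P, B], returned loss [S, B]).

```python
# Hypothetical usage; requires an initialized sequence-parallel process group.
import torch

S_local, B, V = 256, 4, 32000          # per-rank sequence length, micro batch, vocab
logits = torch.randn(S_local, B, V, device="cuda", requires_grad=True)
target = torch.randint(0, V, (S_local, B), device="cuda")

loss_all = vocab_sequence_parallel_cross_entropy(logits, target)   # [S, B]
loss_all.mean().backward()
```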
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (309 Bytes). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/__pycache__/transformer_config.cpython-310.pyc ADDED
Binary file (10.6 kB). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/enums.py ADDED
@@ -0,0 +1,25 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2
+
3
+ import enum
4
+
5
+
6
+ # can we get rid of this?
7
+ # it's being used in pipeline schedules
8
+ class ModelType(enum.Enum):
9
+ encoder_or_decoder = 1
10
+ encoder_and_decoder = 2
11
+
12
+
13
+ # class LayerType(enum.Enum):
14
+ # encoder = 1
15
+ # decoder = 2
16
+
17
+
18
+ class AttnType(enum.Enum):
19
+ self_attn = 1
20
+ cross_attn = 2
21
+
22
+
23
+ class AttnMaskType(enum.Enum):
24
+ padding = 1
25
+ causal = 2
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/module.py ADDED
@@ -0,0 +1,118 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2
+
3
+ """Megatron Module"""
4
+
5
+ import torch
6
+ from torch.autograd import Variable
7
+ from torch.nn.parameter import Parameter
8
+
9
+ from megatron.core import parallel_state, tensor_parallel
10
+ from megatron.core.transformer.transformer_config import TransformerConfig
11
+
12
+
13
+ _FLOAT_TYPES = (torch.FloatTensor, torch.cuda.FloatTensor)
14
+ _HALF_TYPES = (torch.HalfTensor, torch.cuda.HalfTensor)
15
+ _BF16_TYPES = (torch.BFloat16Tensor, torch.cuda.BFloat16Tensor)
16
+
17
+
18
+ def param_is_not_shared(param):
19
+ return not hasattr(param, 'shared') or not param.shared
20
+
21
+
22
+ class MegatronModule(torch.nn.Module):
23
+ """Megatron specific extensions of torch Module with support
24
+ for pipelining."""
25
+
26
+ # def __init__(self, config: TransformerConfig, share_word_embeddings=True):
27
+ def __init__(self, config: TransformerConfig):
28
+ super().__init__()
29
+ self.config = config
30
+
31
+ def state_dict_for_save_checkpoint(self, prefix='', keep_vars=False):
32
+ """Use this function to override the state dict for
33
+ saving checkpoints."""
34
+ return self.state_dict(prefix=prefix, keep_vars=keep_vars)
35
+
36
+
37
+ def conversion_helper(val, conversion):
38
+ """Apply conversion to val. Recursively apply conversion if `val`
39
+ is a nested tuple/list structure."""
40
+ if not isinstance(val, (tuple, list)):
41
+ return conversion(val)
42
+ rtn = [conversion_helper(v, conversion) for v in val]
43
+ if isinstance(val, tuple):
44
+ rtn = tuple(rtn)
45
+ return rtn
46
+
47
+
48
+ def fp32_to_float16(val, float16_convertor):
49
+ """Convert fp32 `val` to fp16/bf16"""
50
+
51
+ def half_conversion(val):
52
+ val_typecheck = val
53
+ if isinstance(val_typecheck, (Parameter, Variable)):
54
+ val_typecheck = val.data
55
+ if isinstance(val_typecheck, _FLOAT_TYPES):
56
+ val = float16_convertor(val)
57
+ return val
58
+
59
+ return conversion_helper(val, half_conversion)
60
+
61
+
62
+ def float16_to_fp32(val):
63
+ """Convert fp16/bf16 `val` to fp32"""
64
+
65
+ def float_conversion(val):
66
+ val_typecheck = val
67
+ if isinstance(val_typecheck, (Parameter, Variable)):
68
+ val_typecheck = val.data
69
+ if isinstance(val_typecheck, (_BF16_TYPES, _HALF_TYPES)):
70
+ val = val.float()
71
+ return val
72
+
73
+ return conversion_helper(val, float_conversion)
74
+
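A small self-contained illustration of the recursive conversion helpers above (toy CPU tensors only):

```python
# Toy round-trip through the conversion helpers defined above.
import torch

nested = (torch.randn(2, 2), [torch.randn(3), torch.randn(1)])
halved = fp32_to_float16(nested, lambda t: t.half())
restored = float16_to_fp32(halved)
print(halved[0].dtype, restored[0].dtype)   # torch.float16 torch.float32
```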
75
+
76
+ class Float16Module(MegatronModule):
77
+ def __init__(self, config: TransformerConfig, module: torch.nn.Module):
78
+ super(Float16Module, self).__init__(config)
79
+ self.config = config
80
+ self.fp16 = config.fp16
81
+ self.bf16 = config.bf16
82
+
83
+ if self.fp16:
84
+ self.add_module('module', module.half())
85
+
86
+ def float16_convertor(val):
87
+ return val.half()
88
+
89
+ elif self.bf16:
90
+ self.add_module('module', module.bfloat16())
91
+
92
+ def float16_convertor(val):
93
+ return val.bfloat16()
94
+
95
+ else:
96
+ raise Exception('Either config.fp16 or config.bf16 should be True.')
97
+
98
+ self.float16_convertor = float16_convertor
99
+
100
+ def set_input_tensor(self, input_tensor):
101
+ return self.module.set_input_tensor(input_tensor)
102
+
103
+ def forward(self, *inputs, **kwargs):
104
+ if parallel_state.is_pipeline_first_stage():
105
+ inputs = fp32_to_float16(inputs, self.float16_convertor)
106
+ outputs = self.module(*inputs, **kwargs)
107
+ if parallel_state.is_pipeline_last_stage():
108
+ outputs = float16_to_fp32(outputs)
109
+ return outputs
110
+
111
+ def state_dict(self, destination=None, prefix='', keep_vars=False):
112
+ return self.module.state_dict(prefix=prefix, keep_vars=keep_vars)
113
+
114
+ def state_dict_for_save_checkpoint(self, prefix='', keep_vars=False):
115
+ return self.module.state_dict_for_save_checkpoint(prefix=prefix, keep_vars=keep_vars)
116
+
117
+ def load_state_dict(self, state_dict, strict=True):
118
+ self.module.load_state_dict(state_dict, strict=strict)
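A hedged sketch of how Float16Module is meant to wrap a model; `MyGPTModel` is a placeholder for any MegatronModule that implements set_input_tensor(), parallel_state is assumed to be initialized, and the config is assumed to have bf16=True.

```python
# Hypothetical wrapping sketch; not a complete training setup.
model = MyGPTModel(config)              # placeholder MegatronModule
model = Float16Module(config, model)    # parameters cast to bf16 (config.bf16=True assumed)
outputs = model(tokens, attention_mask) # fp32 in on first stage, fp32 out on last stage
```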
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/transformer_block.py ADDED
@@ -0,0 +1,222 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2
+
3
+ from contextlib import nullcontext
4
+ import torch
5
+
6
+ from megatron.core import parallel_state, tensor_parallel
7
+
8
+ from megatron.core.transformer.module import MegatronModule
9
+ from megatron.core.transformer.transformer_config import TransformerConfig
10
+ from megatron.core.transformer.enums import AttnMaskType
11
+ from megatron.core.fusions.fused_layer_norm import FusedLayerNorm
12
+ from megatron.core.transformer.transformer_layer import TransformerLayer
13
+ from megatron.core.utils import make_viewless_tensor
14
+
15
+
16
+ class TransformerBlock(MegatronModule):
17
+ """Transformer class."""
18
+
19
+ def __init__(
20
+ self,
21
+ config: TransformerConfig,
22
+ self_attn_mask_type=AttnMaskType.padding,
23
+ post_layer_norm=True,
24
+ pre_process=True,
25
+ post_process=True,
26
+ ):
27
+ super().__init__(config=config)
28
+
29
+ self.config: TransformerConfig = config
30
+
31
+ self.self_attn_mask_type = self_attn_mask_type
32
+ self.post_layer_norm = post_layer_norm
33
+ self.pre_process = pre_process
34
+ self.post_process = post_process
35
+
36
+ # required for pipeline parallel schedules
37
+ self.input_tensor = None
38
+
39
+ self.checkpoint_core_attention = self.config.recompute_granularity == 'selective'
40
+
41
+ # TODO: Maybe we can create a build_transformer_block method here instead
42
+
43
+ self.num_layers_per_pipeline_rank = (
44
+ self.config.num_layers // parallel_state.get_pipeline_model_parallel_world_size()
45
+ )
46
+
47
+ self._build_layers()
48
+
49
+ def _build_layers(self):
50
+ # Transformer layers.
51
+ # @jcasper can we improve how we deal with layer_number?
52
+ # currently it's only used in CoreAttention?
53
+ # if self.apply_query_key_layer_scaling:
54
+ # coeff = self.layer_number
55
+ # self.norm_factor *= coeff
56
+ def build_layer(layer_number):
57
+ return TransformerLayer(
58
+ config=self.config, layer_number=layer_number, self_attn_mask_type=self.self_attn_mask_type,
59
+ )
60
+
61
+ pipeline_rank = parallel_state.get_pipeline_model_parallel_rank()
62
+
63
+ if parallel_state.get_virtual_pipeline_model_parallel_world_size() is not None:
64
+ # Number of layers in each model chunk is the number of layers in the stage,
65
+ # divided by the number of model chunks in a stage.
66
+ # With 8 layers, 2 stages, and 4 model chunks, we want an assignment of
67
+ # layers to stages like (each list is a model chunk):
68
+ # Stage 0: [0] [2] [4] [6]
69
+ # Stage 1: [1] [3] [5] [7]
70
+ # With 8 layers, 2 stages, and 2 virtual stages, we want an assignment of
71
+ # layers to stages like (each list is a model chunk):
72
+ # Stage 0: [0, 1] [4, 5]
73
+ # Stage 1: [2, 3] [6, 7]
74
+
75
+ vp_rank = parallel_state.get_virtual_pipeline_model_parallel_rank()
76
+ vp_size = parallel_state.get_virtual_pipeline_model_parallel_world_size()
77
+
78
+ total_num_layers = self.config.num_layers
79
+ num_layers_per_virtual_rank = self.num_layers_per_pipeline_rank // vp_size
80
+ total_virtual_chunks = total_num_layers / vp_size
81
+ offset = vp_rank * total_virtual_chunks + (pipeline_rank * num_layers_per_virtual_rank)
82
+
83
+ self.layers = torch.nn.ModuleList(
84
+ [build_layer(i + 1 + offset) for i in range(num_layers_per_virtual_rank)]
85
+ )
86
+ else:
87
+ # Each stage gets a contiguous set of layers.
88
+ if parallel_state.get_pipeline_model_parallel_world_size() > 1:
89
+ offset = pipeline_rank * self.num_layers_per_pipeline_rank
90
+ else:
91
+ offset = 0
92
+
93
+ # @jcasper why is layer_number using 1 index?
94
+ self.layers = torch.nn.ModuleList(
95
+ [build_layer(i + 1 + offset) for i in range(self.num_layers_per_pipeline_rank)]
96
+ )
97
+
98
+ # # TODO: add back standalone_embedding_stage
99
+ # if self.num_layers == 0:
100
+ # # When a standalone embedding stage is used (e.g.,
101
+ # # args.standalone_embedding_stage == True), virtual pipeline ranks
102
+ # # on pipeline rank 0 will have zero transformer layers assigned to
103
+ # # them. This results in the model's input and output tensors to be
104
+ # # the same, which will cause failure for certain output tensor
105
+ # # optimizations (e.g., pipeline output deallocation). To remedy
106
+ # # this, we assign a 'no-op' layer on these ranks, which will
107
+ # # disconnect the input tensor from the output tensor.
108
+ # self.num_layers = 1
109
+ # self.layers = torch.nn.ModuleList([NoopTransformerLayer(1)])
110
+ # else:
111
+ # self.layers = torch.nn.ModuleList([build_layer(i + 1 + offset) for i in range(self.num_layers)])
112
+
113
+ if self.post_process and self.post_layer_norm:
114
+ # Final layer norm before output.
115
+ self.final_layernorm = FusedLayerNorm(
116
+ hidden_size=self.config.hidden_size,
117
+ eps=self.config.layernorm_epsilon,
118
+ persist_layer_norm=self.config.persist_layer_norm,
119
+ sequence_parallel=self.config.sequence_parallel,
120
+ zero_centered_gamma=self.config.layernorm_zero_centered_gamma,
121
+ )
122
+
123
+     def _get_layer(self, layer_number):
+         return self.layers[layer_number]
+
+     def _checkpointed_forward(self, hidden_states, attention_mask):
+         """Forward method with activation checkpointing."""
+
+         def custom(start, end):
+             def custom_forward(*args, **kwargs):
+                 x_, *args = args
+                 for index in range(start, end):
+                     layer = self._get_layer(index)
+                     x_ = layer(x_, *args, **kwargs)
+                 return x_
+
+             return custom_forward
+
+         if self.config.recompute_method == 'uniform':
+             # Uniformly divide the total number of Transformer layers and checkpoint
+             # the input activation of each divided chunk.
+             # A method to further reduce memory usage by reducing the number of checkpoints.
+             l = 0
+             while l < self.num_layers_per_pipeline_rank:
+                 hidden_states = tensor_parallel.checkpoint(
+                     custom(l, l + self.config.recompute_num_layers),
+                     self.config.distribute_saved_activations,
+                     hidden_states,
+                     attention_mask,
+                 )
+
+                 l += self.config.recompute_num_layers
+
+         elif self.config.recompute_method == 'block':
+             # Checkpoint the input activation of only a set number of individual
+             # Transformer layers and skip the rest.
+             # A method to fully use the device memory by removing redundant re-computation.
+             for l in range(self.num_layers_per_pipeline_rank):
+                 if l < self.config.recompute_num_layers:
+                     hidden_states = tensor_parallel.checkpoint(
+                         custom(l, l + 1), self.config.distribute_saved_activations, hidden_states, attention_mask,
+                     )
+                 else:
+                     hidden_states = custom(l, l + 1)(hidden_states, attention_mask)
+         else:
+             raise ValueError("Invalid activation recompute method.")
+
+         return hidden_states
+
+     def set_input_tensor(self, input_tensor):
+         """Set input tensor to be used instead of forward()'s input.
+
+         When doing pipeline parallelism the input from the previous
+         stage comes from communication, not from the input, so the
+         model's forward_step_func won't have it. This function is thus
+         used by internal code to bypass the input provided by the
+         forward_step_func"""
+         self.input_tensor = input_tensor
+
+     def forward(self, hidden_states, attention_mask, inference_params=None):
+         # hidden_states (float): [s, b, h]
+         # attention_mask (bool): [1, 1, s, s]
+
+         if not self.pre_process:
+             # See set_input_tensor()
+             hidden_states = self.input_tensor
+
+         # Viewless tensor.
+         # - We only need to create a viewless tensor in the case of micro batch
+         #   size (mbs) == 1, since in this case, 'hidden_states.transpose()'
+         #   above creates a view tensor, and '.contiguous()' is a pass-through.
+         #   For mbs >= 2, '.contiguous()' creates a new tensor, eliminating
+         #   the need to make it viewless.
+         #
+         #   However, we don't explicitly check mbs == 1 here because
+         #   make_viewless_tensor() has negligible overhead when its input
+         #   is already viewless.
+         #
+         # - For the 'else' case above, calling make_viewless_tensor() here is
+         #   likely redundant, since p2p_communication.py (likely originator)
+         #   already creates viewless tensors. That said, make_viewless_tensor()
+         #   is called here to be future-proof and corner-case-proof.
+         hidden_states = make_viewless_tensor(inp=hidden_states, requires_grad=True, keep_graph=True,)
+
+         if self.config.sequence_parallel:
+             rng_context = tensor_parallel.get_cuda_rng_tracker().fork()
+         else:
+             rng_context = nullcontext()
+
+         with rng_context:
+             # Forward pass.
+             if self.config.recompute_granularity == 'full':
+                 hidden_states = self._checkpointed_forward(hidden_states=hidden_states, attention_mask=attention_mask)
+             else:
+                 for layer in self.layers:
+                     hidden_states = layer(hidden_states=hidden_states, attention_mask=attention_mask)
+
+         # Final layer norm.
+         if self.post_process and self.post_layer_norm:
+             hidden_states = self.final_layernorm(hidden_states)
+
+         return hidden_states
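The interleaved layer-to-stage assignment documented in the `_build_layers` comments above can be sanity-checked with plain arithmetic. The sketch below is editorial and not part of the commit; it mirrors the offset formula (assuming integer division) with a hypothetical `layer_assignment` helper and reproduces both 8-layer examples from the comments. Note that `build_layer(i + 1 + offset)` uses 1-based layer numbers, while the printed indices here are 0-based.

```python
# Standalone sketch (not in the repository) of the offset arithmetic used in
# _build_layers for interleaved (virtual) pipeline parallelism.
def layer_assignment(total_num_layers, pp_size, vp_size):
    layers_per_pp_rank = total_num_layers // pp_size
    layers_per_virtual_rank = layers_per_pp_rank // vp_size
    total_virtual_chunks = total_num_layers // vp_size  # integer division assumed
    assignment = {}
    for pp_rank in range(pp_size):
        chunks = []
        for vp_rank in range(vp_size):
            offset = vp_rank * total_virtual_chunks + pp_rank * layers_per_virtual_rank
            chunks.append(list(range(offset, offset + layers_per_virtual_rank)))
        assignment[pp_rank] = chunks
    return assignment

# 8 layers, 2 stages, 4 model chunks: {0: [[0], [2], [4], [6]], 1: [[1], [3], [5], [7]]}
print(layer_assignment(8, pp_size=2, vp_size=4))
# 8 layers, 2 stages, 2 model chunks: {0: [[0, 1], [4, 5]], 1: [[2, 3], [6, 7]]}
print(layer_assignment(8, pp_size=2, vp_size=2))
```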
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/core/transformer/utils.py ADDED
@@ -0,0 +1,41 @@
+ # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+
+ """Utilities for transformer layers."""
+
+ import torch
+
+ from megatron import get_args
+
+ from deepspeed.runtime.zero import GatheredParameters
+
+ def attention_mask_func(attention_scores, attention_mask):
+     attention_scores.masked_fill_(attention_mask, -10000.0)
+     return attention_scores
+
+
+ def get_linear_layer(rows, columns, init_method, gather_params_on_init=False):
+     """Simple linear layer with weight initialization."""
+     layer = torch.nn.Linear(rows, columns)
+     if get_args().perform_initialization:
+         with GatheredParameters(layer.weight, modifier_rank=0, enable=gather_params_on_init):
+             init_method(layer.weight)
+     with torch.no_grad():
+         with GatheredParameters(layer.weight, modifier_rank=0, enable=gather_params_on_init):
+             layer.bias.zero_()
+     return layer
+
+
+ @torch.jit.script
+ def gelu_impl(x):
+     """OpenAI's gelu implementation."""
+     return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * x * (1.0 + 0.044715 * x * x)))
+
+
+ def openai_gelu(x):
+     return gelu_impl(x)
+
+
+ # This is actually Python equivalent of torch.nn.functional.gelu(), also with type hints for ONNX exporter
+ @torch.jit.script
+ def erf_gelu(x):
+     return x * 0.5 * (torch.erf(x / 1.41421).to(dtype=x.dtype) + torch.ones_like(x).to(dtype=x.dtype))
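As a quick, hedged check of the helpers in this file (editorial, not part of the commit): the tanh formula in `gelu_impl` matches PyTorch's built-in tanh-approximated GELU, `erf_gelu` matches the exact GELU up to the truncated constant 1.41421 ≈ √2, and `attention_mask_func` pushes masked scores toward -10000 so they vanish after softmax. The `approximate='tanh'` keyword assumes PyTorch ≥ 1.12.

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 8)

# Same tanh approximation as gelu_impl above.
tanh_gelu = 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * x * (1.0 + 0.044715 * x * x)))
# Same erf form as erf_gelu above (1.41421 is a truncated sqrt(2)).
erf_gelu = x * 0.5 * (torch.erf(x / 1.41421) + torch.ones_like(x))

print(torch.allclose(tanh_gelu, F.gelu(x, approximate='tanh'), atol=1e-6))  # True
print(torch.allclose(erf_gelu, F.gelu(x), atol=1e-4))                       # True

# attention_mask_func: masked positions get -10000 and ~0 weight after softmax.
scores = torch.zeros(1, 1, 2, 2)
mask = torch.tensor([[[[False, True], [False, False]]]])
print(torch.softmax(scores.masked_fill(mask, -10000.0), dim=-1))
```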
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/Makefile ADDED
@@ -0,0 +1,9 @@
+ CXXFLAGS += -O3 -Wall -shared -std=c++11 -fPIC -fdiagnostics-color
+ CPPFLAGS += $(shell python3 -m pybind11 --includes)
+ LIBNAME = helpers
+ LIBEXT = $(shell python3-config --extension-suffix)
+
+ default: $(LIBNAME)$(LIBEXT)
+
+ %$(LIBEXT): %.cpp
+ 	$(CXX) $(CXXFLAGS) $(CPPFLAGS) $< -o $@
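This Makefile compiles the pybind11 `helpers` extension (from `helpers.cpp` in the same directory) into a shared object named with the interpreter's extension suffix. A minimal, illustrative way to trigger the build and import the result from Python is sketched below; the `build_and_import_helpers` function is hypothetical, though Megatron's dataset utilities perform a similar `make` step at runtime.

```python
import importlib
import subprocess
import sys

def build_and_import_helpers(data_dir):
    """Run `make` in megatron/data and import the resulting `helpers` module."""
    ret = subprocess.run(["make", "-C", data_dir])
    if ret.returncode != 0:
        raise RuntimeError("building the C++ dataset helpers failed")
    sys.path.insert(0, data_dir)  # the built .so lands next to helpers.cpp
    return importlib.import_module("helpers")

# helpers = build_and_import_helpers("megatron/data")
```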
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__init__.py ADDED
@@ -0,0 +1 @@
+ from . import indexed_dataset
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (225 Bytes). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/autoaugment.cpython-310.pyc ADDED
Binary file (9.76 kB). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/blendable_dataset.cpython-310.pyc ADDED
Binary file (4.09 kB). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/data_samplers.cpython-310.pyc ADDED
Binary file (5.93 kB). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/dataset_utils.cpython-310.pyc ADDED
Binary file (14.8 kB). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/gpt_dataset.cpython-310.pyc ADDED
Binary file (14.6 kB). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/image_folder.cpython-310.pyc ADDED
Binary file (10.3 kB). View file
 
docker/intel_code/llama13b/Model-References/PyTorch/nlp/DeepSpeedExamples/Megatron-DeepSpeed/megatron/data/__pycache__/indexed_dataset.cpython-310.pyc ADDED
Binary file (20.2 kB). View file