applied-ai-018 committed
Commit 87a620d · verified · 1 parent: d2d9ffe

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/__init__.cpython-310.pyc +0 -0
  2. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/configuration_clvp.cpython-310.pyc +0 -0
  3. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/convert_clvp_to_hf.cpython-310.pyc +0 -0
  4. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/feature_extraction_clvp.cpython-310.pyc +0 -0
  5. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/modeling_clvp.cpython-310.pyc +0 -0
  6. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/number_normalizer.cpython-310.pyc +0 -0
  7. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/tokenization_clvp.cpython-310.pyc +0 -0
  8. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/convert_clvp_to_hf.py +234 -0
  9. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/modeling_clvp.py +2024 -0
  10. env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/number_normalizer.py +238 -0
  11. env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/__init__.cpython-310.pyc +0 -0
  12. env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/feature_extraction_convnext.cpython-310.pyc +0 -0
  13. env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/image_processing_convnext.cpython-310.pyc +0 -0
  14. env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/modeling_convnext.cpython-310.pyc +0 -0
  15. env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/modeling_tf_convnext.cpython-310.pyc +0 -0
  16. env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__init__.py +60 -0
  17. env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__pycache__/__init__.cpython-310.pyc +0 -0
  18. env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__pycache__/configuration_umt5.cpython-310.pyc +0 -0
  19. env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__pycache__/convert_umt5_checkpoint_to_pytorch.cpython-310.pyc +0 -0
  20. env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__pycache__/modeling_umt5.cpython-310.pyc +0 -0
  21. env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/configuration_umt5.py +177 -0
  22. env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py +274 -0
  23. env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/modeling_umt5.py +1857 -0
  24. env-llmeval/lib/python3.10/site-packages/transformers/onnx/__init__.py +49 -0
  25. env-llmeval/lib/python3.10/site-packages/transformers/onnx/__main__.py +242 -0
  26. env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/__init__.cpython-310.pyc +0 -0
  27. env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/__main__.cpython-310.pyc +0 -0
  28. env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/config.cpython-310.pyc +0 -0
  29. env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/convert.cpython-310.pyc +0 -0
  30. env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/features.cpython-310.pyc +0 -0
  31. env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/utils.cpython-310.pyc +0 -0
  32. env-llmeval/lib/python3.10/site-packages/transformers/onnx/config.py +741 -0
  33. env-llmeval/lib/python3.10/site-packages/transformers/onnx/convert.py +460 -0
  34. env-llmeval/lib/python3.10/site-packages/transformers/onnx/features.py +749 -0
  35. env-llmeval/lib/python3.10/site-packages/transformers/onnx/utils.py +109 -0
  36. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_detectron2_objects.cpython-310.pyc +0 -0
  37. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_essentia_and_librosa_and_pretty_midi_and_scipy_and_torch_objects.cpython-310.pyc +0 -0
  38. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_flax_objects.cpython-310.pyc +0 -0
  39. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_music_objects.cpython-310.pyc +0 -0
  40. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_sentencepiece_objects.cpython-310.pyc +0 -0
  41. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_tf_objects.cpython-310.pyc +0 -0
  42. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_torchaudio_objects.cpython-310.pyc +0 -0
  43. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_vision_objects.cpython-310.pyc +0 -0
  44. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/fx.cpython-310.pyc +0 -0
  45. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/generic.cpython-310.pyc +0 -0
  46. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/hp_naming.cpython-310.pyc +0 -0
  47. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/hub.cpython-310.pyc +0 -0
  48. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/import_utils.cpython-310.pyc +0 -0
  49. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/sentencepiece_model_pb2.cpython-310.pyc +0 -0
  50. env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/versions.cpython-310.pyc +0 -0
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (1.29 kB).
 
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/configuration_clvp.cpython-310.pyc ADDED
Binary file (17.9 kB).
 
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/convert_clvp_to_hf.cpython-310.pyc ADDED
Binary file (6.19 kB).
 
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/feature_extraction_clvp.cpython-310.pyc ADDED
Binary file (9.22 kB).
 
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/modeling_clvp.cpython-310.pyc ADDED
Binary file (63.6 kB).
 
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/number_normalizer.cpython-310.pyc ADDED
Binary file (6.83 kB).
 
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/__pycache__/tokenization_clvp.cpython-310.pyc ADDED
Binary file (13.1 kB).
 
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/convert_clvp_to_hf.py ADDED
@@ -0,0 +1,234 @@
1
+ # coding=utf-8
2
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ """
17
+ Weights conversion script for CLVP
18
+ """
19
+
20
+ import argparse
21
+ import os
22
+
23
+ import torch
24
+ from huggingface_hub import hf_hub_download
25
+
26
+ from transformers import ClvpConfig, ClvpModelForConditionalGeneration
27
+
28
+
29
+ _MODELS = {
30
+ "clvp": "https://huggingface.co/jbetker/tortoise-tts-v2/blob/main/.models/clvp2.pth",
31
+ "decoder": "https://huggingface.co/jbetker/tortoise-tts-v2/blob/main/.models/autoregressive.pth",
32
+ }
33
+
34
+ dim = 1024
35
+ sub_dim = dim // 16
36
+
37
+ CLVP_ENCODERS_MAPPING = {
38
+ "text_transformer.transformer.attn_layers": "text_encoder_model",
39
+ "speech_transformer.transformer.attn_layers": "speech_encoder_model",
40
+ "text_transformer.transformer.norm": "text_encoder_model.final_layer_norm",
41
+ "speech_transformer.transformer.norm": "speech_encoder_model.final_layer_norm",
42
+ "to_text_latent": "text_encoder_model.projection",
43
+ "to_speech_latent": "speech_encoder_model.projection",
44
+ "text_emb": "text_encoder_model.token_embedding",
45
+ "speech_emb": "speech_encoder_model.token_embedding",
46
+ "1.wrap.net.0": "mlp.fc1",
47
+ "1.wrap.net.3": "mlp.fc2",
48
+ "1.wrap": "self_attn",
49
+ "to_out": "out_proj",
50
+ "to_q": "q_proj",
51
+ "to_k": "k_proj",
52
+ "to_v": "v_proj",
53
+ "temperature": "logit_scale",
54
+ }
55
+
56
+ CLVP_DECODER_MAPPING = {
57
+ "conditioning_encoder.init": "conditioning_encoder.mel_conv",
58
+ "conditioning_encoder.attn": "conditioning_encoder.mel_attn_blocks",
59
+ "mel_attn_blocks": "group_norms",
60
+ ".norm.weight": ".weight",
61
+ ".norm.bias": ".bias",
62
+ "text_embedding": "conditioning_encoder.text_token_embedding",
63
+ "text_pos_embedding.emb": "conditioning_encoder.text_position_embedding",
64
+ "final_norm": "speech_decoder_model.final_norm",
65
+ "mel_head": "speech_decoder_model.lm_head",
66
+ "gpt.ln_f": "speech_decoder_model.model.decoder.layer_norm",
67
+ "mel_embedding": "speech_decoder_model.model.decoder.input_embeds_layer",
68
+ "mel_pos_embedding.emb": "speech_decoder_model.model.decoder.position_embeds_layer",
69
+ "gpt.h": "speech_decoder_model.model.decoder.layers",
70
+ "ln_1": "input_layernorm",
71
+ "ln_2": "post_attention_layernorm",
72
+ }
73
+
74
+
75
+ def update_index(present_index):
76
+ if present_index % 2 == 0:
77
+ return int(present_index / 2)
78
+ else:
79
+ return int((present_index - 1) / 2)
80
+
81
+
82
+ def convert_encoder_weights(original_weights):
83
+ converted_weights = {}
84
+ original_weights_keys = sorted(original_weights.keys())
85
+ for original_key in original_weights_keys:
86
+ updated_key = original_key
87
+ # for input_rmsnorm.weight and post_attention_rmsnorm.weight
88
+ if "0.0.g" in updated_key:
89
+ present_index = updated_key.split(".")[4]
90
+ if int(present_index) % 2 == 0:
91
+ updated_key = updated_key.replace("0.0.g", "input_rmsnorm.weight")
92
+ else:
93
+ updated_key = updated_key.replace("0.0.g", "post_attention_rmsnorm.weight")
94
+
95
+ if "transformer.attn_layers.layers" in updated_key:
96
+ present_index = updated_key.split(".")[4]
97
+ updated_index = update_index(int(present_index))
98
+ updated_key = updated_key.replace(
99
+ f"transformer.attn_layers.layers.{present_index}", f"transformer.attn_layers.layers.{updated_index}"
100
+ )
101
+
102
+ for k, v in CLVP_ENCODERS_MAPPING.items():
103
+ if k in updated_key:
104
+ updated_key = updated_key.replace(k, v)
105
+
106
+ converted_weights[updated_key] = original_weights.pop(original_key)
107
+
108
+ return converted_weights
109
+
110
+
111
+ def convert_decoder_weights(original_weights):
112
+ converted_weights = {}
113
+ original_weights_keys = sorted(original_weights.keys())
114
+ for original_key in original_weights_keys:
115
+ updated_key = original_key
116
+ if len(updated_key.split(".")) > 3:
117
+ index, attr = updated_key.split(".")[2], updated_key.split(".")[-1]
118
+
119
+ # for decoder attention
120
+ if "attn.c_attn" in updated_key:
121
+ if attr == "weight":
122
+ slice1, slice2, slice3 = original_weights[updated_key].squeeze(-1).T.split(split_size=dim, dim=0)
123
+ else:
124
+ slice1, slice2, slice3 = original_weights[updated_key].split(split_size=dim, dim=0)
125
+ converted_weights[f"speech_decoder_model.model.decoder.layers.{index}.attn.q_proj.{attr}"] = slice1
126
+ converted_weights[f"speech_decoder_model.model.decoder.layers.{index}.attn.k_proj.{attr}"] = slice2
127
+ converted_weights[f"speech_decoder_model.model.decoder.layers.{index}.attn.v_proj.{attr}"] = slice3
128
+ continue
129
+
130
+ if "attn.c_proj" in updated_key:
131
+ converted_weights[f"speech_decoder_model.model.decoder.layers.{index}.attn.out_proj.{attr}"] = (
132
+ original_weights[updated_key].squeeze(-1).T
133
+ )
134
+ continue
135
+
136
+ if "attn.bias" in updated_key or "attn.masked_bias" in updated_key or "text_head" in updated_key:
137
+ original_weights.pop(updated_key)
138
+ continue
139
+
140
+ # conditional encoder attention
141
+ if "qkv" in updated_key:
142
+ if attr == "weight":
143
+ slice1, slice2, slice3 = original_weights[updated_key].squeeze(-1).split(split_size=dim, dim=0)
144
+ else:
145
+ slice1, slice2, slice3 = original_weights[updated_key].split(split_size=dim, dim=0)
146
+
147
+ indices = torch.arange(dim)
148
+ index1, index2, index3 = (
149
+ indices.unfold(0, sub_dim, sub_dim * 3).flatten(),
150
+ indices[sub_dim:].unfold(0, sub_dim, sub_dim * 3).flatten(),
151
+ indices[2 * sub_dim :].unfold(0, sub_dim, sub_dim * 3).flatten(),
152
+ )
153
+
154
+ converted_weights[f"conditioning_encoder.mel_attn_blocks.{index}.q_proj.{attr}"] = torch.concatenate(
155
+ [slice1[index1], slice2[index3], slice3[index2]],
156
+ axis=0,
157
+ )
158
+ converted_weights[f"conditioning_encoder.mel_attn_blocks.{index}.k_proj.{attr}"] = torch.concatenate(
159
+ [slice1[index2], slice2[index1], slice3[index3]],
160
+ axis=0,
161
+ )
162
+ converted_weights[f"conditioning_encoder.mel_attn_blocks.{index}.v_proj.{attr}"] = torch.concatenate(
163
+ [slice1[index3], slice2[index2], slice3[index1]],
164
+ axis=0,
165
+ )
166
+ continue
167
+
168
+ if "proj_out" in updated_key:
169
+ converted_weights[f"conditioning_encoder.mel_attn_blocks.{index}.out_proj.{attr}"] = original_weights[
170
+ updated_key
171
+ ].squeeze(-1)
172
+ continue
173
+
174
+ for k, v in CLVP_DECODER_MAPPING.items():
175
+ if k in updated_key:
176
+ updated_key = updated_key.replace(k, v)
177
+
178
+ converted_weights[updated_key] = original_weights.pop(original_key)
179
+
180
+ return converted_weights
181
+
182
+
183
+ def _download(url: str, root: str):
184
+ repo_id = f"{url.split('/')[3]}/{url.split('/')[4]}"
185
+ filename = f"{url.split('/')[-2]}/{url.split('/')[-1]}"
186
+ hf_hub_download(
187
+ repo_id=repo_id,
188
+ filename=filename,
189
+ force_filename=root,
190
+ local_dir_use_symlinks=False,
191
+ )
192
+
193
+
194
+ def convert_clvp_weights(checkpoint_path, pytorch_dump_folder_path):
195
+ converted_checkpoint = {}
196
+
197
+ for each_model_name, each_model_url in _MODELS.items():
198
+ each_model_path = os.path.join(checkpoint_path, each_model_url.split("/")[-1])
199
+ if not os.path.exists(each_model_path):
200
+ print(f"\n{each_model_name} was not found! Downloading it to {each_model_path}")
201
+ _download(url=each_model_url, root=each_model_path)
202
+
203
+ if each_model_name == "clvp":
204
+ clvp_checkpoint = torch.load(each_model_path, map_location="cpu")
205
+ else:
206
+ decoder_checkpoint = torch.load(each_model_path, map_location="cpu")
207
+
208
+ # Converting the weights
209
+ converted_checkpoint.update(**convert_encoder_weights(clvp_checkpoint))
210
+ converted_checkpoint.update(**convert_decoder_weights(decoder_checkpoint))
211
+
212
+ config = ClvpConfig.from_pretrained("susnato/clvp_dev")
213
+ model = ClvpModelForConditionalGeneration(config)
214
+
215
+ model.load_state_dict(converted_checkpoint, strict=True)
216
+ model.save_pretrained(pytorch_dump_folder_path)
217
+ print(f"Model saved at {pytorch_dump_folder_path}!")
218
+
219
+
220
+ if __name__ == "__main__":
221
+ parser = argparse.ArgumentParser()
222
+ # # Required parameters
223
+ parser.add_argument(
224
+ "--checkpoint_path", type=str, help="Path to the folder of downloaded checkpoints. (Please enter full path)"
225
+ )
226
+ parser.add_argument(
227
+ "--pytorch_dump_folder_path",
228
+ default=None,
229
+ type=str,
230
+ help="Path to the output PyTorch model. (Please enter full path)",
231
+ )
232
+ args = parser.parse_args()
233
+
234
+ convert_clvp_weights(args.checkpoint_path, args.pytorch_dump_folder_path)
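
Usage note for the conversion script above: it is written to be run as a standalone CLI with --checkpoint_path and --pytorch_dump_folder_path, but convert_clvp_weights can also be called directly from Python. A minimal sketch, assuming hypothetical local paths that are not taken from this commit:

from transformers.models.clvp.convert_clvp_to_hf import convert_clvp_weights

# Folder holding (or that will receive) clvp2.pth and autoregressive.pth; path is illustrative.
checkpoint_dir = "/tmp/tortoise_checkpoints"
# Output folder for the converted Hugging Face checkpoint; path is illustrative.
output_dir = "/tmp/clvp_hf"

# Downloads any missing original checkpoints, remaps the weights using the two
# mapping tables defined above, and saves a ClvpModelForConditionalGeneration.
convert_clvp_weights(checkpoint_dir, output_dir)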
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/modeling_clvp.py ADDED
@@ -0,0 +1,2024 @@
1
+ # coding=utf-8
2
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ """ PyTorch CLVP model."""
17
+
18
+
19
+ import copy
20
+ import math
21
+ from dataclasses import dataclass
22
+ from typing import Dict, Optional, Tuple, Union
23
+
24
+ import torch
25
+ import torch.utils.checkpoint
26
+ from torch import nn
27
+ from torch.nn import CrossEntropyLoss
28
+
29
+ from ...activations import ACT2FN
30
+ from ...generation import GenerationConfig
31
+ from ...modeling_attn_mask_utils import _prepare_4d_attention_mask, _prepare_4d_causal_attention_mask
32
+ from ...modeling_outputs import (
33
+ BaseModelOutput,
34
+ BaseModelOutputWithPastAndCrossAttentions,
35
+ BaseModelOutputWithPooling,
36
+ CausalLMOutputWithCrossAttentions,
37
+ )
38
+ from ...modeling_utils import PreTrainedModel, SequenceSummary
39
+ from ...pytorch_utils import Conv1D
40
+ from ...utils import (
41
+ ModelOutput,
42
+ add_start_docstrings,
43
+ add_start_docstrings_to_model_forward,
44
+ logging,
45
+ replace_return_docstrings,
46
+ )
47
+ from .configuration_clvp import (
48
+ ClvpConfig,
49
+ ClvpDecoderConfig,
50
+ ClvpEncoderConfig,
51
+ )
52
+
53
+
54
+ logger = logging.get_logger(__name__)
55
+
56
+ _CHECKPOINT_FOR_DOC = "susnato/clvp_dev"
57
+
58
+ CLVP_PRETRAINED_MODEL_ARCHIVE_LIST = [
59
+ "susnato/clvp_dev",
60
+ # See all Clvp models at https://huggingface.co/models?filter=clvp
61
+ ]
62
+
63
+
64
+ # Copied from transformers.models.clip.modeling_clip.contrastive_loss
65
+ def contrastive_loss(logits: torch.Tensor) -> torch.Tensor:
66
+ return nn.functional.cross_entropy(logits, torch.arange(len(logits), device=logits.device))
67
+
68
+
69
+ # Copied from transformers.models.clip.modeling_clip.clip_loss with clip->clvp, image_loss->speech_loss
70
+ def clvp_loss(similarity: torch.Tensor) -> torch.Tensor:
71
+ caption_loss = contrastive_loss(similarity)
72
+ speech_loss = contrastive_loss(similarity.t())
73
+ return (caption_loss + speech_loss) / 2.0
74
+
75
+
76
+ # Copied from transformers.models.llama.modeling_llama.rotate_half
77
+ def rotate_half(x):
78
+ """Rotates half the hidden dims of the input."""
79
+ x1 = x[..., : x.shape[-1] // 2]
80
+ x2 = x[..., x.shape[-1] // 2 :]
81
+ return torch.cat((-x2, x1), dim=-1)
82
+
83
+
84
+ def apply_rotary_pos_emb(q, k, v, cos, sin, position_ids, unsqueeze_dim=1):
85
+ """Applies Rotary Position Embedding to the query and key tensors.
86
+
87
+ Args:
88
+ q (`torch.Tensor`): The query tensor.
89
+ k (`torch.Tensor`): The key tensor.
90
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
91
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
92
+ position_ids (`torch.Tensor`):
93
+ The position indices of the tokens corresponding to the query and key tensors. For example, this can be
94
+ used to pass offsetted position ids when working with a KV-cache.
95
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
96
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
97
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
98
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
99
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
100
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
101
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
102
+ Returns:
103
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
104
+ """
105
+ cos = cos[position_ids].unsqueeze(unsqueeze_dim)
106
+ sin = sin[position_ids].unsqueeze(unsqueeze_dim)
107
+ q_embed = (q * cos) + (rotate_half(q) * sin)
108
+ k_embed = (k * cos) + (rotate_half(k) * sin)
109
+ v_embed = (v * cos) + (rotate_half(v) * sin)
110
+ return q_embed, k_embed, v_embed
111
+
112
+
113
+ def _pad_extra_bos_eos_tokens(
114
+ input_ids,
115
+ attention_mask=None,
116
+ pad_token_id=0,
117
+ bos_token_id=255,
118
+ eos_token_id=0,
119
+ add_bos_token=True,
120
+ add_eos_token=True,
121
+ ):
122
+ """
123
+ This method adds extra bos and eos tokens to input_ids and accordingly modifies the attention_mask which is used in
124
+ `ClvpConditioningEncoder` and the generation loop of the `ClvpModelForConditionalGeneration`.
125
+ """
126
+
127
+ # add the bos token at the beginning
128
+ if add_bos_token:
129
+ input_ids = torch.nn.functional.pad(input_ids, (1, 0), value=bos_token_id)
130
+ attention_mask = (
131
+ torch.nn.functional.pad(attention_mask, (1, 0), value=1) if attention_mask is not None else attention_mask
132
+ )
133
+
134
+ modified_input_ids = input_ids
135
+ if add_eos_token:
136
+ modified_input_ids = torch.zeros(
137
+ (input_ids.shape[0], input_ids.shape[1] + 1), dtype=input_ids.dtype, device=input_ids.device
138
+ )
139
+ for i, each_input_id in enumerate(input_ids):
140
+ # locate where the valid tokens end and then add the eos token
141
+ if torch.isin(each_input_id, pad_token_id).sum():
142
+ pos = torch.where(each_input_id == pad_token_id)[0].min()
143
+ modified_input_ids[i] = torch.concatenate(
144
+ [each_input_id[:pos], torch.tensor([eos_token_id], device=input_ids.device), each_input_id[pos:]]
145
+ )
146
+ else:
147
+ # if there are no pad tokens present, then add eos to the end
148
+ modified_input_ids[i] = torch.nn.functional.pad(each_input_id, (0, 1), value=eos_token_id)
149
+ attention_mask = (
150
+ torch.nn.functional.pad(attention_mask, (1, 0), value=1) if attention_mask is not None else attention_mask
151
+ )
152
+
153
+ return modified_input_ids, attention_mask
154
+
155
+
156
+ @dataclass
157
+ class ClvpEncoderOutput(ModelOutput):
158
+ """
159
+ Base class for CLVP encoder's outputs that contains a pooling of the last hidden states as well as a projection
160
+ output (a linear layer on top of the pooled output).
161
+
162
+ Args:
163
+ embeds (`torch.FloatTensor` of shape `(batch_size, output_dim)`, *optional*, returned when model is initialized with `with_projection=True`):
164
+ The embeddings obtained by applying the projection layer to the pooler_output.
165
+ last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
166
+ The hidden state of the last layer of the model.
167
+ pooler_output (`torch.FloatTensor` of shape `(batch_size, hidden_size)`):
168
+ Pooled output of the `last_hidden_state`.
169
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
170
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
171
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of
172
+ the model at the output of each layer plus the optional initial embedding outputs.
173
+ attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
174
+ Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
175
+ sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in
176
+ the self-attention heads.
177
+ """
178
+
179
+ embeds: Optional[torch.FloatTensor] = None
180
+ last_hidden_state: torch.FloatTensor = None
181
+ pooler_output: Optional[torch.FloatTensor] = None
182
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
183
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
184
+
185
+
186
+ @dataclass
187
+ class ClvpOutput(ModelOutput):
188
+ """
189
+ Args:
190
+ loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `return_loss` is `True`):
191
+ Contrastive loss for speech-text similarity.
192
+ speech_ids (`torch.LongTensor`, *optional*):
193
+ speech_ids (or speech candidates) generated by the `ClvpForCausalLM` model.
194
+ logits_per_speech (`torch.FloatTensor` of shape `(speech_batch_size, text_batch_size)`):
195
+ The scaled dot product scores between `speech_embeds` and `text_embeds`. This represents the speech-text
196
+ similarity scores.
197
+ logits_per_text (`torch.FloatTensor` of shape `(text_batch_size, speech_batch_size)`):
198
+ The scaled dot product scores between `text_embeds` and `speech_embeds`. This represents the text-speech
199
+ similarity scores.
200
+ text_embeds (`torch.FloatTensor` of shape `(batch_size, output_dim`):
201
+ The text embeddings obtained by applying the projection layer to the pooled output of the text encoder
202
+ model.
203
+ speech_embeds (`torch.FloatTensor` of shape `(batch_size, output_dim`):
204
+ The speech embeddings obtained by applying the projection layer to the pooled output of the speech encoder
205
+ model.
206
+ text_model_output (`BaseModelOutputWithPooling`):
207
+ The pooled output of the `last_hidden_state` of the text encoder Model.
208
+ speech_model_output (`BaseModelOutputWithPooling`):
209
+ The pooled output of the `last_hidden_state` of the speech encoder Model.
210
+ decoder_hidden_states (`torch.FloatTensor`, *optional*):
211
+ The hidden states of the decoder model.
212
+ text_encoder_hidden_states (`torch.FloatTensor`, *optional*):
213
+ The hidden states of the text encoder model.
214
+ speech_encoder_hidden_states (`torch.FloatTensor`, *optional*):
215
+ The hidden states of the speech encoder model.
216
+ """
217
+
218
+ loss: Optional[torch.FloatTensor] = None
219
+ speech_ids: Optional[torch.LongTensor] = None
220
+ logits_per_speech: torch.FloatTensor = None
221
+ logits_per_text: torch.FloatTensor = None
222
+ text_embeds: torch.FloatTensor = None
223
+ speech_embeds: torch.FloatTensor = None
224
+ text_model_output: BaseModelOutputWithPooling = None
225
+ speech_model_output: BaseModelOutputWithPooling = None
226
+ decoder_hidden_states: torch.FloatTensor = None
227
+ text_encoder_hidden_states: torch.FloatTensor = None
228
+ speech_encoder_hidden_states: torch.FloatTensor = None
229
+
230
+
231
+ # Copied from transformers.models.llama.modeling_llama.LlamaRMSNorm with Llama->Clvp
232
+ class ClvpRMSNorm(nn.Module):
233
+ def __init__(self, hidden_size, eps=1e-6):
234
+ """
235
+ ClvpRMSNorm is equivalent to T5LayerNorm
236
+ """
237
+ super().__init__()
238
+ self.weight = nn.Parameter(torch.ones(hidden_size))
239
+ self.variance_epsilon = eps
240
+
241
+ def forward(self, hidden_states):
242
+ input_dtype = hidden_states.dtype
243
+ hidden_states = hidden_states.to(torch.float32)
244
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
245
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
246
+ return self.weight * hidden_states.to(input_dtype)
247
+
248
+
249
+ class ClvpRotaryPositionalEmbedding(nn.Module):
250
+ """
251
+ Rotary Position Embedding Class for CLVP. It was proposed in the paper 'ROFORMER: ENHANCED TRANSFORMER WITH ROTARY
252
+ POSITION EMBEDDING', Please see https://arxiv.org/pdf/2104.09864v1.pdf .
253
+ """
254
+
255
+ def __init__(self, config):
256
+ super().__init__()
257
+ dim = max(config.projection_dim // (config.num_attention_heads * 2), 32)
258
+ inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2, dtype=torch.int64).float() / dim))
259
+
260
+ self.register_buffer("inv_freq", inv_freq)
261
+ self.cached_sequence_length = None
262
+ self.cached_rotary_positional_embedding = None
263
+
264
+ def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
265
+ sequence_length = hidden_states.shape[1]
266
+
267
+ if sequence_length == self.cached_sequence_length and self.cached_rotary_positional_embedding is not None:
268
+ return self.cached_rotary_positional_embedding
269
+
270
+ self.cached_sequence_length = sequence_length
271
+ time_stamps = torch.arange(sequence_length, device=hidden_states.device).type_as(self.inv_freq)
272
+ freqs = torch.einsum("i,j->ij", time_stamps, self.inv_freq)
273
+ embeddings = torch.cat((freqs, freqs), dim=-1)
274
+
275
+ self.cached_rotary_positional_embedding = embeddings.unsqueeze(0)
276
+ return self.cached_rotary_positional_embedding
277
+
278
+
279
+ class ClvpSelfAttention(nn.Module):
280
+ """
281
+ Multi-headed attention to combine Absolute and Rotary Positional Embeddings into a single Attention module.
282
+ """
283
+
284
+ def __init__(self, config):
285
+ super().__init__()
286
+ self.config = config
287
+ self.embed_dim = config.hidden_size
288
+ self.num_heads = config.num_attention_heads
289
+ self.head_dim = self.embed_dim // self.num_heads
290
+ if self.head_dim * self.num_heads != self.embed_dim:
291
+ raise ValueError(
292
+ f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:"
293
+ f" {self.num_heads})."
294
+ )
295
+ self.scale = self.head_dim**-0.5
296
+ self.dropout = config.attention_dropout
297
+
298
+ if hasattr(config, "max_position_embeddings"):
299
+ max_positions = config.max_position_embeddings
300
+ bias = torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool))
301
+ bias = bias.view(1, 1, max_positions, max_positions)
302
+ self.register_buffer("bias", bias, persistent=False)
303
+
304
+ self.k_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=config.use_attention_bias)
305
+ self.v_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=config.use_attention_bias)
306
+ self.q_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=config.use_attention_bias)
307
+ self.out_proj = nn.Linear(self.embed_dim, self.embed_dim)
308
+
309
+ # Copied from transformers.models.clip.modeling_clip.CLIPAttention._shape
310
+ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
311
+ return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
312
+
313
+ def forward(
314
+ self,
315
+ hidden_states: torch.FloatTensor,
316
+ rotary_pos_emb: Optional[torch.FloatTensor] = None,
317
+ attention_mask: Optional[torch.LongTensor] = None,
318
+ position_ids: Optional[torch.LongTensor] = None,
319
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
320
+ use_cache: Optional[bool] = False,
321
+ head_mask: Optional[torch.FloatTensor] = None,
322
+ output_attentions: Optional[bool] = False,
323
+ ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor], Optional[Tuple[torch.FloatTensor]]]:
324
+ # Raise error when position_ids is None but rotary_pos_emb is provided, because we need that when applying
325
+ # rotary_pos_emb to query and key states.
326
+ if rotary_pos_emb is not None and position_ids is None:
327
+ raise ValueError("`position_ids` must be provided when `rotary_pos_emb` is not None.")
328
+
329
+ bsz, _, embed_dim = hidden_states.size()
330
+
331
+ # get query proj
332
+ query_states = self._shape(self.q_proj(hidden_states), -1, bsz) * self.scale
333
+ key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
334
+ value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
335
+
336
+ if past_key_value is not None:
337
+ past_key, past_value = past_key_value
338
+ key_states = torch.cat((past_key, key_states), dim=-2)
339
+ value_states = torch.cat((past_value, value_states), dim=-2)
340
+
341
+ if use_cache is True:
342
+ present = (key_states, value_states)
343
+ else:
344
+ present = None
345
+
346
+ if rotary_pos_emb is not None:
347
+ rotary_emb_dim = rotary_pos_emb.shape[-1]
348
+
349
+ # Partial rotary embedding
350
+ query_rot, query_pass = (
351
+ query_states[..., :rotary_emb_dim],
352
+ query_states[..., rotary_emb_dim:],
353
+ )
354
+ key_rot, key_pass = (
355
+ key_states[..., :rotary_emb_dim],
356
+ key_states[..., rotary_emb_dim:],
357
+ )
358
+ value_rot, value_pass = (
359
+ value_states[..., :rotary_emb_dim],
360
+ value_states[..., rotary_emb_dim:],
361
+ )
362
+
363
+ cos, sin = rotary_pos_emb.cos().squeeze(0), rotary_pos_emb.sin().squeeze(0)
364
+ query_rot, key_rot, value_rot = apply_rotary_pos_emb(query_rot, key_rot, value_rot, cos, sin, position_ids)
365
+
366
+ # [batch_size, num_heads, seq_length, head_dim]
367
+ query_states = torch.cat((query_rot, query_pass), dim=-1)
368
+ key_states = torch.cat((key_rot, key_pass), dim=-1)
369
+ value_states = torch.cat((value_rot, value_pass), dim=-1)
370
+
371
+ tgt_len = query_states.shape[2]
372
+ src_len = key_states.shape[2]
373
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3))
374
+
375
+ if attention_mask is not None:
376
+ if attention_mask.size() != (bsz, 1, tgt_len, src_len):
377
+ raise ValueError(
378
+ f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
379
+ )
380
+ attn_weights = attn_weights + attention_mask
381
+
382
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1)
383
+
384
+ # Mask heads if we want to
385
+ if head_mask is not None:
386
+ attn_weights = attn_weights * head_mask
387
+
388
+ attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
389
+ attn_output = torch.matmul(attn_probs, value_states)
390
+
391
+ if attn_output.size() != (bsz, self.num_heads, tgt_len, self.head_dim):
392
+ raise ValueError(
393
+ f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is"
394
+ f" {attn_output.size()}"
395
+ )
396
+
397
+ attn_output = attn_output.transpose(1, 2).contiguous()
398
+ attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)
399
+
400
+ attn_output = self.out_proj(attn_output)
401
+
402
+ if not output_attentions:
403
+ attn_weights = None
404
+
405
+ return attn_output, present, attn_weights
406
+
407
+
408
+ class ClvpGatedLinearUnit(nn.Module):
409
+ """
410
+ `ClvpGatedLinearUnit` uses the second half of the `hidden_states` to act as a gate for the first half of the
411
+ `hidden_states` which controls the flow of data from the first of the tensor.
412
+ """
413
+
414
+ def __init__(self, config):
415
+ super().__init__()
416
+ self.activation_fn = ACT2FN[config.hidden_act]
417
+ self.proj = nn.Linear(config.hidden_size, config.intermediate_size * 2)
418
+
419
+ def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
420
+ hidden_states, gate = self.proj(hidden_states).chunk(2, dim=-1)
421
+ return hidden_states * self.activation_fn(gate)
422
+
423
+
424
+ class ClvpEncoderMLP(nn.Module):
425
+ """
426
+ This MLP is used in CLVP speech or text encoder models.
427
+ """
428
+
429
+ def __init__(self, config):
430
+ super().__init__()
431
+ self.config = config
432
+
433
+ self.fc1 = ClvpGatedLinearUnit(config)
434
+ self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size)
435
+ self.dropout_layer = nn.Dropout(config.dropout)
436
+
437
+ def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
438
+ hidden_states = self.fc1(hidden_states)
439
+ hidden_states = self.dropout_layer(hidden_states)
440
+ hidden_states = self.fc2(hidden_states)
441
+ return hidden_states
442
+
443
+
444
+ class ClvpEncoderLayer(nn.Module):
445
+ def __init__(self, config: ClvpConfig):
446
+ super().__init__()
447
+ self.config = config
448
+ self.embed_dim = config.hidden_size
449
+ self.self_attn = ClvpSelfAttention(config)
450
+ self.mlp = ClvpEncoderMLP(config)
451
+
452
+ self.input_rmsnorm = ClvpRMSNorm(self.embed_dim, eps=config.layer_norm_eps)
453
+ self.post_attention_rmsnorm = ClvpRMSNorm(self.embed_dim, eps=config.layer_norm_eps)
454
+
455
+ def forward(
456
+ self,
457
+ hidden_states: torch.FloatTensor,
458
+ rotary_pos_emb: torch.FloatTensor,
459
+ attention_mask: torch.LongTensor,
460
+ position_ids: torch.LongTensor,
461
+ output_attentions: Optional[bool] = False,
462
+ ) -> Tuple[torch.FloatTensor]:
463
+ """
464
+ Args:
465
+ hidden_states (`torch.FloatTensor` of shape `(batch, seq_len, embed_dim)`):
466
+ input to the layer.
467
+ rotary_pos_emb (`torch.FloatTensor`):
468
+ rotary position embeddings generated by `ClvpRotaryPositionalEmbedding` module.
469
+ attention_mask (`torch.FloatTensor` of shape `(batch, 1, tgt_len, src_len)`):
470
+ attention mask where padding elements are indicated by very large negative values.
471
+ position_ids (`torch.LongTensor`):
472
+ Denotes position ids of the input tokens.
473
+ output_attentions (`bool`, *optional*, defaults to `False`):
474
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
475
+ returned tensors for more detail.
476
+ """
477
+ residual = hidden_states
478
+
479
+ hidden_states = self.input_rmsnorm(hidden_states)
480
+
481
+ attention_outputs = self.self_attn(
482
+ hidden_states=hidden_states,
483
+ rotary_pos_emb=rotary_pos_emb,
484
+ attention_mask=attention_mask,
485
+ position_ids=position_ids,
486
+ output_attentions=output_attentions,
487
+ )
488
+
489
+ hidden_states = attention_outputs[0]
490
+
491
+ hidden_states = residual + hidden_states
492
+
493
+ residual = hidden_states
494
+ hidden_states = self.post_attention_rmsnorm(hidden_states)
495
+ hidden_states = self.mlp(hidden_states)
496
+ hidden_states = residual + hidden_states
497
+
498
+ outputs = (hidden_states,)
499
+
500
+ if output_attentions:
501
+ outputs += (attention_outputs[-1],)
502
+
503
+ return outputs
504
+
505
+
506
+ # Copied from transformers.models.gpt2.modeling_gpt2.GPT2MLP with GPT2->ClvpDecoderMLP
507
+ class ClvpDecoderMLP(nn.Module):
508
+ def __init__(self, intermediate_size, config):
509
+ super().__init__()
510
+ embed_dim = config.hidden_size
511
+ self.c_fc = Conv1D(intermediate_size, embed_dim)
512
+ self.c_proj = Conv1D(embed_dim, intermediate_size)
513
+ self.act = ACT2FN[config.activation_function]
514
+ self.dropout = nn.Dropout(config.resid_pdrop)
515
+
516
+ def forward(self, hidden_states: Optional[Tuple[torch.FloatTensor]]) -> torch.FloatTensor:
517
+ hidden_states = self.c_fc(hidden_states)
518
+ hidden_states = self.act(hidden_states)
519
+ hidden_states = self.c_proj(hidden_states)
520
+ hidden_states = self.dropout(hidden_states)
521
+ return hidden_states
522
+
523
+
524
+ class ClvpDecoderLayer(nn.Module):
525
+ def __init__(self, config):
526
+ super().__init__()
527
+ hidden_size = config.hidden_size
528
+ inner_dim = config.n_inner if config.n_inner is not None else 4 * hidden_size
529
+
530
+ self.input_layernorm = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)
531
+ self.attn = ClvpSelfAttention(config)
532
+ self.post_attention_layernorm = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)
533
+
534
+ self.mlp = ClvpDecoderMLP(inner_dim, config)
535
+
536
+ def forward(
537
+ self,
538
+ hidden_states: Optional[Tuple[torch.FloatTensor]],
539
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
540
+ attention_mask: Optional[torch.LongTensor] = None,
541
+ position_ids: Optional[torch.LongTensor] = None,
542
+ head_mask: Optional[torch.FloatTensor] = None,
543
+ use_cache: Optional[bool] = False,
544
+ output_attentions: Optional[bool] = False,
545
+ ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]:
546
+ residual = hidden_states
547
+ hidden_states = self.input_layernorm(hidden_states)
548
+ attn_outputs = self.attn(
549
+ hidden_states,
550
+ past_key_value=past_key_value,
551
+ attention_mask=attention_mask,
552
+ position_ids=position_ids,
553
+ head_mask=head_mask,
554
+ use_cache=use_cache,
555
+ output_attentions=output_attentions,
556
+ )
557
+ attn_output = attn_outputs[0]
558
+ outputs = attn_outputs[1:]
559
+ # residual connection
560
+ hidden_states = attn_output + residual
561
+
562
+ residual = hidden_states
563
+ hidden_states = self.post_attention_layernorm(hidden_states)
564
+ feed_forward_hidden_states = self.mlp(hidden_states)
565
+ # residual connection
566
+ hidden_states = residual + feed_forward_hidden_states
567
+
568
+ if use_cache:
569
+ outputs = (hidden_states,) + outputs
570
+ else:
571
+ outputs = (hidden_states,) + outputs[1:]
572
+
573
+ return outputs
574
+
575
+
576
+ class ClvpConditioningEncoder(nn.Module):
577
+ """
578
+ This class processes the log-mel spectrograms(extracted by the Feature Extractor) and text tokens(produced by the
579
+ tokenizer) as inputs for the decoder model.
580
+
581
+ First each log-mel spectrogram is processed into a single vector which captures valuable characteristics from each
582
+ of them, then the text tokens are converted into token embeddings and position embeddings are added afterwards.
583
+ Both of these vectors are concatenated and then passed to the decoder model.
584
+
585
+ The text tokens helps to incorporate the "text information" and the log-mel spectrogram is used to specify the
586
+ "voice characteristics" into the generated mel tokens.
587
+ """
588
+
589
+ def __init__(self, config: ClvpConfig):
590
+ super().__init__()
591
+
592
+ self.text_config = config.text_config
593
+ self.decoder_config = config.decoder_config
594
+
595
+ self.text_token_embedding = nn.Embedding(self.text_config.vocab_size, self.decoder_config.hidden_size)
596
+ self.text_position_embedding = nn.Embedding(
597
+ self.decoder_config.max_text_tokens, self.decoder_config.hidden_size
598
+ )
599
+
600
+ self.mel_conv = nn.Conv1d(self.decoder_config.feature_size, self.decoder_config.hidden_size, kernel_size=1)
601
+
602
+ # define group norms to be used before each attention layer
603
+ num_groups = self.compute_groupnorm_groups(self.decoder_config.hidden_size)
604
+ self.group_norms = nn.ModuleList(
605
+ [
606
+ nn.GroupNorm(num_groups, self.decoder_config.hidden_size, eps=1e-5, affine=True)
607
+ for _ in range(self.decoder_config.num_mel_attn_blocks)
608
+ ]
609
+ )
610
+
611
+ # define the attention layers
612
+ self.mel_attn_blocks = nn.ModuleList(
613
+ [ClvpSelfAttention(self.decoder_config) for _ in range(self.decoder_config.num_mel_attn_blocks)]
614
+ )
615
+
616
+ self.gradient_checkpointing = False
617
+
618
+ def compute_groupnorm_groups(self, channels: int, groups: int = 32):
619
+ """
620
+ Calculates the value of `num_groups` for nn.GroupNorm. This logic is taken from the official tortoise
621
+ repository. link :
622
+ https://github.com/neonbjb/tortoise-tts/blob/4003544b6ff4b68c09856e04d3eff9da26d023c2/tortoise/models/arch_util.py#L26
623
+ """
624
+ if channels <= 16:
625
+ groups = 8
626
+ elif channels <= 64:
627
+ groups = 16
628
+ while channels % groups != 0:
629
+ groups = int(groups / 2)
630
+
631
+ if groups <= 2:
632
+ raise ValueError(
633
+ f"Number of groups for the GroupNorm must be greater than 2, but it is {groups}."
634
+ f"Please consider using a different `hidden_size`"
635
+ )
636
+
637
+ return groups
638
+
639
+ def forward(
640
+ self,
641
+ input_features: torch.FloatTensor,
642
+ input_ids: Optional[torch.LongTensor] = None,
643
+ inputs_embeds: Optional[torch.FloatTensor] = None,
644
+ attention_mask: Optional[torch.LongTensor] = None,
645
+ ):
646
+ # process text
647
+ if input_ids is not None and inputs_embeds is not None:
648
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
649
+ elif input_ids is not None:
650
+ batch_size, seq_length = input_ids.size()
651
+ elif inputs_embeds is not None:
652
+ batch_size, seq_length = inputs_embeds.size()[:-1]
653
+ else:
654
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
655
+
656
+ # construct attention mask if not given
657
+ if attention_mask is None:
658
+ attention_mask = torch.ones([batch_size, seq_length], dtype=torch.long, device=input_ids.device)
659
+
660
+ # We add bos and eos input_ids in the modeling file instead of the tokenizer file to keep the logic simple
661
+ # This logic is specific to ClvpConditioningEncoder and not used by other modules.
662
+ input_ids, attention_mask = _pad_extra_bos_eos_tokens(
663
+ input_ids,
664
+ attention_mask,
665
+ bos_token_id=self.text_config.bos_token_id,
666
+ eos_token_id=self.text_config.eos_token_id,
667
+ )
668
+
669
+ inputs_embeds = self.text_token_embedding(input_ids)
670
+ position_ids = attention_mask.cumsum(-1) - 1
671
+ position_embeds = self.text_position_embedding(position_ids)
672
+ text_embeds = inputs_embeds + position_embeds
673
+
674
+ if self.gradient_checkpointing and self.training:
675
+ # process each log-mel spectrogram into a single vector
676
+ mel_spec = torch.utils.checkpoint.checkpoint(self.mel_conv, input_features)
677
+
678
+ for i, mel_attn_block in enumerate(self.mel_attn_blocks):
679
+ residual_mel_spec = mel_spec.transpose(1, 2)
680
+
681
+ mel_spec = torch.utils.checkpoint.checkpoint(self.group_norms[i], mel_spec).transpose(1, 2)
682
+ mel_spec = torch.utils.checkpoint.checkpoint(mel_attn_block, mel_spec)[0] + residual_mel_spec
683
+ mel_spec = mel_spec.transpose(1, 2)
684
+
685
+ else:
686
+ # process each log-mel spectrogram into a single vector
687
+ mel_spec = self.mel_conv(input_features)
688
+
689
+ for i, mel_attn_block in enumerate(self.mel_attn_blocks):
690
+ residual_mel_spec = mel_spec.transpose(1, 2)
691
+
692
+ mel_spec = self.group_norms[i](mel_spec).transpose(1, 2)
693
+ mel_spec = mel_attn_block(mel_spec)[0] + residual_mel_spec
694
+ mel_spec = mel_spec.transpose(1, 2)
695
+
696
+ mel_spec = mel_spec[:, :, 0]
697
+ mel_spec = mel_spec.unsqueeze(1)
698
+
699
+ # repeat if there is either (1 text vs N audios) or (N texts vs 1 audio)
700
+ if text_embeds.shape[0] == 1 and mel_spec.shape[0] != 1:
701
+ text_embeds = text_embeds.repeat(mel_spec.shape[0], 1, 1)
702
+ elif text_embeds.shape[0] != 1 and mel_spec.shape[0] == 1:
703
+ mel_spec = mel_spec.repeat(text_embeds.shape[0], 1, 1)
704
+ # If there is N texts and M audios we will raise error since the number of text and audio must be same.
705
+ elif text_embeds.shape[0] != mel_spec.shape[0]:
706
+ raise ValueError(
707
+ f"The number of texts and number of audios must be same. "
708
+ f"Found {text_embeds.shape[0]} texts vs {mel_spec.shape[0]} audios"
709
+ )
710
+
711
+ return torch.concat([mel_spec, text_embeds], dim=1)
712
+
713
+
714
+ class ClvpPreTrainedModel(PreTrainedModel):
715
+ """
716
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
717
+ models.
718
+ """
719
+
720
+ config_class = ClvpConfig
721
+ base_model_prefix = "clvp"
722
+ supports_gradient_checkpointing = True
723
+ _skip_keys_device_placement = "past_key_values"
724
+
725
+ def _init_weights(self, module):
726
+ """Initialize the weights"""
727
+ factor = self.config.initializer_factor
728
+ if isinstance(module, nn.Embedding):
729
+ module.weight.data.normal_(mean=0.0, std=factor * 0.02)
730
+ elif isinstance(module, (nn.Linear, Conv1D, nn.Conv1d)):
731
+ module.weight.data.normal_(mean=0.0, std=factor * 0.02)
732
+ if module.bias is not None:
733
+ module.bias.data.zero_()
734
+ elif isinstance(module, ClvpEncoderMLP):
735
+ factor = self.config.initializer_factor
736
+ in_proj_std = (module.config.hidden_size**-0.5) * ((2 * module.config.num_hidden_layers) ** -0.5) * factor
737
+ fc_std = (2 * module.config.hidden_size) ** -0.5 * factor
738
+ nn.init.normal_(module.fc1.proj.weight if getattr(module.fc1, "proj") else module.fc1.weight, std=fc_std)
739
+ nn.init.normal_(module.fc2.weight, std=in_proj_std)
740
+ elif isinstance(module, ClvpEncoder):
741
+ config = self.config.text_config if hasattr(self.config, "text_config") else self.config
742
+ factor = config.initializer_factor
743
+ module.projection.weight.data.normal_(mean=0.0, std=factor * (config.hidden_size**-0.5))
744
+ elif isinstance(module, ClvpConditioningEncoder):
745
+ module.mel_conv.weight.data.normal_(mean=0.0, std=factor)
746
+ module.mel_conv.bias.data.zero_()
747
+ elif isinstance(module, ClvpForCausalLM):
748
+ for name, p in module.named_parameters():
749
+ if name == "c_proj.weight":
750
+ p.data.normal_(
751
+ mean=0.0, std=(self.config.initializer_range / math.sqrt(2 * self.config.num_hidden_layers))
752
+ )
753
+ if isinstance(module, nn.LayerNorm):
754
+ module.bias.data.zero_()
755
+ module.weight.data.fill_(1.0)
756
+
757
+
758
+ CLVP_START_DOCSTRING = r"""
759
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
760
+ library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
761
+ etc.)
762
+
763
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
764
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
765
+ and behavior.
766
+
767
+ Parameters:
768
+ config ([`ClvpConfig`]): Model configuration class with all the parameters of the model.
769
+ Initializing with a config file does not load the weights associated with the model, only the
770
+ configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
771
+ """
772
+
773
+
774
+ CLVP_INPUTS_DOCSTRING = r"""
775
+ Args:
776
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
777
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
778
+ it.
779
+
780
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
781
+ [`PreTrainedTokenizer.__call__`] for details.
782
+
783
+ [What are input IDs?](../glossary#input-ids)
784
+ input_features (`torch.FloatTensor` of shape `(batch_size, feature_size, time_dim)`):
785
+ Indicates log mel-spectrogram representations for audio returned by [`ClvpFeatureExtractor`].
786
+ conditioning_encoder_inputs_embeds (`torch.FloatTensor`, *optional*):
787
+ inputs_embeds for `ClvpConditioningEncoder`. Can be used in place of `input_ids`.
788
+ text_encoder_inputs_embeds (`torch.FloatTensor`, *optional*):
789
+ inputs_embeds for the text encoder model passed in place of `input_ids`.
790
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
791
+ Mask to avoid performing attention on padding text token indices. Mask values selected in `[0, 1]`:
792
+
793
+ - 1 for tokens that are **not masked**,
794
+ - 0 for tokens that are **masked**.
795
+
796
+ [What are attention masks?](../glossary#attention-mask)
797
+ return_loss (`bool`, *optional*):
798
+ Whether or not to return the contrastive loss.
799
+ output_attentions (`bool`, *optional*):
800
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
801
+ tensors for more detail.
802
+ output_hidden_states (`bool`, *optional*):
803
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
804
+ more detail.
805
+ return_dict (`bool`, *optional*):
806
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
807
+ """
808
+
809
+
810
+ CLVP_DECODER_INPUTS_DOCSTRING = r"""
811
+ Args:
812
+ input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`):
813
+ Indices of input sequence tokens in the vocabulary.
814
+
815
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
816
+ [`PreTrainedTokenizer.__call__`] for details.
817
+
818
+ [What are input IDs?](../glossary#input-ids)
819
+ past_key_values (`Tuple[Tuple[torch.Tensor]]` of length `config.n_layers`):
820
+ Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
821
+ `past_key_values` output below). Can be used to speed up sequential decoding. The `input_ids` which have
822
+ their past given to this model should not be passed as `input_ids` as they have already been computed.
823
+ attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
824
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
825
+
826
+ - 1 for tokens that are **not masked**,
827
+ - 0 for tokens that are **masked**.
828
+
829
+ If `past_key_values` is used, `attention_mask` needs to contain the masking strategy that was used for
830
+ `past_key_values`. In other words, the `attention_mask` always has to have the length:
831
+ `len(past_key_values) + len(input_ids)`
832
+
833
+ [What are attention masks?](../glossary#attention-mask)
834
+ token_type_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*):
835
+ Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,
836
+ 1]`:
837
+
838
+ - 0 corresponds to a *sentence A* token,
839
+ - 1 corresponds to a *sentence B* token.
840
+
841
+ [What are token type IDs?](../glossary#token-type-ids)
842
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
843
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
844
+ config.max_position_embeddings - 1]`.
845
+
846
+ [What are position IDs?](../glossary#position-ids)
847
+ head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
848
+ Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
849
+
850
+ - 1 indicates the head is **not masked**,
851
+ - 0 indicates the head is **masked**.
852
+
853
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
854
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
855
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
856
+ model's internal embedding lookup matrix.
857
+
858
+ If `past_key_values` is used, optionally only the last `inputs_embeds` have to be input (see
859
+ `past_key_values`).
860
+ use_cache (`bool`, *optional*):
861
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
862
+ `past_key_values`).
863
+ output_attentions (`bool`, *optional*):
864
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
865
+ tensors for more detail.
866
+ output_hidden_states (`bool`, *optional*):
867
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
868
+ more detail.
869
+ return_dict (`bool`, *optional*):
870
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
871
+ """
872
+
873
+
874
+ class ClvpEncoder(ClvpPreTrainedModel):
875
+ """
876
+ Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a
877
+ [`ClvpEncoderLayer`].
878
+
879
+ Args:
880
+ config: ClvpConfig
881
+ """
882
+
883
+ def __init__(self, config: ClvpConfig):
884
+ super().__init__(config)
885
+
886
+ self.config = config
887
+ self.token_embedding = nn.Embedding(config.vocab_size, config.hidden_size)
888
+ self.rotary_pos_emb = ClvpRotaryPositionalEmbedding(config) if config.use_rotary_embedding else None
889
+ self.layers = nn.ModuleList([ClvpEncoderLayer(config) for _ in range(config.num_hidden_layers)])
890
+
891
+ self.sequence_summary = SequenceSummary(config)
892
+ self.final_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
893
+
894
+ self.projection = nn.Linear(config.hidden_size, config.projection_dim, bias=False)
895
+
896
+ self.gradient_checkpointing = False
897
+
898
+ self.post_init()
899
+
900
+ def get_input_embeddings(self):
901
+ return self.token_embedding
902
+
903
+ def set_input_embeddings(self, value):
904
+ self.token_embedding = value
905
+
906
+ def forward(
907
+ self,
908
+ input_ids: Optional[torch.LongTensor] = None,
909
+ inputs_embeds: Optional[torch.LongTensor] = None,
910
+ attention_mask: Optional[torch.LongTensor] = None,
911
+ position_ids: Optional[torch.LongTensor] = None,
912
+ output_attentions: Optional[bool] = None,
913
+ output_hidden_states: Optional[bool] = None,
914
+ return_dict: Optional[bool] = None,
915
+ ) -> Union[Tuple, BaseModelOutput]:
916
+ r"""
917
+ Args:
918
+ input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*):
919
+ Indices of input sequence tokens in the vocabulary.
920
+
921
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
922
+ [`PreTrainedTokenizer.__call__`] for details.
923
+
924
+ [What are input IDs?](../glossary#input-ids)
925
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
926
+ input embeddings for the model. This bypasses the model's internal embedding lookup matrix.
927
+ attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
928
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
929
+
930
+ - 1 for tokens that are **not masked**,
931
+ - 0 for tokens that are **masked**.
932
+
933
+ [What are attention masks?](../glossary#attention-mask)
934
+ position_ids (`torch.LongTensor`, *optional*):
935
+ Denotes the position ids of `input_ids`.
936
+ output_attentions (`bool`, *optional*):
937
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
938
+ returned tensors for more detail.
939
+ output_hidden_states (`bool`, *optional*):
940
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
941
+ for more detail.
942
+ return_dict (`bool`, *optional*):
943
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
944
+ """
945
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
946
+ output_hidden_states = (
947
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
948
+ )
949
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
950
+
951
+ if input_ids is not None and inputs_embeds is not None:
952
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
953
+ elif input_ids is not None:
954
+ self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
955
+ input_shape = input_ids.size()
956
+ input_ids = input_ids.view(-1, input_shape[-1])
957
+ inputs_embeds = self.token_embedding(input_ids)
958
+ elif inputs_embeds is not None:
959
+ input_shape = inputs_embeds.size()[:-1]
960
+ else:
961
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
962
+
963
+ # expand attention_mask and create position_ids if needed
964
+ if attention_mask is not None:
965
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
966
+ attention_mask = _prepare_4d_attention_mask(attention_mask, inputs_embeds.dtype)
967
+
968
+ if position_ids is None:
969
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
970
+ position_ids = torch.arange(input_shape[1], dtype=torch.long, device=device)
971
+ position_ids = position_ids.unsqueeze(0)
972
+
973
+ encoder_states = () if output_hidden_states else None
974
+ all_attentions = () if output_attentions else None
975
+
976
+ rotary_pos_emb = self.rotary_pos_emb(inputs_embeds) if self.rotary_pos_emb is not None else None
977
+
978
+ hidden_states = inputs_embeds
979
+ for idx, encoder_layer in enumerate(self.layers):
980
+ if output_hidden_states:
981
+ encoder_states = encoder_states + (hidden_states,)
982
+ if self.gradient_checkpointing and self.training:
983
+ layer_outputs = torch.utils.checkpoint.checkpoint(
984
+ encoder_layer.__call__,
985
+ hidden_states,
986
+ rotary_pos_emb,
987
+ attention_mask,
988
+ position_ids,
989
+ )
990
+ else:
991
+ layer_outputs = encoder_layer(
992
+ hidden_states,
993
+ rotary_pos_emb,
994
+ attention_mask,
995
+ position_ids,
996
+ output_attentions=output_attentions,
997
+ )
998
+
999
+ hidden_states = layer_outputs[0]
1000
+
1001
+ if output_attentions:
1002
+ all_attentions = all_attentions + (layer_outputs[1],)
1003
+
1004
+ if output_hidden_states:
1005
+ encoder_states = encoder_states + (hidden_states,)
1006
+
1007
+ last_hidden_state = hidden_states
1008
+ last_hidden_state = self.final_layer_norm(last_hidden_state)
1009
+
1010
+ # take the mean over axis 1 and get pooled output
1011
+ pooled_output = self.sequence_summary(last_hidden_state)
1012
+
1013
+ # apply the projection layer
1014
+ embeds = self.projection(pooled_output)
1015
+
1016
+ if not return_dict:
1017
+ return tuple(
1018
+ v for v in [embeds, last_hidden_state, pooled_output, encoder_states, all_attentions] if v is not None
1019
+ )
1020
+
1021
+ return ClvpEncoderOutput(
1022
+ embeds=embeds,
1023
+ last_hidden_state=last_hidden_state,
1024
+ pooler_output=pooled_output,
1025
+ hidden_states=encoder_states,
1026
+ attentions=all_attentions,
1027
+ )
1028
+
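As the comment above notes, the encoder pools by taking the mean over the sequence axis and then projects the pooled vector. A minimal sketch of that pooling-and-projection step on toy tensors (the hidden size of 8 and projection dim of 4 are made-up values, not taken from any CLVP config):

```python
import torch
import torch.nn as nn

# Toy stand-in for the encoder output: batch of 2 sequences, 5 tokens, hidden size 8.
last_hidden_state = torch.randn(2, 5, 8)

# Emulate the mean pooling that the sequence summary performs over the token axis.
pooled_output = last_hidden_state.mean(dim=1)  # (2, 8)

# Mirror `nn.Linear(config.hidden_size, config.projection_dim, bias=False)`.
projection = nn.Linear(8, 4, bias=False)
embeds = projection(pooled_output)  # (2, 4)

print(pooled_output.shape, embeds.shape)
```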
1029
+
1030
+ class ClvpDecoder(ClvpPreTrainedModel):
1031
+ """
1032
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`ClvpDecoderLayer`]
1033
+ """
1034
+
1035
+ def __init__(self, config):
1036
+ super().__init__(config)
1037
+
1038
+ self.config = config
1039
+
1040
+ self.input_embeds_layer = nn.Embedding(self.config.vocab_size, self.config.hidden_size)
1041
+ self.position_embeds_layer = nn.Embedding(self.config.max_position_embeddings, self.config.hidden_size)
1042
+
1043
+ self.drop = nn.Dropout(self.config.embd_pdrop)
1044
+ self.layers = nn.ModuleList([ClvpDecoderLayer(self.config) for _ in range(self.config.num_hidden_layers)])
1045
+ self.layer_norm = nn.LayerNorm(self.config.hidden_size, eps=self.config.layer_norm_epsilon)
1046
+
1047
+ self.gradient_checkpointing = False
1048
+
1049
+ # Initialize weights and apply final processing
1050
+ self.post_init()
1051
+
1052
+ def get_input_embeddings(self):
1053
+ return self.input_embeds_layer
1054
+
1055
+ def set_input_embeddings(self, new_embeddings):
1056
+ self.input_embeds_layer = new_embeddings
1057
+
1058
+ def _prune_heads(self, heads_to_prune):
1059
+ """
1060
+ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer}
1061
+ """
1062
+ for layer, heads in heads_to_prune.items():
1063
+ self.layers[layer].attn.prune_heads(heads)
1064
+
1065
+ @add_start_docstrings_to_model_forward(CLVP_DECODER_INPUTS_DOCSTRING)
1066
+ def forward(
1067
+ self,
1068
+ input_ids: Optional[torch.LongTensor] = None,
1069
+ attention_mask: Optional[torch.FloatTensor] = None,
1070
+ token_type_ids: Optional[torch.LongTensor] = None,
1071
+ position_ids: Optional[torch.LongTensor] = None,
1072
+ head_mask: Optional[torch.FloatTensor] = None,
1073
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1074
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1075
+ use_cache: Optional[bool] = None,
1076
+ output_attentions: Optional[bool] = None,
1077
+ output_hidden_states: Optional[bool] = None,
1078
+ return_dict: Optional[bool] = None,
1079
+ ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]:
1080
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1081
+ output_hidden_states = (
1082
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1083
+ )
1084
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1085
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1086
+
1087
+ if input_ids is not None and inputs_embeds is not None:
1088
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
1089
+ elif input_ids is not None:
1090
+ self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
1091
+ input_shape = input_ids.size()
1092
+ input_ids = input_ids.view(-1, input_shape[-1])
1093
+ input_ids.shape[0]
1094
+ elif inputs_embeds is not None:
1095
+ input_shape = inputs_embeds.size()[:-1]
1096
+ inputs_embeds.shape[0]
1097
+ else:
1098
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
1099
+
1100
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
1101
+
1102
+ if token_type_ids is not None:
1103
+ token_type_ids = token_type_ids.view(-1, input_shape[-1])
1104
+
1105
+ if past_key_values is None:
1106
+ past_key_values_length = 0
1107
+ past_key_values = tuple([None] * len(self.layers))
1108
+ else:
1109
+ past_key_values_length = past_key_values[0][0].size(-2)
1110
+ if position_ids is None:
1111
+ position_ids = torch.arange(
1112
+ past_key_values_length, input_shape[-1] + past_key_values_length, dtype=torch.long, device=device
1113
+ )
1114
+ position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
1115
+
1116
+ if inputs_embeds is None:
1117
+ inputs_embeds = self.input_embeds_layer(input_ids)
1118
+ position_embeds = self.position_embeds_layer(position_ids)
1119
+ inputs_embeds = inputs_embeds + position_embeds
1120
+
1121
+ attention_mask = _prepare_4d_causal_attention_mask(
1122
+ attention_mask, input_shape, inputs_embeds, past_key_values_length
1123
+ )
1124
+
1125
+ # Prepare head mask if needed
1126
+ # 1.0 in head_mask indicate we keep the head
1127
+ # attention_probs has shape bsz x num_attention_heads x N x N
1128
+ # head_mask has shape num_hidden_layers x batch x num_attention_heads x N x N
1129
+ head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
1130
+
1131
+ hidden_states = inputs_embeds
1132
+
1133
+ if token_type_ids is not None:
1134
+ token_type_embeds = self.input_embeds_layer(token_type_ids)
1135
+ hidden_states = hidden_states + token_type_embeds
1136
+
1137
+ hidden_states = self.drop(hidden_states)
1138
+
1139
+ output_shape = (-1,) + input_shape[1:] + (hidden_states.size(-1),)
1140
+
1141
+ if self.gradient_checkpointing and self.training:
1142
+ if use_cache:
1143
+ logger.warning_once(
1144
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
1145
+ )
1146
+ use_cache = False
1147
+
1148
+ presents = () if use_cache else None
1149
+ all_self_attentions = () if output_attentions else None
1150
+ all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None
1151
+ all_hidden_states = () if output_hidden_states else None
1152
+ for i, (block, past_key_value) in enumerate(zip(self.layers, past_key_values)):
1153
+ if output_hidden_states:
1154
+ all_hidden_states = all_hidden_states + (hidden_states,)
1155
+
1156
+ if self.gradient_checkpointing and self.training:
1157
+ outputs = torch.utils.checkpoint.checkpoint(
1158
+ block.__call__,
1159
+ hidden_states,
1160
+ None,
1161
+ attention_mask,
1162
+ position_ids,
1163
+ head_mask[i],
1164
+ )
1165
+ else:
1166
+ outputs = block(
1167
+ hidden_states,
1168
+ past_key_value=past_key_value,
1169
+ attention_mask=attention_mask,
1170
+ position_ids=position_ids,
1171
+ head_mask=head_mask[i],
1172
+ use_cache=use_cache,
1173
+ output_attentions=output_attentions,
1174
+ )
1175
+
1176
+ hidden_states = outputs[0]
1177
+ if use_cache is True:
1178
+ presents = presents + (outputs[1],)
1179
+
1180
+ if output_attentions:
1181
+ all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],)
1182
+ if self.config.add_cross_attention:
1183
+ all_cross_attentions = all_cross_attentions + (outputs[3 if use_cache else 2],)
1184
+
1185
+ hidden_states = self.layer_norm(hidden_states)
1186
+
1187
+ hidden_states = hidden_states.view(output_shape)
1188
+
1189
+ # Add last hidden state
1190
+ if output_hidden_states:
1191
+ all_hidden_states = all_hidden_states + (hidden_states,)
1192
+
1193
+ if not return_dict:
1194
+ return tuple(
1195
+ v
1196
+ for v in [hidden_states, presents, all_hidden_states, all_self_attentions, all_cross_attentions]
1197
+ if v is not None
1198
+ )
1199
+
1200
+ return BaseModelOutputWithPastAndCrossAttentions(
1201
+ last_hidden_state=hidden_states,
1202
+ past_key_values=presents,
1203
+ hidden_states=all_hidden_states,
1204
+ attentions=all_self_attentions,
1205
+ cross_attentions=all_cross_attentions,
1206
+ )
1207
+
1208
+
1209
+ @add_start_docstrings(
1210
+ "The bare Clvp decoder model outputting raw hidden-states without any specific head on top.",
1211
+ CLVP_START_DOCSTRING,
1212
+ )
1213
+ class ClvpModel(ClvpPreTrainedModel):
1214
+ def __init__(self, config: ClvpDecoderConfig):
1215
+ super().__init__(config)
1216
+ self.config = config
1217
+ self.decoder = ClvpDecoder(self.config)
1218
+
1219
+ # Initialize weights and apply final processing
1220
+ self.post_init()
1221
+
1222
+ def get_input_embeddings(self):
1223
+ return self.decoder.input_embeds_layer
1224
+
1225
+ def set_input_embeddings(self, value):
1226
+ self.decoder.input_embeds_layer = value
1227
+
1228
+ def get_decoder(self):
1229
+ return self.decoder
1230
+
1231
+ @add_start_docstrings_to_model_forward(CLVP_DECODER_INPUTS_DOCSTRING)
1232
+ def forward(
1233
+ self,
1234
+ input_ids: Optional[torch.LongTensor] = None,
1235
+ attention_mask: Optional[torch.FloatTensor] = None,
1236
+ token_type_ids: Optional[torch.LongTensor] = None,
1237
+ position_ids: Optional[torch.LongTensor] = None,
1238
+ head_mask: Optional[torch.FloatTensor] = None,
1239
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1240
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1241
+ use_cache: Optional[bool] = None,
1242
+ output_attentions: Optional[bool] = None,
1243
+ output_hidden_states: Optional[bool] = None,
1244
+ return_dict: Optional[bool] = None,
1245
+ ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]:
1246
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1247
+ output_hidden_states = (
1248
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1249
+ )
1250
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1251
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1252
+
1253
+ # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
1254
+ decoder_outputs = self.decoder(
1255
+ input_ids=input_ids,
1256
+ attention_mask=attention_mask,
1257
+ token_type_ids=token_type_ids,
1258
+ position_ids=position_ids,
1259
+ head_mask=head_mask,
1260
+ past_key_values=past_key_values,
1261
+ inputs_embeds=inputs_embeds,
1262
+ use_cache=use_cache,
1263
+ output_attentions=output_attentions,
1264
+ output_hidden_states=output_hidden_states,
1265
+ return_dict=return_dict,
1266
+ )
1267
+
1268
+ if not return_dict:
1269
+ return decoder_outputs
1270
+
1271
+ return BaseModelOutputWithPastAndCrossAttentions(
1272
+ last_hidden_state=decoder_outputs.last_hidden_state,
1273
+ past_key_values=decoder_outputs.past_key_values,
1274
+ hidden_states=decoder_outputs.hidden_states,
1275
+ attentions=decoder_outputs.attentions,
1276
+ cross_attentions=decoder_outputs.cross_attentions,
1277
+ )
1278
+
1279
+
1280
+ @add_start_docstrings(
1281
+ "The CLVP decoder model with a language modelling head on top.",
1282
+ CLVP_START_DOCSTRING,
1283
+ )
1284
+ class ClvpForCausalLM(ClvpPreTrainedModel):
1285
+ def __init__(self, config):
1286
+ super().__init__(config)
1287
+
1288
+ self.config = config
1289
+ self.model = ClvpModel(self.config)
1290
+
1291
+ self.final_norm = nn.LayerNorm(self.config.hidden_size)
1292
+ self.lm_head = nn.Linear(self.config.hidden_size, self.config.vocab_size, bias=True)
1293
+
1294
+ # Initialize weights and apply final processing
1295
+ self.post_init()
1296
+
1297
+ def get_input_embeddings(self):
1298
+ return self.model.decoder.input_embeds_layer
1299
+
1300
+ def set_input_embeddings(self, new_embeddings):
1301
+ self.model.decoder.input_embeds_layer = new_embeddings
1302
+
1303
+ def _prepare_model_inputs(
1304
+ self,
1305
+ inputs: Optional[torch.Tensor] = None,
1306
+ bos_token_id: Optional[int] = None,
1307
+ model_kwargs: Optional[Dict[str, torch.Tensor]] = None,
1308
+ ) -> Tuple[torch.Tensor, Optional[str], Dict[str, torch.Tensor]]:
1309
+ """
1310
+ This function extracts the model-specific `inputs` for generation.
1311
+ """
1312
+ input_name = self.main_input_name
1313
+
1314
+ model_kwargs = {k: v for k, v in model_kwargs.items() if v is not None}
1315
+
1316
+ inputs_kwarg = model_kwargs.pop(input_name, None)
1317
+ if inputs_kwarg is not None and inputs is not None:
1318
+ raise ValueError(
1319
+ f"`inputs`: {inputs} were passed alongside {input_name} which is not allowed. "
1320
+ f"Make sure to either pass {inputs} or {input_name}=..."
1321
+ )
1322
+ elif inputs_kwarg is not None:
1323
+ inputs = inputs_kwarg
1324
+
1325
+ if input_name == "input_ids" and "inputs_embeds" in model_kwargs:
1326
+ model_kwargs["input_ids"] = self._maybe_initialize_input_ids_for_generation(
1327
+ inputs, bos_token_id, model_kwargs=model_kwargs
1328
+ )
1329
+ inputs, input_name = model_kwargs["inputs_embeds"], "inputs_embeds"
1330
+
1331
+ # Check if conditioning_embeds are provided or not, if yes then concatenate the bos_token_id at the end of the conditioning_embeds.
1332
+ # Then we must subtract the positional_ids because during the forward pass it will be added anyways, so we must cancel them out here.
1333
+ conditioning_embeds = model_kwargs.get("conditioning_embeds", None)
1334
+
1335
+ if conditioning_embeds is not None:
1336
+ mel_start_token_embedding = self.model.decoder.input_embeds_layer(
1337
+ torch.full(
1338
+ (conditioning_embeds.shape[0], 1),
1339
+ fill_value=self.config.bos_token_id,
1340
+ device=conditioning_embeds.device,
1341
+ )
1342
+ )
1343
+ mel_start_token_embedding += self.model.decoder.position_embeds_layer(
1344
+ torch.full((conditioning_embeds.shape[0], 1), fill_value=0, device=conditioning_embeds.device)
1345
+ )
1346
+ conditioning_embeds = torch.concat([conditioning_embeds, mel_start_token_embedding], dim=1)
1347
+
1348
+ # subtract the positional_ids here
1349
+ if "attention_mask" in model_kwargs:
1350
+ position_ids = model_kwargs["attention_mask"].long().cumsum(-1) - 1
1351
+ else:
1352
+ position_ids = torch.arange(
1353
+ 0, conditioning_embeds.shape[1], dtype=torch.long, device=conditioning_embeds.device
1354
+ )
1355
+ position_ids = position_ids.unsqueeze(0).repeat(conditioning_embeds.shape[0], 1)
1356
+
1357
+ model_kwargs["inputs_embeds"] = conditioning_embeds - self.model.decoder.position_embeds_layer(
1358
+ position_ids
1359
+ )
1360
+ model_kwargs["input_ids"] = (
1361
+ torch.ones((model_kwargs["inputs_embeds"].shape[0], 1), dtype=torch.long, device=self.device)
1362
+ * self.config.bos_token_id
1363
+ )
1364
+
1365
+ return model_kwargs["inputs_embeds"], "inputs_embeds", model_kwargs
1366
+
1367
+ inputs = self._maybe_initialize_input_ids_for_generation(inputs, bos_token_id, model_kwargs)
1368
+ return inputs, input_name, model_kwargs
1369
+
1370
+ def prepare_inputs_for_generation(
1371
+ self, input_ids, past_key_values=None, inputs_embeds=None, conditioning_embeds=None, **kwargs
1372
+ ):
1373
+ input_ids_length = input_ids.shape[-1]
1374
+ token_type_ids = kwargs.get("token_type_ids", None)
1375
+ # only last token for inputs_ids if past is defined in kwargs
1376
+ if past_key_values:
1377
+ past_length = past_key_values[0][0].shape[2]
1378
+
1379
+ # Some generation methods already pass only the last input ID
1380
+ if input_ids.shape[1] > past_length:
1381
+ remove_prefix_length = past_length
1382
+ else:
1383
+ # Default to old behavior: keep only final ID
1384
+ remove_prefix_length = input_ids.shape[1] - 1
1385
+
1386
+ input_ids = input_ids[:, remove_prefix_length:]
1387
+ if token_type_ids is not None:
1388
+ token_type_ids = token_type_ids[:, -input_ids.shape[1] :]
1389
+
1390
+ attention_mask = kwargs.get("attention_mask", None)
1391
+ position_ids = kwargs.get("position_ids", None)
1392
+
1393
+ if attention_mask is not None and position_ids is None:
1394
+ # create position_ids on the fly for batch generation
1395
+ position_ids = attention_mask.long().cumsum(-1) - 1
1396
+ position_ids.masked_fill_(attention_mask == 0, 1)
1397
+ if past_key_values:
1398
+ position_ids = position_ids[:, -1].unsqueeze(-1)
1399
+ else:
1400
+ position_ids = None
1401
+
1402
+ if conditioning_embeds is not None and past_key_values is not None:
1403
+ position_ids = torch.tensor([input_ids_length], dtype=torch.long, device=input_ids.device)
1404
+
1405
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
1406
+ if inputs_embeds is not None and past_key_values is None:
1407
+ model_inputs = {"inputs_embeds": inputs_embeds}
1408
+ else:
1409
+ model_inputs = {"input_ids": input_ids}
1410
+
1411
+ model_inputs.update(
1412
+ {
1413
+ "past_key_values": past_key_values,
1414
+ "use_cache": kwargs.get("use_cache"),
1415
+ "position_ids": position_ids,
1416
+ "token_type_ids": token_type_ids,
1417
+ }
1418
+ )
1419
+ return model_inputs
1420
+
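The `position_ids` recipe above (cumulative sum of the attention mask, with padded slots filled with a dummy value) can be checked in isolation. A small sketch on a hypothetical left-padded batch:

```python
import torch

# Toy left-padded batch: 0 marks padding, 1 marks real tokens.
attention_mask = torch.tensor([[0, 0, 1, 1, 1],
                               [1, 1, 1, 1, 1]])

# Cumulative sum minus one gives 0-based positions for the real tokens...
position_ids = attention_mask.long().cumsum(-1) - 1
# ...and padded slots are clamped to a harmless dummy position (1).
position_ids.masked_fill_(attention_mask == 0, 1)

print(position_ids)
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
```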
1421
+ @add_start_docstrings_to_model_forward(CLVP_DECODER_INPUTS_DOCSTRING)
1422
+ def forward(
1423
+ self,
1424
+ input_ids: Optional[torch.LongTensor] = None,
1425
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1426
+ attention_mask: Optional[torch.FloatTensor] = None,
1427
+ token_type_ids: Optional[torch.LongTensor] = None,
1428
+ position_ids: Optional[torch.LongTensor] = None,
1429
+ head_mask: Optional[torch.FloatTensor] = None,
1430
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1431
+ labels: Optional[torch.LongTensor] = None,
1432
+ use_cache: Optional[bool] = None,
1433
+ output_attentions: Optional[bool] = None,
1434
+ output_hidden_states: Optional[bool] = None,
1435
+ return_dict: Optional[bool] = None,
1436
+ ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]:
1437
+ r"""
1438
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1439
+ Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
1440
+ `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100`
1441
+ are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]`
1442
+ """
1443
+
1444
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1445
+ output_hidden_states = (
1446
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1447
+ )
1448
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1449
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1450
+
1451
+ outputs = self.model(
1452
+ input_ids=input_ids,
1453
+ past_key_values=past_key_values,
1454
+ attention_mask=attention_mask,
1455
+ token_type_ids=token_type_ids,
1456
+ position_ids=position_ids,
1457
+ head_mask=head_mask,
1458
+ inputs_embeds=inputs_embeds,
1459
+ use_cache=use_cache,
1460
+ output_attentions=output_attentions,
1461
+ output_hidden_states=output_hidden_states,
1462
+ return_dict=return_dict,
1463
+ )
1464
+
1465
+ hidden_states = outputs[0]
1466
+
1467
+ lm_logits = self.final_norm(hidden_states)
1468
+ lm_logits = self.lm_head(lm_logits)
1469
+
1470
+ loss = None
1471
+ if labels is not None:
1472
+ labels = labels.to(lm_logits.device)
1473
+ # Shift so that tokens < n predict n
1474
+ shift_logits = lm_logits[..., :-1, :].contiguous()
1475
+ shift_labels = labels[..., 1:].contiguous()
1476
+ # Flatten the tokens
1477
+ loss_fct = CrossEntropyLoss()
1478
+ loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
1479
+
1480
+ if not return_dict:
1481
+ output = (lm_logits,) + outputs[1:]
1482
+ return ((loss,) + output) if loss is not None else output
1483
+
1484
+ return CausalLMOutputWithCrossAttentions(
1485
+ loss=loss,
1486
+ logits=lm_logits,
1487
+ past_key_values=outputs.past_key_values,
1488
+ hidden_states=outputs.hidden_states,
1489
+ attentions=outputs.attentions,
1490
+ cross_attentions=outputs.cross_attentions,
1491
+ )
1492
+
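The loss above uses the standard causal-LM shift described in the docstring (labels "are shifted inside the model"). A toy reproduction of just that step, with made-up batch, sequence, and vocabulary sizes:

```python
import torch
from torch.nn import CrossEntropyLoss

# Toy causal-LM output: batch of 1, sequence of 4 tokens, vocabulary of 6.
lm_logits = torch.randn(1, 4, 6)
labels = torch.tensor([[2, 5, 0, 3]])

# "Shift so that tokens < n predict n": the logit at position t is scored against
# the label at position t + 1, so the last logit and the first label drop out.
shift_logits = lm_logits[..., :-1, :].contiguous()  # (1, 3, 6)
shift_labels = labels[..., 1:].contiguous()         # (1, 3)

loss = CrossEntropyLoss()(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
print(loss)
```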
1493
+ @staticmethod
1494
+ # Copied from transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel._reorder_cache
1495
+ def _reorder_cache(
1496
+ past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
1497
+ ) -> Tuple[Tuple[torch.Tensor]]:
1498
+ """
1499
+ This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
1500
+ [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
1501
+ beam_idx at every generation step.
1502
+ """
1503
+ return tuple(
1504
+ tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past)
1505
+ for layer_past in past_key_values
1506
+ )
1507
+
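For intuition, `_reorder_cache` only re-indexes the beam (batch) dimension of every cached key/value tensor. A sketch with hypothetical cache shapes:

```python
import torch

# Hypothetical cache for one layer: (key, value), each (num_beams, heads, seq_len, head_dim).
layer_past = (torch.randn(3, 2, 4, 8), torch.randn(3, 2, 4, 8))
past_key_values = (layer_past,)

# Suppose beam search decides beams 0 and 1 should continue from old beam 2, and beam 2 from old beam 0.
beam_idx = torch.tensor([2, 2, 0])

reordered = tuple(
    tuple(past_state.index_select(0, beam_idx) for past_state in layer_past)
    for layer_past in past_key_values
)
print(reordered[0][0].shape)  # still (3, 2, 4, 8), but rows now follow beam_idx
```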
1508
+
1509
+ @add_start_docstrings(
1510
+ "The composite CLVP model with a text encoder, speech encoder and speech decoder model. "
1511
+ "The speech decoder model generates the speech_ids from the text, and the text encoder and speech encoder work "
1512
+ "together to filter out the best speech_ids.",
1513
+ CLVP_START_DOCSTRING,
1514
+ )
1515
+ class ClvpModelForConditionalGeneration(ClvpPreTrainedModel):
1516
+ config_class = ClvpConfig
1517
+
1518
+ def __init__(self, config: ClvpConfig):
1519
+ super().__init__(config)
1520
+
1521
+ if not isinstance(config.text_config, ClvpEncoderConfig):
1522
+ raise ValueError(
1523
+ "config.text_config is expected to be of type `ClvpEncoderConfig` but is of type"
1524
+ f" {type(config.text_config)}."
1525
+ )
1526
+
1527
+ if not isinstance(config.speech_config, ClvpEncoderConfig):
1528
+ raise ValueError(
1529
+ "config.speech_config is expected to be of type `ClvpEncoderConfig` but is of type"
1530
+ f" {type(config.speech_config)}."
1531
+ )
1532
+
1533
+ if not isinstance(config.decoder_config, ClvpDecoderConfig):
1534
+ raise ValueError(
1535
+ "config.decoder_config is expected to be of type `ClvpDecoderConfig` but is of type"
1536
+ f" {type(config.decoder_config)}."
1537
+ )
1538
+
1539
+ self.conditioning_encoder = ClvpConditioningEncoder(config)
1540
+
1541
+ self.speech_decoder_model = ClvpForCausalLM(config.decoder_config)
1542
+
1543
+ self.text_encoder_model = ClvpEncoder(config.text_config)
1544
+ self.speech_encoder_model = ClvpEncoder(config.speech_config)
1545
+
1546
+ self.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value))
1547
+
1548
+ # Initialize weights and apply final processing
1549
+ self.post_init()
1550
+
1551
+ # taken from the original repo,
1552
+ # link : https://github.com/neonbjb/tortoise-tts/blob/4003544b6ff4b68c09856e04d3eff9da26d023c2/tortoise/api.py#L117
1553
+ def fix_speech_decoder_output(self, speech_ids: torch.LongTensor) -> torch.LongTensor:
1554
+ """
1555
+ This method modifies the output of the decoder model, such as replacing the `eos_token_id` and changing the
1556
+ last few tokens of each sequence.
1557
+
1558
+ Args:
1559
+ speech_ids (`torch.LongTensor`):
1560
+ This refers to the output of the decoder model.
1561
+ """
1562
+ decoder_fixing_codes = self.config.decoder_config.decoder_fixing_codes
1563
+ speech_ids = speech_ids[:, 1:]
1564
+
1565
+ stop_token_indices = torch.where(speech_ids == self.speech_decoder_model.config.eos_token_id, 1, 0)
1566
+ speech_ids = torch.masked_fill(speech_ids, mask=stop_token_indices.bool(), value=decoder_fixing_codes[0])
1567
+
1568
+ for i, each_seq_stop_token_index in enumerate(stop_token_indices):
1569
+ # This means that no stop tokens were found so the sentence was still being generated, in that case we don't need
1570
+ # to apply any padding so just skip to the next sequence of tokens.
1571
+ if each_seq_stop_token_index.sum() == 0:
1572
+ continue
1573
+
1574
+ stm = each_seq_stop_token_index.argmax()
1575
+ speech_ids[i, stm:] = decoder_fixing_codes[0]
1576
+ if stm - 3 < speech_ids.shape[1]:
1577
+ speech_ids[i, -3:] = torch.tensor(
1578
+ [decoder_fixing_codes[1:]], device=speech_ids.device, dtype=torch.long
1579
+ )
1580
+
1581
+ return speech_ids
1582
+
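To make `fix_speech_decoder_output` concrete, here is the same replace-and-pad logic applied to one toy sequence; the `eos_token_id` and `decoder_fixing_codes` values below are hypothetical stand-ins for what the decoder config would provide:

```python
import torch

eos_token_id = 7           # hypothetical stop token
decoder_fixing_codes = [0, 1, 2, 3]  # hypothetical fixing codes

# One generated sequence: a leading token, some speech ids, then eos and trailing junk.
speech_ids = torch.tensor([[9, 12, 14, 7, 5, 6]])

speech_ids = speech_ids[:, 1:]  # drop the first token
stop_token_indices = torch.where(speech_ids == eos_token_id, 1, 0)
speech_ids = speech_ids.masked_fill(stop_token_indices.bool(), decoder_fixing_codes[0])

for i, stops in enumerate(stop_token_indices):
    if stops.sum() == 0:
        continue
    stm = stops.argmax()                           # first stop position
    speech_ids[i, stm:] = decoder_fixing_codes[0]  # pad everything after it
    if stm - 3 < speech_ids.shape[1]:
        speech_ids[i, -3:] = torch.tensor(decoder_fixing_codes[1:], dtype=torch.long)

print(speech_ids)  # tensor([[12, 14, 1, 2, 3]])
```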
1583
+ def get_text_features(
1584
+ self,
1585
+ input_ids: Optional[torch.LongTensor] = None,
1586
+ text_encoder_inputs_embeds: Optional[torch.FloatTensor] = None,
1587
+ attention_mask: Optional[torch.LongTensor] = None,
1588
+ ) -> torch.FloatTensor:
1589
+ r"""
1590
+ This method can be used to extract text_embeds from a text. The text embeddings are obtained by applying the
1591
+ projection layer to the pooled output of the CLVP text encoder model.
1592
+
1593
+ Args:
1594
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
1595
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
1596
+ provide it.
1597
+
1598
+ [What are input IDs?](../glossary#input-ids)
1599
+ text_encoder_inputs_embeds (`torch.FloatTensor`, *optional*):
1600
+ inputs_embeds for the text encoder model passed in place of `input_ids`.
1601
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
1602
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
1603
+
1604
+ - 1 for tokens that are **not masked**,
1605
+ - 0 for tokens that are **masked**.
1606
+
1607
+ [What are attention masks?](../glossary#attention-mask)
1608
+
1609
+ Returns:
1610
+ `torch.FloatTensor` of shape `(batch_size, output_dim)`:
1611
+ The text embeddings obtained by applying the projection layer to the pooled output of the CLVP Text
1612
+ Model.
1613
+
1614
+ Examples:
1615
+
1616
+ ```python
1617
+ >>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
1618
+
1619
+ >>> # Define the Text
1620
+ >>> text = "This is an example text."
1621
+
1622
+ >>> # Define processor and model
1623
+ >>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
1624
+ >>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
1625
+
1626
+ >>> # Generate processor output and text embeds
1627
+ >>> processor_output = processor(text=text, return_tensors="pt")
1628
+ >>> text_embeds = model.get_text_features(input_ids=processor_output["input_ids"])
1629
+ ```
1630
+ """
1631
+
1632
+ outputs = self.text_encoder_model(
1633
+ input_ids=input_ids,
1634
+ inputs_embeds=text_encoder_inputs_embeds,
1635
+ attention_mask=attention_mask,
1636
+ )
1637
+
1638
+ return outputs[0]
1639
+
1640
+ def get_speech_features(
1641
+ self,
1642
+ speech_ids: Optional[torch.LongTensor] = None,
1643
+ input_ids: Optional[torch.LongTensor] = None,
1644
+ input_features: Optional[torch.FloatTensor] = None,
1645
+ conditioning_encoder_inputs_embeds: Optional[torch.FloatTensor] = None,
1646
+ attention_mask: Optional[torch.Tensor] = None,
1647
+ generation_config: Optional[GenerationConfig] = None,
1648
+ **kwargs,
1649
+ ) -> torch.FloatTensor:
1650
+ r"""
1651
+ This method can be used to extract speech_embeds. The speech embeddings are obtained by applying the speech
1652
+ model on speech_ids. If speech_ids is not present but both input_ids and input_features are given then the
1653
+ decoder model will be used to first generate the speech_ids and then the speech model will be applied.
1654
+
1655
+ Args:
1656
+ speech_ids (`torch.LongTensor` of shape `(batch_size, num_speech_ids)`, *optional*):
1657
+ Speech Tokens. Padding will be ignored by default should you provide it. If speech_ids are provided
1658
+ then input_ids and input_features will be automatically ignored.
1659
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1660
+ Input text Tokens. Processed from the [`ClvpTokenizer`]. If speech_ids is not provided, then input_ids
1661
+ and input_features will be used.
1662
+ input_features (`torch.FloatTensor` of shape `(batch_size, feature_size, time_dim)`, *optional*):
1663
+ Indicates log-melspectrogram representations for audio returned by [`ClvpFeatureExtractor`]. If
1664
+ speech_ids is not provided, then input_ids and input_features will be used.
1665
+ conditioning_encoder_inputs_embeds (`torch.FloatTensor`, *optional*):
1666
+ inputs_embeds for `ClvpConditioningEncoder`. Can be used in place of `input_ids`.
1667
+ attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1668
+ Mask to avoid performing attention on padding speech token indices. Mask values selected in `[0, 1]`:
1669
+
1670
+ - 1 for tokens that are **not masked**,
1671
+ - 0 for tokens that are **masked**.
1672
+
1673
+ [What are attention masks?](../glossary#attention-mask)
1674
+ generation_config (`GenerationConfig`, *optional*):
1675
+ generation config to control the generation of speech_ids if they are not provided.
1676
+
1677
+ Returns:
1678
+ `torch.FloatTensor` of shape `(batch_size, output_dim)`:
1679
+ The speech embeddings obtained by applying the projection layer to the pooled output of the CLVP Speech
1680
+ Model.
1681
+
1682
+ Examples:
1683
+
1684
+ ```python
1685
+ >>> import datasets
1686
+ >>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
1687
+
1688
+ >>> # Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using the `datasets` library)
1689
+ >>> text = "This is an example text."
1690
+ >>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
1691
+ >>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
1692
+ >>> _, audio, sr = ds.sort("id").select(range(1))[:1]["audio"][0].values()
1693
+
1694
+ >>> # Define processor and model
1695
+ >>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
1696
+ >>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
1697
+
1698
+ >>> # Generate processor output and model output
1699
+ >>> processor_output = processor(raw_speech=audio, sampling_rate=sr, text=text, return_tensors="pt")
1700
+ >>> speech_embeds = model.get_speech_features(
1701
+ ... input_ids=processor_output["input_ids"], input_features=processor_output["input_features"]
1702
+ ... )
1703
+ ```
1704
+ """
1705
+
1706
+ if speech_ids is None:
1707
+ if (input_ids is None and conditioning_encoder_inputs_embeds is None) or input_features is None:
1708
+ raise ValueError(
1709
+ "Either speech_ids or input_ids/conditioning_encoder_inputs_embeds and input_features must be provided."
1710
+ )
1711
+
1712
+ if generation_config is None:
1713
+ generation_config = self.generation_config
1714
+ generation_config.update(**kwargs)
1715
+
1716
+ conditioning_embeds = self.conditioning_encoder(
1717
+ input_features=input_features,
1718
+ input_ids=input_ids,
1719
+ inputs_embeds=conditioning_encoder_inputs_embeds,
1720
+ attention_mask=attention_mask,
1721
+ )
1722
+
1723
+ speech_ids = self.speech_decoder_model.generate(
1724
+ conditioning_embeds=conditioning_embeds,
1725
+ generation_config=generation_config,
1726
+ )
1727
+
1728
+ speech_ids = self.fix_speech_decoder_output(speech_ids[0])
1729
+
1730
+ outputs = self.speech_encoder_model(
1731
+ input_ids=speech_ids,
1732
+ attention_mask=attention_mask,
1733
+ )
1734
+
1735
+ return outputs[0]
1736
+
1737
+ @add_start_docstrings_to_model_forward(CLVP_INPUTS_DOCSTRING)
1738
+ @replace_return_docstrings(output_type=ClvpOutput, config_class=ClvpConfig)
1739
+ def forward(
1740
+ self,
1741
+ input_ids: torch.LongTensor = None,
1742
+ input_features: torch.FloatTensor = None,
1743
+ conditioning_encoder_inputs_embeds: Optional[torch.FloatTensor] = None,
1744
+ text_encoder_inputs_embeds: Optional[torch.FloatTensor] = None,
1745
+ attention_mask: Optional[torch.LongTensor] = None,
1746
+ return_loss: Optional[bool] = None,
1747
+ output_hidden_states: Optional[bool] = None,
1748
+ output_attentions: Optional[bool] = False,
1749
+ return_dict: Optional[bool] = None,
1750
+ ) -> Union[Tuple, ClvpOutput]:
1751
+ r"""
1752
+ Returns:
1753
+
1754
+ Examples:
1755
+
1756
+ ```python
1757
+ >>> import datasets
1758
+ >>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
1759
+
1760
+ >>> # Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using the `datasets` library)
1761
+ >>> text = "This is an example text."
1762
+
1763
+ >>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
1764
+ >>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
1765
+ >>> _, audio, sr = ds.sort("id").select(range(1))[:1]["audio"][0].values()
1766
+
1767
+ >>> # Define processor and model
1768
+ >>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
1769
+ >>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
1770
+
1771
+ >>> # processor outputs and model outputs
1772
+ >>> processor_output = processor(raw_speech=audio, sampling_rate=sr, text=text, return_tensors="pt")
1773
+ >>> outputs = model(
1774
+ ... input_ids=processor_output["input_ids"],
1775
+ ... input_features=processor_output["input_features"],
1776
+ ... return_dict=True,
1777
+ ... )
1778
+ ```
1779
+ """
1780
+
1781
+ # Use CLVP model's config for some fields (if specified) instead of those of speech & text components.
1782
+ output_hidden_states = (
1783
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1784
+ )
1785
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1786
+
1787
+ conditioning_embeds = self.conditioning_encoder(
1788
+ input_features=input_features,
1789
+ input_ids=input_ids,
1790
+ inputs_embeds=conditioning_encoder_inputs_embeds,
1791
+ attention_mask=attention_mask,
1792
+ )
1793
+
1794
+ decoder_outputs = self.speech_decoder_model(
1795
+ inputs_embeds=conditioning_embeds,
1796
+ output_hidden_states=output_hidden_states,
1797
+ return_dict=return_dict,
1798
+ )
1799
+
1800
+ speech_ids = decoder_outputs[0]
1801
+
1802
+ # since we will get the embeds of shape `(batch_size, seq_len, embedding_dim)` during the forward pass
1803
+ # we must convert it to tokens, to make it compatible with the speech transformer
1804
+ if speech_ids.ndim == 3:
1805
+ speech_ids = speech_ids.argmax(2)
1806
+ speech_ids = self.fix_speech_decoder_output(speech_ids)
1807
+
1808
+ speech_outputs = self.speech_encoder_model(
1809
+ input_ids=speech_ids,
1810
+ output_hidden_states=output_hidden_states,
1811
+ return_dict=return_dict,
1812
+ )
1813
+
1814
+ text_outputs = self.text_encoder_model(
1815
+ input_ids=input_ids,
1816
+ inputs_embeds=text_encoder_inputs_embeds,
1817
+ attention_mask=attention_mask,
1818
+ output_hidden_states=output_hidden_states,
1819
+ return_dict=return_dict,
1820
+ )
1821
+
1822
+ speech_embeds = speech_outputs[0]
1823
+ text_embeds = text_outputs[0]
1824
+
1825
+ # normalized features
1826
+ speech_embeds = speech_embeds / speech_embeds.norm(p=2, dim=-1, keepdim=True)
1827
+ text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)
1828
+
1829
+ # cosine similarity as logits
1830
+ logit_scale = self.logit_scale.exp()
1831
+ logits_per_text = torch.matmul(text_embeds, speech_embeds.t()) * logit_scale
1832
+ logits_per_speech = logits_per_text.t()
1833
+
1834
+ loss = None
1835
+ if return_loss:
1836
+ loss = clvp_loss(logits_per_text)
1837
+
1838
+ if not return_dict:
1839
+ output = (
1840
+ logits_per_speech,
1841
+ logits_per_text,
1842
+ text_embeds,
1843
+ speech_embeds,
1844
+ text_outputs[2],
1845
+ speech_outputs[2],
1846
+ )
1847
+ if output_hidden_states:
1848
+ output += (
1849
+ decoder_outputs[-1],
1850
+ text_outputs[-1],
1851
+ speech_outputs[-1],
1852
+ )
1853
+
1854
+ return ((loss,) + output) if loss is not None else output
1855
+
1856
+ return ClvpOutput(
1857
+ loss=loss,
1858
+ logits_per_speech=logits_per_speech,
1859
+ logits_per_text=logits_per_text,
1860
+ text_embeds=text_embeds,
1861
+ speech_embeds=speech_embeds,
1862
+ text_model_output=text_outputs[2],
1863
+ speech_model_output=speech_outputs[2],
1864
+ decoder_hidden_states=decoder_outputs.hidden_states,
1865
+ text_encoder_hidden_states=text_outputs.hidden_states,
1866
+ speech_encoder_hidden_states=speech_outputs.hidden_states,
1867
+ )
1868
+
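The tail of `forward` computes CLIP-style contrastive logits from the L2-normalized text and speech embeddings. A self-contained sketch of just that step, with made-up embedding sizes and an example logit scale (the model learns this parameter):

```python
import torch

# Hypothetical pooled embeddings: 3 texts and 3 speech clips, projection dim 4.
text_embeds = torch.randn(3, 4)
speech_embeds = torch.randn(3, 4)
logit_scale = torch.tensor(2.6592).exp()  # example value only

# L2-normalize, then a scaled dot product gives the similarity matrices.
text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)
speech_embeds = speech_embeds / speech_embeds.norm(p=2, dim=-1, keepdim=True)

logits_per_text = torch.matmul(text_embeds, speech_embeds.t()) * logit_scale  # (3, 3)
logits_per_speech = logits_per_text.t()

# Matching text/speech pairs sit on the diagonal, which is what the contrastive loss rewards.
print(logits_per_text.shape, logits_per_speech.shape)
```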
1869
+ @torch.no_grad()
1870
+ def generate(
1871
+ self,
1872
+ input_ids: torch.LongTensor = None,
1873
+ input_features: torch.FloatTensor = None,
1874
+ attention_mask: Optional[torch.LongTensor] = None,
1875
+ generation_config: Optional[GenerationConfig] = None,
1876
+ pad_to_max_mel_tokens: Optional[int] = None,
1877
+ output_hidden_states: Optional[bool] = None,
1878
+ **kwargs,
1879
+ ):
1880
+ """
1881
+ Generate method for `ClvpModelForConditionalGeneration`, this method calls the `generate` method of
1882
+ `ClvpForCausalLM` and then uses those generated `speech_ids` to process `text_embeds` and `speech_embeds` using
1883
+ `ClvpEncoder`.
1884
+
1885
+ Args:
1886
+ input_ids (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
1887
+ Input text Tokens. Processed from the [`ClvpTokenizer`].
1888
+ input_features (`torch.FloatTensor` of shape `(batch_size, feature_size, time_dim)`, *optional*):
1889
+ Indicates log-melspectrogram representations for audio returned by [`ClvpFeatureExtractor`].
1890
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
1891
+ Mask to avoid performing attention on padding text token indices. Mask values selected in `[0, 1]`:
1892
+
1893
+ - 1 for tokens that are **not masked**,
1894
+ - 0 for tokens that are **masked**.
1895
+
1896
+ [What are attention masks?](../glossary#attention-mask)
1897
+ generation_config (`~generation.GenerationConfig`, *optional*):
1898
+ The generation configuration to be used as base parametrization for the generation call. `**kwargs`
1899
+ passed to generate matching the attributes of `generation_config` will override them. If
1900
+ `generation_config` is not provided, the default will be used, which has the following loading
1901
+ priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model
1902
+ configuration. Please note that unspecified parameters will inherit [`~generation.GenerationConfig`]'s
1903
+ default values, whose documentation should be checked to parameterize generation.
1904
+ pad_to_max_mel_tokens (`int`, *optional*):
1905
+ Pads generated speech_ids to the specified value. This is to implement the same logic from the official
1906
+ repo, link: https://github.com/neonbjb/tortoise-tts/blob/80f89987a5abda5e2b082618cd74f9c7411141dc/tortoise/api.py#L430
1907
+ and to make sure the logits are the same.
1908
+ This does not affect generation quality, so avoid using it unless you need to match the original logits, since it is less efficient.
1909
+ output_hidden_states (`bool`, *optional*):
1910
+ Whether or not to return the hidden states of decoder model, text encoder and speech encoder models.
1911
+
1912
+ Returns:
1913
+ `ClvpOutput` or tuple: A `ClvpOutput` (if `return_dict_in_generate=True` or when
1914
+ `config.return_dict_in_generate=True`) or a tuple.
1915
+ """
1916
+
1917
+ # If the input sequences are longer than (self.config.decoder_config.max_text_tokens - 3) then raise an error,
1918
+ # because we need to add 3 tokens (1 bos token and 2 eos tokens) to the input_ids in ClvpConditioningEncoder to
1919
+ # properly sample
1920
+ sequence_length = input_ids.shape[-1]
1921
+ if sequence_length > (self.config.decoder_config.max_text_tokens - 3):
1922
+ raise ValueError(
1923
+ f"Maximum sequence length reached! Found input_ids of length {sequence_length}. "
1924
+ f"Please make sure that the maximum length of input_ids is {self.config.decoder_config.max_text_tokens - 3}."
1925
+ )
1926
+
1927
+ if generation_config is None:
1928
+ generation_config = self.generation_config
1929
+
1930
+ generation_config = copy.deepcopy(generation_config)
1931
+ model_kwargs = generation_config.update(**kwargs) # All unused kwargs must be model kwargs
1932
+ generation_config.validate()
1933
+ self._validate_model_kwargs(model_kwargs.copy())
1934
+
1935
+ # pad input_ids as specified in the original repo
1936
+ # link: https://github.com/neonbjb/tortoise-tts/blob/80f89987a5abda5e2b082618cd74f9c7411141dc/tortoise/api.py#L380
1937
+ input_ids, attention_mask = _pad_extra_bos_eos_tokens(
1938
+ input_ids,
1939
+ attention_mask,
1940
+ add_bos_token=False,
1941
+ bos_token_id=self.config.text_config.bos_token_id,
1942
+ eos_token_id=self.config.text_config.eos_token_id,
1943
+ )
1944
+
1945
+ conditioning_embeds = self.conditioning_encoder(
1946
+ input_features=input_features,
1947
+ input_ids=input_ids,
1948
+ attention_mask=attention_mask,
1949
+ )
1950
+
1951
+ decoder_outputs = self.speech_decoder_model.generate(
1952
+ conditioning_embeds=conditioning_embeds,
1953
+ generation_config=generation_config,
1954
+ output_hidden_states=output_hidden_states,
1955
+ return_dict=generation_config.return_dict_in_generate,
1956
+ )
1957
+ if isinstance(decoder_outputs, ModelOutput):
1958
+ speech_ids = decoder_outputs.sequences
1959
+
1960
+ # pad to pad_to_max_mel_tokens if given, to replicate the original repo logic
1961
+ # link: https://github.com/neonbjb/tortoise-tts/blob/80f89987a5abda5e2b082618cd74f9c7411141dc/tortoise/api.py#L430
1962
+ if pad_to_max_mel_tokens is not None:
1963
+ padding_needed = pad_to_max_mel_tokens - speech_ids.shape[-1]
1964
+ speech_ids = torch.nn.functional.pad(
1965
+ speech_ids, (0, padding_needed), value=self.generation_config.eos_token_id
1966
+ )
1967
+
1968
+ speech_ids = self.fix_speech_decoder_output(speech_ids)
1969
+
1970
+ speech_outputs = self.speech_encoder_model(
1971
+ input_ids=speech_ids,
1972
+ output_hidden_states=output_hidden_states,
1973
+ return_dict=generation_config.return_dict_in_generate,
1974
+ )
1975
+ text_outputs = self.text_encoder_model(
1976
+ input_ids=input_ids,
1977
+ attention_mask=attention_mask,
1978
+ output_hidden_states=output_hidden_states,
1979
+ return_dict=generation_config.return_dict_in_generate,
1980
+ )
1981
+
1982
+ speech_embeds = speech_outputs[0]
1983
+ text_embeds = text_outputs[0]
1984
+
1985
+ # normalized features
1986
+ speech_embeds = speech_embeds / speech_embeds.norm(p=2, dim=-1, keepdim=True)
1987
+ text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)
1988
+
1989
+ # cosine similarity as logits
1990
+ logit_scale = self.logit_scale.exp()
1991
+ logits_per_text = torch.matmul(text_embeds, speech_embeds.t()) * logit_scale
1992
+ logits_per_speech = logits_per_text.t()
1993
+
1994
+ if not generation_config.return_dict_in_generate:
1995
+ output = (
1996
+ speech_ids,
1997
+ logits_per_speech,
1998
+ logits_per_text,
1999
+ text_embeds,
2000
+ speech_embeds,
2001
+ text_outputs[2],
2002
+ speech_outputs[2],
2003
+ )
2004
+ if output_hidden_states:
2005
+ output += (
2006
+ decoder_outputs[-1],
2007
+ text_outputs[-1],
2008
+ speech_outputs[-1],
2009
+ )
2010
+
2011
+ return output
2012
+
2013
+ return ClvpOutput(
2014
+ speech_ids=speech_ids,
2015
+ logits_per_speech=logits_per_speech,
2016
+ logits_per_text=logits_per_text,
2017
+ text_embeds=text_embeds,
2018
+ speech_embeds=speech_embeds,
2019
+ text_model_output=text_outputs[2],
2020
+ speech_model_output=speech_outputs[2],
2021
+ decoder_hidden_states=decoder_outputs.hidden_states,
2022
+ text_encoder_hidden_states=text_outputs.hidden_states,
2023
+ speech_encoder_hidden_states=speech_outputs.hidden_states,
2024
+ )
env-llmeval/lib/python3.10/site-packages/transformers/models/clvp/number_normalizer.py ADDED
@@ -0,0 +1,238 @@
1
+ # coding=utf-8
2
+ # Copyright 2023 The HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ """English Normalizer class for CLVP."""
17
+
18
+
19
+ import re
20
+
21
+
22
+ class EnglishNormalizer:
23
+ def __init__(self):
24
+ # List of (regular expression, replacement) pairs for abbreviations:
25
+ self._abbreviations = [
26
+ (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
27
+ for x in [
28
+ ("mrs", "misess"),
29
+ ("mr", "mister"),
30
+ ("dr", "doctor"),
31
+ ("st", "saint"),
32
+ ("co", "company"),
33
+ ("jr", "junior"),
34
+ ("maj", "major"),
35
+ ("gen", "general"),
36
+ ("drs", "doctors"),
37
+ ("rev", "reverend"),
38
+ ("lt", "lieutenant"),
39
+ ("hon", "honorable"),
40
+ ("sgt", "sergeant"),
41
+ ("capt", "captain"),
42
+ ("esq", "esquire"),
43
+ ("ltd", "limited"),
44
+ ("col", "colonel"),
45
+ ("ft", "fort"),
46
+ ]
47
+ ]
48
+
49
+ self.ones = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
50
+ self.teens = [
51
+ "ten",
52
+ "eleven",
53
+ "twelve",
54
+ "thirteen",
55
+ "fourteen",
56
+ "fifteen",
57
+ "sixteen",
58
+ "seventeen",
59
+ "eighteen",
60
+ "nineteen",
61
+ ]
62
+ self.tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]
63
+
64
+ def number_to_words(self, num: int) -> str:
65
+ """
66
+ Converts numbers(`int`) to words(`str`).
67
+
68
+ Please note that it only supports up to "'nine hundred ninety-nine quadrillion, nine hundred ninety-nine
69
+ trillion, nine hundred ninety-nine billion, nine hundred ninety-nine million, nine hundred ninety-nine
70
+ thousand, nine hundred ninety-nine'" or `number_to_words(999_999_999_999_999_999)`.
71
+ """
72
+ if num == 0:
73
+ return "zero"
74
+ elif num < 0:
75
+ return "minus " + self.number_to_words(abs(num))
76
+ elif num < 10:
77
+ return self.ones[num]
78
+ elif num < 20:
79
+ return self.teens[num - 10]
80
+ elif num < 100:
81
+ return self.tens[num // 10] + ("-" + self.number_to_words(num % 10) if num % 10 != 0 else "")
82
+ elif num < 1000:
83
+ return (
84
+ self.ones[num // 100] + " hundred" + (" " + self.number_to_words(num % 100) if num % 100 != 0 else "")
85
+ )
86
+ elif num < 1_000_000:
87
+ return (
88
+ self.number_to_words(num // 1000)
89
+ + " thousand"
90
+ + (", " + self.number_to_words(num % 1000) if num % 1000 != 0 else "")
91
+ )
92
+ elif num < 1_000_000_000:
93
+ return (
94
+ self.number_to_words(num // 1_000_000)
95
+ + " million"
96
+ + (", " + self.number_to_words(num % 1_000_000) if num % 1_000_000 != 0 else "")
97
+ )
98
+ elif num < 1_000_000_000_000:
99
+ return (
100
+ self.number_to_words(num // 1_000_000_000)
101
+ + " billion"
102
+ + (", " + self.number_to_words(num % 1_000_000_000) if num % 1_000_000_000 != 0 else "")
103
+ )
104
+ elif num < 1_000_000_000_000_000:
105
+ return (
106
+ self.number_to_words(num // 1_000_000_000_000)
107
+ + " trillion"
108
+ + (", " + self.number_to_words(num % 1_000_000_000_000) if num % 1_000_000_000_000 != 0 else "")
109
+ )
110
+ elif num < 1_000_000_000_000_000_000:
111
+ return (
112
+ self.number_to_words(num // 1_000_000_000_000_000)
113
+ + " quadrillion"
114
+ + (
115
+ ", " + self.number_to_words(num % 1_000_000_000_000_000)
116
+ if num % 1_000_000_000_000_000 != 0
117
+ else ""
118
+ )
119
+ )
120
+ else:
121
+ return "number out of range"
122
+
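A few example calls that make the recursion above concrete (assuming the `EnglishNormalizer` class defined here is in scope):

```python
normalizer = EnglishNormalizer()

print(normalizer.number_to_words(0))     # "zero"
print(normalizer.number_to_words(42))    # "forty-two"
print(normalizer.number_to_words(999))   # "nine hundred ninety-nine"
print(normalizer.number_to_words(1234))  # "one thousand, two hundred thirty-four"
```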
123
+ def convert_to_ascii(self, text: str) -> str:
124
+ """
125
+ Converts unicode to ascii
126
+ """
127
+ return text.encode("ascii", "ignore").decode("utf-8")
128
+
129
+ def _expand_dollars(self, m: re.Match) -> str:
130
+ """
131
+ This method is used to expand numerical dollar values into spoken words.
132
+ """
133
+ match = m.group(1)
134
+ parts = match.split(".")
135
+ if len(parts) > 2:
136
+ return match + " dollars" # Unexpected format
137
+
138
+ dollars = int(parts[0]) if parts[0] else 0
139
+ cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
140
+ if dollars and cents:
141
+ dollar_unit = "dollar" if dollars == 1 else "dollars"
142
+ cent_unit = "cent" if cents == 1 else "cents"
143
+ return "%s %s, %s %s" % (dollars, dollar_unit, cents, cent_unit)
144
+ elif dollars:
145
+ dollar_unit = "dollar" if dollars == 1 else "dollars"
146
+ return "%s %s" % (dollars, dollar_unit)
147
+ elif cents:
148
+ cent_unit = "cent" if cents == 1 else "cents"
149
+ return "%s %s" % (cents, cent_unit)
150
+ else:
151
+ return "zero dollars"
152
+
153
+ def _remove_commas(self, m: re.Match) -> str:
154
+ """
155
+ This method is used to remove commas from sentences.
156
+ """
157
+ return m.group(1).replace(",", "")
158
+
159
+ def _expand_decimal_point(self, m: re.Match) -> str:
160
+ """
161
+ This method is used to expand '.' into spoken word ' point '.
162
+ """
163
+ return m.group(1).replace(".", " point ")
164
+
165
+ def _expand_ordinal(self, num: re.Match) -> str:
166
+ """
167
+ This method is used to expand ordinals such as '1st', '2nd' into spoken words.
168
+ """
169
+ ordinal_suffixes = {1: "st", 2: "nd", 3: "rd"}
170
+
171
+ num = int(num.group(0)[:-2])
172
+ if 10 <= num % 100 <= 20:
173
+ suffix = "th"
174
+ else:
175
+ suffix = ordinal_suffixes.get(num % 10, "th")
176
+ return self.number_to_words(num) + suffix
177
+
178
+ def _expand_number(self, m: re.Match) -> str:
179
+ """
180
+ This method acts as a preprocessing step for numbers between 1000 and 3000 (same as the original repository,
181
+ link :
182
+ https://github.com/neonbjb/tortoise-tts/blob/4003544b6ff4b68c09856e04d3eff9da26d023c2/tortoise/utils/tokenizer.py#L86)
183
+ """
184
+ num = int(m.group(0))
185
+
186
+ if 1000 < num < 3000:
187
+ if num == 2000:
188
+ return "two thousand"
189
+ elif 2000 < num < 2010:
190
+ return "two thousand " + self.number_to_words(num % 100)
191
+ elif num % 100 == 0:
192
+ return self.number_to_words(num // 100) + " hundred"
193
+ else:
194
+ return self.number_to_words(num)
195
+ else:
196
+ return self.number_to_words(num)
197
+
198
+ def normalize_numbers(self, text: str) -> str:
199
+ """
200
+ This method is used to normalize numbers within a text such as converting the numbers to words, removing
201
+ commas, etc.
202
+ """
203
+ text = re.sub(re.compile(r"([0-9][0-9\,]+[0-9])"), self._remove_commas, text)
204
+ text = re.sub(re.compile(r"£([0-9\,]*[0-9]+)"), r"\1 pounds", text)
205
+ text = re.sub(re.compile(r"\$([0-9\.\,]*[0-9]+)"), self._expand_dollars, text)
206
+ text = re.sub(re.compile(r"([0-9]+\.[0-9]+)"), self._expand_decimal_point, text)
207
+ text = re.sub(re.compile(r"[0-9]+(st|nd|rd|th)"), self._expand_ordinal, text)
208
+ text = re.sub(re.compile(r"[0-9]+"), self._expand_number, text)
209
+ return text
210
+
211
+ def expand_abbreviations(self, text: str) -> str:
212
+ """
213
+ Expands abbreviated words.
214
+ """
215
+ for regex, replacement in self._abbreviations:
216
+ text = re.sub(regex, replacement, text)
217
+ return text
218
+
219
+ def collapse_whitespace(self, text: str) -> str:
220
+ """
221
+ Collapses runs of whitespace into a single space.
222
+ """
223
+ return re.sub(re.compile(r"\s+"), " ", text)
224
+
225
+ def __call__(self, text):
226
+ """
227
+ Converts text to ascii, numbers / number-like quantities to their spelt-out counterparts and expands
228
+ abbreviations
229
+ """
230
+
231
+ text = self.convert_to_ascii(text)
232
+ text = text.lower()
233
+ text = self.normalize_numbers(text)
234
+ text = self.expand_abbreviations(text)
235
+ text = self.collapse_whitespace(text)
236
+ text = text.replace('"', "")
237
+
238
+ return text
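For reference, a minimal sketch of how the `EnglishNormalizer` defined above can be exercised. The import path assumes a transformers build that ships the CLVP model, and the printed output is approximate.

```python
# Illustrative sketch (assumes a transformers install that includes CLVP).
from transformers.models.clvp.number_normalizer import EnglishNormalizer

normalizer = EnglishNormalizer()
print(normalizer("Mr. Smith paid $42.50 for 2 books."))
# expected, roughly: "mister smith paid forty-two dollars, fifty cents for two books."
```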
env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (1.59 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/feature_extraction_convnext.cpython-310.pyc ADDED
Binary file (1.03 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/image_processing_convnext.cpython-310.pyc ADDED
Binary file (13.1 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/modeling_convnext.cpython-310.pyc ADDED
Binary file (17.9 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/models/convnext/__pycache__/modeling_tf_convnext.cpython-310.pyc ADDED
Binary file (22.1 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__init__.py ADDED
@@ -0,0 +1,60 @@
1
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ from typing import TYPE_CHECKING
16
+
17
+ from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
18
+
19
+
20
+ _import_structure = {"configuration_umt5": ["UMT5Config", "UMT5OnnxConfig"]}
21
+
22
+
23
+ try:
24
+ if not is_torch_available():
25
+ raise OptionalDependencyNotAvailable()
26
+ except OptionalDependencyNotAvailable:
27
+ pass
28
+ else:
29
+ _import_structure["modeling_umt5"] = [
30
+ "UMT5EncoderModel",
31
+ "UMT5ForConditionalGeneration",
32
+ "UMT5ForQuestionAnswering",
33
+ "UMT5ForSequenceClassification",
34
+ "UMT5ForTokenClassification",
35
+ "UMT5Model",
36
+ "UMT5PreTrainedModel",
37
+ ]
38
+
39
+ if TYPE_CHECKING:
40
+ from .configuration_umt5 import UMT5Config, UMT5OnnxConfig
41
+
42
+ try:
43
+ if not is_torch_available():
44
+ raise OptionalDependencyNotAvailable()
45
+ except OptionalDependencyNotAvailable:
46
+ pass
47
+ else:
48
+ from .modeling_umt5 import (
49
+ UMT5EncoderModel,
50
+ UMT5ForConditionalGeneration,
51
+ UMT5ForQuestionAnswering,
52
+ UMT5ForSequenceClassification,
53
+ UMT5ForTokenClassification,
54
+ UMT5Model,
55
+ UMT5PreTrainedModel,
56
+ )
57
+ else:
58
+ import sys
59
+
60
+ sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
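The lazy-import structure above only registers the modeling classes when PyTorch is available; the configuration is always importable. A small usage sketch, assuming both `transformers` and `torch` are installed:

```python
# Illustrative sketch of the lazy import structure above (assumes torch is installed).
from transformers import UMT5Config, UMT5ForConditionalGeneration

config = UMT5Config()                          # defaults mirror google/umt5-small-sized settings
model = UMT5ForConditionalGeneration(config)   # resolved through _LazyModule on first access
print(model.config.d_model)                    # 512
```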
env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (974 Bytes). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__pycache__/configuration_umt5.cpython-310.pyc ADDED
Binary file (6.46 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__pycache__/convert_umt5_checkpoint_to_pytorch.cpython-310.pyc ADDED
Binary file (8.42 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/__pycache__/modeling_umt5.cpython-310.pyc ADDED
Binary file (53 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/configuration_umt5.py ADDED
@@ -0,0 +1,177 @@
1
+ # coding=utf-8
2
+ # Copyright 2023, The T5 Authors and HuggingFace Inc.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """ UMT5 model configuration"""
16
+ from typing import Mapping
17
+
18
+ from ...configuration_utils import PretrainedConfig
19
+ from ...onnx import OnnxSeq2SeqConfigWithPast
20
+ from ...utils import logging
21
+
22
+
23
+ logger = logging.get_logger(__name__)
24
+
25
+ UMT5_PRETRAINED_CONFIG_ARCHIVE_MAP = {
26
+ "google/umt5-small": "https://huggingface.co/google/umt5-small/resolve/main/config.json",
27
+ # See all umt5 models at https://huggingface.co/models?filter=umt5
28
+ }
29
+
30
+
31
+ class UMT5Config(PretrainedConfig):
32
+ r"""
33
+ This is the configuration class to store the configuration of a [`UMT5Model`]. It is used to instantiate a UMT5
34
+ model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
35
+ defaults will yield a similar configuration to that of the UMT5
36
+ [google/umt5-small](https://huggingface.co/google/umt5-small) architecture.
37
+
38
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
39
+ documentation from [`PretrainedConfig`] for more information.
40
+
41
+ Arguments:
42
+ vocab_size (`int`, *optional*, defaults to 250112):
43
+ Vocabulary size of the UMT5 model. Defines the number of different tokens that can be represented by the
44
+ `input_ids` passed when calling [`UMT5Model`].
45
+ d_model (`int`, *optional*, defaults to 512):
46
+ Size of the encoder layers and the pooler layer.
47
+ d_kv (`int`, *optional*, defaults to 64):
48
+ Size of the key, query, value projections per attention head. `d_kv` has to be equal to `d_model //
49
+ num_heads`.
50
+ d_ff (`int`, *optional*, defaults to 1024):
51
+ Size of the intermediate feed forward layer in each `UMT5Block`.
52
+ num_layers (`int`, *optional*, defaults to 8):
53
+ Number of hidden layers in the Transformer encoder.
54
+ num_decoder_layers (`int`, *optional*):
55
+ Number of hidden layers in the Transformer decoder. Will use the same value as `num_layers` if not set.
56
+ num_heads (`int`, *optional*, defaults to 6):
57
+ Number of attention heads for each attention layer in the Transformer encoder.
58
+ relative_attention_num_buckets (`int`, *optional*, defaults to 32):
59
+ The number of buckets to use for each attention layer.
60
+ relative_attention_max_distance (`int`, *optional*, defaults to 128):
61
+ The maximum distance of the longer sequences for the bucket separation.
62
+ dropout_rate (`float`, *optional*, defaults to 0.1):
63
+ The ratio for all dropout layers.
64
+ classifier_dropout (`float`, *optional*, defaults to 0.0):
65
+ The dropout ratio for classifier.
66
+ layer_norm_epsilon (`float`, *optional*, defaults to 1e-6):
67
+ The epsilon used by the layer normalization layers.
68
+ initializer_factor (`float`, *optional*, defaults to 1):
69
+ A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
70
+ testing).
71
+ feed_forward_proj (`string`, *optional*, defaults to `"gated-gelu"`):
72
+ Type of feed forward layer to be used. Should be one of `"relu"` or `"gated-gelu"`.
73
+ use_cache (`bool`, *optional*, defaults to `True`):
74
+ Whether or not the model should return the last key/values attentions (not used by all models).
75
+ """
76
+
77
+ model_type = "umt5"
78
+ keys_to_ignore_at_inference = ["past_key_values"]
79
+ attribute_map = {"hidden_size": "d_model", "num_attention_heads": "num_heads", "num_hidden_layers": "num_layers"}
80
+
81
+ def __init__(
82
+ self,
83
+ vocab_size=250112,
84
+ d_model=512,
85
+ d_kv=64,
86
+ d_ff=1024,
87
+ num_layers=8,
88
+ num_decoder_layers=None,
89
+ num_heads=6,
90
+ relative_attention_num_buckets=32,
91
+ relative_attention_max_distance=128,
92
+ dropout_rate=0.1,
93
+ layer_norm_epsilon=1e-6,
94
+ initializer_factor=1.0,
95
+ feed_forward_proj="gated-gelu",
96
+ is_encoder_decoder=True,
97
+ use_cache=True,
98
+ tokenizer_class="T5Tokenizer",
99
+ tie_word_embeddings=True,
100
+ pad_token_id=0,
101
+ eos_token_id=1,
102
+ decoder_start_token_id=0,
103
+ classifier_dropout=0.0,
104
+ **kwargs,
105
+ ):
106
+ self.vocab_size = vocab_size
107
+ self.d_model = d_model
108
+ self.d_kv = d_kv
109
+ self.d_ff = d_ff
110
+ self.num_layers = num_layers
111
+ self.num_decoder_layers = (
112
+ num_decoder_layers if num_decoder_layers is not None else self.num_layers
113
+ ) # default = symmetry
114
+ self.num_heads = num_heads
115
+ self.relative_attention_num_buckets = relative_attention_num_buckets
116
+ self.relative_attention_max_distance = relative_attention_max_distance
117
+ self.dropout_rate = dropout_rate
118
+ self.classifier_dropout = classifier_dropout
119
+ self.layer_norm_epsilon = layer_norm_epsilon
120
+ self.initializer_factor = initializer_factor
121
+ self.feed_forward_proj = feed_forward_proj
122
+ self.use_cache = use_cache
123
+
124
+ act_info = self.feed_forward_proj.split("-")
125
+ self.dense_act_fn = act_info[-1]
126
+ self.is_gated_act = act_info[0] == "gated"
127
+
128
+ if (len(act_info) > 1 and act_info[0] != "gated") or len(act_info) > 2:
129
+ raise ValueError(
130
+ f"`feed_forward_proj`: {feed_forward_proj} is not a valid activation function of the dense layer. "
131
+ "Please make sure `feed_forward_proj` is of the format `gated-{ACT_FN}` or `{ACT_FN}`, e.g. "
132
+ "'gated-gelu' or 'relu'"
133
+ )
134
+
135
+ if feed_forward_proj == "gated-gelu":
136
+ self.dense_act_fn = "gelu_new"
137
+
138
+ super().__init__(
139
+ is_encoder_decoder=is_encoder_decoder,
140
+ tokenizer_class=tokenizer_class,
141
+ tie_word_embeddings=tie_word_embeddings,
142
+ pad_token_id=pad_token_id,
143
+ eos_token_id=eos_token_id,
144
+ decoder_start_token_id=decoder_start_token_id,
145
+ **kwargs,
146
+ )
147
+
148
+
149
+ class UMT5OnnxConfig(OnnxSeq2SeqConfigWithPast):
150
+ @property
151
+ # Copied from transformers.models.t5.configuration_t5.T5OnnxConfig.inputs
152
+ def inputs(self) -> Mapping[str, Mapping[int, str]]:
153
+ common_inputs = {
154
+ "input_ids": {0: "batch", 1: "encoder_sequence"},
155
+ "attention_mask": {0: "batch", 1: "encoder_sequence"},
156
+ }
157
+ if self.use_past:
158
+ common_inputs["attention_mask"][1] = "past_encoder_sequence + sequence"
159
+ common_inputs["decoder_input_ids"] = {0: "batch"}
160
+ common_inputs["decoder_attention_mask"] = {0: "batch", 1: "past_decoder_sequence + sequence"}
161
+ else:
162
+ common_inputs["decoder_input_ids"] = {0: "batch", 1: "decoder_sequence"}
163
+ common_inputs["decoder_attention_mask"] = {0: "batch", 1: "decoder_sequence"}
164
+
165
+ if self.use_past:
166
+ self.fill_with_past_key_values_(common_inputs, direction="inputs")
167
+
168
+ return common_inputs
169
+
170
+ @property
171
+ # Copied from transformers.models.t5.configuration_t5.T5OnnxConfig.default_onnx_opset
172
+ def default_onnx_opset(self) -> int:
173
+ return 13
174
+
175
+ @property
176
+ def atol_for_validation(self) -> float:
177
+ return 5e-4
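A short sketch of how `UMT5Config` parses `feed_forward_proj` and what the ONNX config above exposes. Import paths assume an installed transformers with the UMT5 model; the printed values follow from the code above.

```python
# Illustrative sketch of the configuration logic above.
from transformers import UMT5Config
from transformers.models.umt5.configuration_umt5 import UMT5OnnxConfig

config = UMT5Config(feed_forward_proj="gated-gelu")
print(config.is_gated_act)    # True  -> the gated (wi_0 / wi_1) feed-forward variant is used
print(config.dense_act_fn)    # "gelu_new" (special-cased for "gated-gelu")

onnx_config = UMT5OnnxConfig(config)
print(list(onnx_config.inputs))
# ['input_ids', 'attention_mask', 'decoder_input_ids', 'decoder_attention_mask']
```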
env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py ADDED
@@ -0,0 +1,274 @@
1
+ # coding=utf-8
2
+ # Copyright 2023 Google LLC and HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """
16
+ Convert T5X checkpoint to PyTorch
17
+
18
+ Steps:
19
+ - Install gsutil according to https://cloud.google.com/storage/docs/gsutil_install
20
+ - Get a T5X checkpoint at https://github.com/google-research/t5x/blob/main/docs/models.md#t5-11-checkpoints Example:
21
+ `gsutil -m cp -r gs://t5-data/pretrained_models/t5x/t5_1_1_small $HOME/`
22
+ - Create or download a corresponding config for the downloaded model. E.g. for T5 v1.1 small, you can use
23
+ https://huggingface.co/google/t5-v1_1-small/blob/main/config.json
24
+ - Convert:
25
+ ```
26
+ python3 convert_umt5_checkpoint_to_pytorch.py --t5x_checkpoint_path=$HOME/t5_1_1_small --config_file=config.json\
27
+ --pytorch_dump_path=$HOME/t5_1_1_small_pt
28
+ ```
29
+ """
30
+
31
+ import argparse
32
+ import collections
33
+
34
+ import numpy as np
35
+ import torch
36
+ from flax import traverse_util
37
+ from t5x import checkpoints
38
+
39
+ from transformers import MT5Config, UMT5EncoderModel, UMT5ForConditionalGeneration
40
+ from transformers.utils import logging
41
+
42
+
43
+ logging.set_verbosity_info()
44
+
45
+
46
+ def t5x_relpos_bias_lookup(params, i, prefix):
47
+ """Returns the Relative Position Bias parameters of a layer. Does not transpose."""
48
+ return params[f"{prefix}/{prefix}/relpos_bias/rel_embedding"][:, i, :]
49
+
50
+
51
+ def t5x_attention_lookup(params, i, prefix, layer_name="attention"):
52
+ """Returns the KOQV parameters of (self-)attention. Does not transpose."""
53
+ k_tmp = np.ascontiguousarray(params[f"{prefix}/{prefix}/{layer_name}/key/kernel"][:, i, :, :])
54
+ k = k_tmp.reshape(k_tmp.shape[0], k_tmp.shape[1] * k_tmp.shape[2])
55
+ o_tmp = np.ascontiguousarray(params[f"{prefix}/{prefix}/{layer_name}/out/kernel"][:, i, :, :])
56
+ o = o_tmp.reshape(o_tmp.shape[0] * o_tmp.shape[1], o_tmp.shape[2])
57
+ q_tmp = np.ascontiguousarray(params[f"{prefix}/{prefix}/{layer_name}/query/kernel"][:, i, :, :])
58
+ q = q_tmp.reshape(q_tmp.shape[0], q_tmp.shape[1] * q_tmp.shape[2])
59
+ v_tmp = np.ascontiguousarray(params[f"{prefix}/{prefix}/{layer_name}/value/kernel"][:, i, :, :])
60
+ v = v_tmp.reshape(v_tmp.shape[0], v_tmp.shape[1] * v_tmp.shape[2])
61
+ return k, o, q, v
62
+
63
+
64
+ def t5x_mlp_lookup(params, i, prefix, split_mlp_wi=False):
65
+ """Returns the MLP parameters of a layer. Does not transpose."""
66
+ if split_mlp_wi:
67
+ wi_0 = params[f"{prefix}/{prefix}/mlp/wi_0/kernel"][:, i, :]
68
+ wi_1 = params[f"{prefix}/{prefix}/mlp/wi_1/kernel"][:, i, :]
69
+ wi = (wi_0, wi_1)
70
+ else:
71
+ wi = params[f"{prefix}/{prefix}/mlp/wi/kernel"][:, i, :]
72
+
73
+ wo = params[f"{prefix}/{prefix}/mlp/wo/kernel"][:, i, :]
74
+ return wi, wo
75
+
76
+
77
+ def t5x_layer_norm_lookup(params, i, prefix, layer_name):
78
+ """Returns the layer norm param of a layer."""
79
+ return params[f"{prefix}/{prefix}/{layer_name}/scale"][:, i]
80
+
81
+
82
+ def convert_t5x_to_pytorch(
83
+ variables: dict, *, num_layers: int, is_encoder_only: bool, scalable_attention: bool = False
84
+ ):
85
+ """Converts the parameters from T5X-Flax to Transformers-PyTorch."""
86
+ old = traverse_util.flatten_dict(variables["target"])
87
+ old = {"/".join(k): v for k, v in old.items()}
88
+
89
+ # v1.1 models have a gated GeLU with wi_0 and wi_1 instead of wi
90
+ split_mlp_wi = "encoder/encoder/mlp/wi_0/kernel" in old
91
+ print("Split MLP:", split_mlp_wi)
92
+
93
+ new = collections.OrderedDict()
94
+
95
+ # Shared embeddings.
96
+ new["shared.weight"] = old["token_embedder/embedding"]
97
+
98
+ # Encoder.
99
+ for i in range(num_layers):
100
+ # Block i, layer 0 (Self Attention).
101
+ layer_norm = t5x_layer_norm_lookup(old, i, "encoder", "pre_attention_layer_norm")
102
+ k, o, q, v = t5x_attention_lookup(old, i, "encoder", "attention")
103
+ new[f"encoder.block.{i}.layer.0.layer_norm.weight"] = layer_norm
104
+ new[f"encoder.block.{i}.layer.0.SelfAttention.k.weight"] = k.T
105
+ new[f"encoder.block.{i}.layer.0.SelfAttention.o.weight"] = o.T
106
+ new[f"encoder.block.{i}.layer.0.SelfAttention.q.weight"] = q.T
107
+ new[f"encoder.block.{i}.layer.0.SelfAttention.v.weight"] = v.T
108
+
109
+ # Block i, layer 1 (MLP).
110
+ layer_norm = t5x_layer_norm_lookup(old, i, "encoder", "pre_mlp_layer_norm")
111
+ wi, wo = t5x_mlp_lookup(old, i, "encoder", split_mlp_wi)
112
+ new[f"encoder.block.{i}.layer.1.layer_norm.weight"] = layer_norm
113
+ if split_mlp_wi:
114
+ new[f"encoder.block.{i}.layer.1.DenseReluDense.wi_0.weight"] = wi[0].T
115
+ new[f"encoder.block.{i}.layer.1.DenseReluDense.wi_1.weight"] = wi[1].T
116
+ else:
117
+ new[f"encoder.block.{i}.layer.1.DenseReluDense.wi.weight"] = wi.T
118
+ new[f"encoder.block.{i}.layer.1.DenseReluDense.wo.weight"] = wo.T
119
+ if scalable_attention:
120
+ # convert the rel_embedding of each layer
121
+ new[f"encoder.block.{i}.layer.0.SelfAttention.relative_attention_bias.weight"] = t5x_relpos_bias_lookup(
122
+ old, i, "encoder"
123
+ ).T
124
+
125
+ new["encoder.final_layer_norm.weight"] = old["encoder/encoder_norm/scale"]
126
+
127
+ if not scalable_attention:
128
+ new["encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight"] = t5x_relpos_bias_lookup(
129
+ old, 0, "encoder"
130
+ ).T
131
+ new["decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight"] = t5x_relpos_bias_lookup(
132
+ old, 0, "decoder"
133
+ ).T
134
+
135
+ if not is_encoder_only:
136
+ # Decoder.
137
+ for i in range(num_layers):
138
+ # Block i, layer 0 (Self Attention).
139
+ layer_norm = t5x_layer_norm_lookup(old, i, "decoder", "pre_self_attention_layer_norm")
140
+ k, o, q, v = t5x_attention_lookup(old, i, "decoder", "self_attention")
141
+ new[f"decoder.block.{i}.layer.0.layer_norm.weight"] = layer_norm
142
+ new[f"decoder.block.{i}.layer.0.SelfAttention.k.weight"] = k.T
143
+ new[f"decoder.block.{i}.layer.0.SelfAttention.o.weight"] = o.T
144
+ new[f"decoder.block.{i}.layer.0.SelfAttention.q.weight"] = q.T
145
+ new[f"decoder.block.{i}.layer.0.SelfAttention.v.weight"] = v.T
146
+
147
+ # Block i, layer 1 (Cross Attention).
148
+ layer_norm = t5x_layer_norm_lookup(old, i, "decoder", "pre_cross_attention_layer_norm")
149
+ k, o, q, v = t5x_attention_lookup(old, i, "decoder", "encoder_decoder_attention")
150
+ new[f"decoder.block.{i}.layer.1.layer_norm.weight"] = layer_norm
151
+ new[f"decoder.block.{i}.layer.1.EncDecAttention.k.weight"] = k.T
152
+ new[f"decoder.block.{i}.layer.1.EncDecAttention.o.weight"] = o.T
153
+ new[f"decoder.block.{i}.layer.1.EncDecAttention.q.weight"] = q.T
154
+ new[f"decoder.block.{i}.layer.1.EncDecAttention.v.weight"] = v.T
155
+
156
+ # Block i, layer 2 (MLP).
157
+ layer_norm = t5x_layer_norm_lookup(old, i, "decoder", "pre_mlp_layer_norm")
158
+ wi, wo = t5x_mlp_lookup(old, i, "decoder", split_mlp_wi)
159
+ new[f"decoder.block.{i}.layer.2.layer_norm.weight"] = layer_norm
160
+ if split_mlp_wi:
161
+ new[f"decoder.block.{i}.layer.2.DenseReluDense.wi_0.weight"] = wi[0].T
162
+ new[f"decoder.block.{i}.layer.2.DenseReluDense.wi_1.weight"] = wi[1].T
163
+ else:
164
+ new[f"encoder.block.{i}.layer.2.DenseReluDense.wi.weight"] = wi.T
165
+ new[f"decoder.block.{i}.layer.2.DenseReluDense.wo.weight"] = wo.T
166
+
167
+ if scalable_attention:
168
+ # convert the rel_embedding of each layer
169
+ new[
170
+ f"decoder.block.{i}.layer.0.SelfAttention.relative_attention_bias.weight"
171
+ ] = t5x_relpos_bias_lookup(old, i, "decoder").T
172
+
173
+ new["decoder.final_layer_norm.weight"] = old["decoder/decoder_norm/scale"]
174
+
175
+ # LM Head (only in v1.1 checkpoints, in v1.0 embeddings are used instead)
176
+ if "decoder/logits_dense/kernel" in old:
177
+ new["lm_head.weight"] = old["decoder/logits_dense/kernel"].T
178
+
179
+ return new
180
+
181
+
182
+ def make_state_dict(converted_params, is_encoder_only: bool):
183
+ """Prepares a state dict for the PyTorch model."""
184
+ # Make a state dict with torch tensors.
185
+ state_dict = collections.OrderedDict([(k, torch.from_numpy(v.copy())) for (k, v) in converted_params.items()])
186
+
187
+ # Add what is missing.
188
+ if "encoder.embed_tokens.weight" not in state_dict:
189
+ state_dict["encoder.embed_tokens.weight"] = state_dict["shared.weight"]
190
+
191
+ if not is_encoder_only:
192
+ if "decoder.embed_tokens.weight" not in state_dict:
193
+ state_dict["decoder.embed_tokens.weight"] = state_dict["shared.weight"]
194
+
195
+ if "lm_head.weight" not in state_dict: # For old 1.0 models.
196
+ print("Using shared word embeddings as lm_head.")
197
+ state_dict["lm_head.weight"] = state_dict["shared.weight"]
198
+
199
+ return state_dict
200
+
201
+
202
+ def load_t5x_weights_in_t5(model, config, t5x_checkpoint_path, is_encoder_only, scalable_attention):
203
+ """Replaces the params in model witht the T5X converted params."""
204
+ variables = checkpoints.load_t5x_checkpoint(t5x_checkpoint_path)
205
+ converted = convert_t5x_to_pytorch(
206
+ variables, num_layers=config.num_layers, is_encoder_only=is_encoder_only, scalable_attention=scalable_attention
207
+ )
208
+ state_dict = make_state_dict(converted, is_encoder_only)
209
+ model.load_state_dict(state_dict, strict=True)
210
+
211
+
212
+ def convert_t5x_checkpoint_to_pytorch(
213
+ t5x_checkpoint_path,
214
+ config_file,
215
+ pytorch_dump_path,
216
+ is_encoder_only: bool = False,
217
+ scalable_attention: bool = False,
218
+ ):
219
+ """Loads the config and model, converts the T5X checkpoint, and saves a PyTorch checkpoint."""
220
+ # Initialise PyTorch model
221
+ config = MT5Config.from_json_file(config_file)
222
+ print(f"Building PyTorch model from configuration: {config}")
223
+ # Non-v1.1 checkpoints could also use T5Model, but this works for all.
224
+ # The v1.0 checkpoints will simply have an LM head that is the word embeddings.
225
+ if is_encoder_only:
226
+ model = UMT5EncoderModel(config)
227
+ else:
228
+ model = UMT5ForConditionalGeneration(config)
229
+
230
+ # Load weights from tf checkpoint
231
+ load_t5x_weights_in_t5(model, config, t5x_checkpoint_path, is_encoder_only, scalable_attention)
232
+
233
+ # Save pytorch-model
234
+ print(f"Save PyTorch model to {pytorch_dump_path}")
235
+ model.save_pretrained(pytorch_dump_path)
236
+
237
+ # Verify that we can load the checkpoint.
238
+ model.from_pretrained(pytorch_dump_path)
239
+ print("Done")
240
+
241
+
242
+ if __name__ == "__main__":
243
+ parser = argparse.ArgumentParser(description="Converts a native T5X checkpoint into a PyTorch checkpoint.")
244
+ # Required parameters
245
+ parser.add_argument(
246
+ "--t5x_checkpoint_path", default=None, type=str, required=True, help="Path to the T5X checkpoint."
247
+ )
248
+ parser.add_argument(
249
+ "--config_file",
250
+ default=None,
251
+ type=str,
252
+ required=True,
253
+ help="The config json file corresponding to the pre-trained T5 model.\nThis specifies the model architecture.",
254
+ )
255
+ parser.add_argument(
256
+ "--pytorch_dump_path", default=None, type=str, required=True, help="Path to the output PyTorch model."
257
+ )
258
+ parser.add_argument(
259
+ "--is_encoder_only", action="store_true", help="Check if the model is encoder-decoder model", default=False
260
+ )
261
+ parser.add_argument(
262
+ "--scalable_attention",
263
+ action="store_true",
264
+ help="Whether the model uses scaled attention (umt5 model)",
265
+ default=False,
266
+ )
267
+ args = parser.parse_args()
268
+ convert_t5x_checkpoint_to_pytorch(
269
+ args.t5x_checkpoint_path,
270
+ args.config_file,
271
+ args.pytorch_dump_path,
272
+ args.is_encoder_only,
273
+ args.scalable_attention,
274
+ )
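The attention lookups above fold T5X's per-head kernels into 2-D matrices before transposing them into PyTorch's `(out_features, in_features)` layout. A shape-only sketch with toy dimensions; the stacked `(d_model, num_layers, n_heads, d_kv)` layout is inferred from the `[:, i, :, :]` indexing used above.

```python
# Shape-only illustration of the reshape performed by t5x_attention_lookup() above.
import numpy as np

d_model, num_layers, n_heads, d_kv = 512, 8, 6, 64           # umt5-small-like toy sizes
stacked_kernel = np.zeros((d_model, num_layers, n_heads, d_kv), dtype=np.float32)

i = 3                                                         # layer index
k_tmp = np.ascontiguousarray(stacked_kernel[:, i, :, :])      # (d_model, n_heads, d_kv)
k = k_tmp.reshape(k_tmp.shape[0], k_tmp.shape[1] * k_tmp.shape[2])  # (d_model, n_heads * d_kv)
print(k.T.shape)  # (384, 512) -> matches nn.Linear(d_model, inner_dim).weight in UMT5Attention
```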
env-llmeval/lib/python3.10/site-packages/transformers/models/umt5/modeling_umt5.py ADDED
@@ -0,0 +1,1857 @@
1
+ # coding=utf-8
2
+ # Copyright 2023 Mesh TensorFlow authors, T5 Authors and HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """ PyTorch UMT5 model."""
16
+
17
+ import copy
18
+ import math
19
+ from typing import List, Optional, Tuple, Union
20
+
21
+ import torch
22
+ from torch import nn
23
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
24
+
25
+ from ...activations import ACT2FN
26
+ from ...modeling_outputs import (
27
+ BaseModelOutput,
28
+ BaseModelOutputWithPastAndCrossAttentions,
29
+ Seq2SeqLMOutput,
30
+ Seq2SeqModelOutput,
31
+ Seq2SeqQuestionAnsweringModelOutput,
32
+ Seq2SeqSequenceClassifierOutput,
33
+ TokenClassifierOutput,
34
+ )
35
+ from ...modeling_utils import PreTrainedModel
36
+ from ...utils import (
37
+ DUMMY_INPUTS,
38
+ DUMMY_MASK,
39
+ add_start_docstrings,
40
+ add_start_docstrings_to_model_forward,
41
+ is_torch_fx_proxy,
42
+ logging,
43
+ replace_return_docstrings,
44
+ )
45
+ from .configuration_umt5 import UMT5Config
46
+
47
+
48
+ logger = logging.get_logger(__name__)
49
+
50
+ _CONFIG_FOR_DOC = "UMT5Config"
51
+ _CHECKPOINT_FOR_DOC = "google/umt5-small"
52
+
53
+
54
+ # Copied from transformers.models.t5.modeling_t5.T5LayerNorm with T5->UMT5
55
+ class UMT5LayerNorm(nn.Module):
56
+ def __init__(self, hidden_size, eps=1e-6):
57
+ """
58
+ Construct a layernorm module in the UMT5 style. No bias and no subtraction of mean.
59
+ """
60
+ super().__init__()
61
+ self.weight = nn.Parameter(torch.ones(hidden_size))
62
+ self.variance_epsilon = eps
63
+
64
+ def forward(self, hidden_states):
65
+ # UMT5 uses a layer_norm which only scales and doesn't shift, which is also known as Root Mean
66
+ # Square Layer Normalization https://arxiv.org/abs/1910.07467, thus variance is calculated
67
+ # w/o mean and there is no bias. Additionally we want to make sure that the accumulation for
68
+ # half-precision inputs is done in fp32
69
+
70
+ variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
71
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
72
+
73
+ # convert into half-precision if necessary
74
+ if self.weight.dtype in [torch.float16, torch.bfloat16]:
75
+ hidden_states = hidden_states.to(self.weight.dtype)
76
+
77
+ return self.weight * hidden_states
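The layer norm above is RMS-style: it rescales by the root-mean-square of the features, with no mean subtraction and no bias. A quick numeric check of that formula:

```python
# Illustrative check of the RMS-style normalization above (weight omitted, i.e. all ones).
import torch

x = torch.tensor([[3.0, 4.0]])                  # rms = sqrt((9 + 16) / 2) = sqrt(12.5)
variance = x.pow(2).mean(-1, keepdim=True)      # 12.5
print(x * torch.rsqrt(variance + 1e-6))         # ~ tensor([[0.8485, 1.1314]])
```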
78
+
79
+
80
+ # Copied from transformers.models.t5.modeling_t5.T5DenseActDense with T5->UMT5
81
+ class UMT5DenseActDense(nn.Module):
82
+ def __init__(self, config: UMT5Config):
83
+ super().__init__()
84
+ self.wi = nn.Linear(config.d_model, config.d_ff, bias=False)
85
+ self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)
86
+ self.dropout = nn.Dropout(config.dropout_rate)
87
+ self.act = ACT2FN[config.dense_act_fn]
88
+
89
+ def forward(self, hidden_states):
90
+ hidden_states = self.wi(hidden_states)
91
+ hidden_states = self.act(hidden_states)
92
+ hidden_states = self.dropout(hidden_states)
93
+ if (
94
+ isinstance(self.wo.weight, torch.Tensor)
95
+ and hidden_states.dtype != self.wo.weight.dtype
96
+ and self.wo.weight.dtype != torch.int8
97
+ ):
98
+ hidden_states = hidden_states.to(self.wo.weight.dtype)
99
+ hidden_states = self.wo(hidden_states)
100
+ return hidden_states
101
+
102
+
103
+ # Copied from transformers.models.t5.modeling_t5.T5DenseGatedActDense with T5->UMT5
104
+ class UMT5DenseGatedActDense(nn.Module):
105
+ def __init__(self, config: UMT5Config):
106
+ super().__init__()
107
+ self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False)
108
+ self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False)
109
+ self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)
110
+ self.dropout = nn.Dropout(config.dropout_rate)
111
+ self.act = ACT2FN[config.dense_act_fn]
112
+
113
+ def forward(self, hidden_states):
114
+ hidden_gelu = self.act(self.wi_0(hidden_states))
115
+ hidden_linear = self.wi_1(hidden_states)
116
+ hidden_states = hidden_gelu * hidden_linear
117
+ hidden_states = self.dropout(hidden_states)
118
+
119
+ # To make 8bit quantization work for google/flan-t5-xxl, self.wo is kept in float32.
120
+ # See https://github.com/huggingface/transformers/issues/20287
121
+ # we also make sure the weights are not in `int8` in case users will force `_keep_in_fp32_modules` to be `None``
122
+ if (
123
+ isinstance(self.wo.weight, torch.Tensor)
124
+ and hidden_states.dtype != self.wo.weight.dtype
125
+ and self.wo.weight.dtype != torch.int8
126
+ ):
127
+ hidden_states = hidden_states.to(self.wo.weight.dtype)
128
+
129
+ hidden_states = self.wo(hidden_states)
130
+ return hidden_states
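The gated variant above runs the input through two parallel projections, applies the activation to one branch only, and multiplies the branches element-wise before the output projection. A minimal sketch of the same dataflow; plain `gelu` stands in for `ACT2FN["gelu_new"]` here for brevity.

```python
# Minimal sketch of the gated-activation dataflow in UMT5DenseGatedActDense.forward above.
import torch
from torch import nn

d_model, d_ff = 4, 6
wi_0 = nn.Linear(d_model, d_ff, bias=False)   # "gate" branch, passed through the activation
wi_1 = nn.Linear(d_model, d_ff, bias=False)   # plain linear branch
wo = nn.Linear(d_ff, d_model, bias=False)

x = torch.randn(1, 3, d_model)
y = wo(nn.functional.gelu(wi_0(x)) * wi_1(x))
print(y.shape)  # torch.Size([1, 3, 4])
```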
131
+
132
+
133
+ # Copied from transformers.models.t5.modeling_t5.T5LayerFF with T5->UMT5
134
+ class UMT5LayerFF(nn.Module):
135
+ def __init__(self, config: UMT5Config):
136
+ super().__init__()
137
+ if config.is_gated_act:
138
+ self.DenseReluDense = UMT5DenseGatedActDense(config)
139
+ else:
140
+ self.DenseReluDense = UMT5DenseActDense(config)
141
+
142
+ self.layer_norm = UMT5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
143
+ self.dropout = nn.Dropout(config.dropout_rate)
144
+
145
+ def forward(self, hidden_states):
146
+ forwarded_states = self.layer_norm(hidden_states)
147
+ forwarded_states = self.DenseReluDense(forwarded_states)
148
+ hidden_states = hidden_states + self.dropout(forwarded_states)
149
+ return hidden_states
150
+
151
+
152
+ class UMT5Attention(nn.Module):
153
+ """
154
+ T5's attention using relative_attention_bias.
155
+ """
156
+
157
+ def __init__(self, config, has_relative_attention_bias=False):
158
+ super().__init__()
159
+ self.is_decoder = config.is_decoder
160
+ self.has_relative_attention_bias = has_relative_attention_bias
161
+ self.relative_attention_num_buckets = config.relative_attention_num_buckets
162
+ self.relative_attention_max_distance = config.relative_attention_max_distance
163
+ self.d_model = config.d_model
164
+ self.key_value_proj_dim = config.d_kv
165
+ self.n_heads = config.num_heads
166
+ self.dropout = config.dropout_rate
167
+ self.inner_dim = self.n_heads * self.key_value_proj_dim
168
+
169
+ # Mesh TensorFlow initialization to avoid scaling before softmax
170
+ self.q = nn.Linear(self.d_model, self.inner_dim, bias=False)
171
+ self.k = nn.Linear(self.d_model, self.inner_dim, bias=False)
172
+ self.v = nn.Linear(self.d_model, self.inner_dim, bias=False)
173
+ self.o = nn.Linear(self.inner_dim, self.d_model, bias=False)
174
+
175
+ if self.has_relative_attention_bias:
176
+ self.relative_attention_bias = nn.Embedding(self.relative_attention_num_buckets, self.n_heads)
177
+ self.pruned_heads = set()
178
+
179
+ def _shape(self, projection: torch.Tensor) -> torch.Tensor:
180
+ new_projection_shape = projection.size()[:-1] + (self.n_heads, self.key_value_proj_dim)
181
+ # move heads to 2nd position (B, T, H * D) -> (B, T, H, D) -> (B, H, T, D)
182
+ new_projection = projection.view(new_projection_shape).permute(0, 2, 1, 3)
183
+ return new_projection
184
+
185
+ def _relative_position_bucket(self, relative_position):
186
+ """
187
+ Adapted from Mesh Tensorflow:
188
+ https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
189
+
190
+ Translate relative position to a bucket number for relative attention. The relative position is defined as
191
+ memory_position - query_position, i.e. the distance in tokens from the attending position to the attended-to
192
+ position. If bidirectional=False, then positive relative positions are invalid. We use smaller buckets for
193
+ small absolute relative_position and larger buckets for larger absolute relative_positions. All relative
194
+ positions >=max_distance map to the same bucket. All relative positions <=-max_distance map to the same bucket.
195
+ This should allow for more graceful generalization to longer sequences than the model has been trained on
196
+
197
+ Args:
198
+ relative_position: an int32 Tensor
199
+ bidirectional: a boolean - whether the attention is bidirectional
200
+ num_buckets: an integer
201
+ max_distance: an integer
202
+
203
+ Returns:
204
+ a Tensor with the same shape as relative_position, containing int32 values in the range [0, num_buckets)
205
+ """
206
+ relative_buckets = 0
207
+ num_buckets = self.relative_attention_num_buckets
208
+ max_distance = self.relative_attention_max_distance
209
+ if not self.is_decoder:
210
+ num_buckets //= 2
211
+ relative_buckets += (relative_position > 0).to(torch.long) * num_buckets
212
+ relative_position = torch.abs(relative_position)
213
+ else:
214
+ relative_position = -torch.min(relative_position, torch.zeros_like(relative_position))
215
+ # now relative_position is in the range [0, inf)
216
+
217
+ # half of the buckets are for exact increments in positions
218
+ max_exact = num_buckets // 2
219
+ is_small = relative_position < max_exact
220
+
221
+ # The other half of the buckets are for logarithmically bigger bins in positions up to max_distance
222
+ log_ratio = torch.log(relative_position.float() / max_exact) / math.log(max_distance / max_exact)
223
+ log_ratio = log_ratio * (num_buckets - max_exact)
224
+ relative_position_if_large = max_exact + log_ratio.to(torch.long)
225
+ relative_position_if_large = torch.min(
226
+ relative_position_if_large, torch.full_like(relative_position_if_large, num_buckets - 1)
227
+ )
228
+
229
+ relative_buckets += torch.where(is_small, relative_position, relative_position_if_large)
230
+ return relative_buckets
231
+
232
+ def compute_bias(self, query_length, key_length, device=None):
233
+ """Compute binned relative position bias"""
234
+ if device is None:
235
+ device = self.relative_attention_bias.weight.device
236
+ context_position = torch.arange(query_length, dtype=torch.long, device=device)[:, None]
237
+ memory_position = torch.arange(key_length, dtype=torch.long, device=device)[None, :]
238
+ relative_position = memory_position - context_position # shape (query_length, key_length)
239
+ relative_position_bucket = self._relative_position_bucket(relative_position)
240
+ values = self.relative_attention_bias(relative_position_bucket) # shape (query_length, key_length, num_heads)
241
+ values = values.permute([2, 0, 1]).unsqueeze(0) # shape (1, num_heads, query_length, key_length)
242
+ return values
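A small worked example of the relative-position bucketing and bias computation above, using deliberately tiny sizes; the concrete dimensions here are illustrative assumptions, not defaults.

```python
# Illustrative sketch of _relative_position_bucket / compute_bias above, with toy sizes.
import torch
from transformers import UMT5Config
from transformers.models.umt5.modeling_umt5 import UMT5Attention

config = UMT5Config(
    d_model=8, d_kv=2, num_heads=4,
    relative_attention_num_buckets=8, relative_attention_max_distance=16,
    is_decoder=False,
)
attn = UMT5Attention(config, has_relative_attention_bias=True)

context = torch.arange(5)[:, None]   # query positions
memory = torch.arange(5)[None, :]    # key positions
print(attn._relative_position_bucket(memory - context))  # bucket ids; future keys land in the upper half
print(attn.compute_bias(5, 5).shape)                     # torch.Size([1, 4, 5, 5])
```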
243
+
244
+ def forward(
245
+ self,
246
+ hidden_states: torch.Tensor,
247
+ encoder_hidden_states: Optional[torch.Tensor] = None,
248
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
249
+ attention_mask: Optional[torch.Tensor] = None,
250
+ layer_head_mask: Optional[torch.Tensor] = None,
251
+ ):
252
+ is_cross_attention = encoder_hidden_states is not None
253
+ batch_size, seq_length = hidden_states.shape[:2]
254
+
255
+ # use encoder_hidden_states if cross attention
256
+ current_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
257
+ # checking that the `sequence_length` of the `past_key_value` is the same as the provided
258
+ # `encoder_hidden_states` to support prefix tuning
259
+ if is_cross_attention and past_key_value and past_key_value[0].shape[2] == current_states.shape[1]:
260
+ # reuse k,v, cross_attentions
261
+ key_states = past_key_value[0]
262
+ value_states = past_key_value[1]
263
+ else:
264
+ key_states = self._shape(self.k(current_states))
265
+ value_states = self._shape(self.v(current_states))
266
+ if past_key_value is not None and not is_cross_attention:
267
+ # reuse k, v, self_attention
268
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
269
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
270
+
271
+ query_states = self._shape(self.q(hidden_states))
272
+ attention_scores = torch.matmul(query_states, key_states.transpose(-1, -2))
273
+
274
+ # compute positional bias
275
+ if self.has_relative_attention_bias:
276
+ query_length = seq_length
277
+ if past_key_value is not None:
278
+ query_length += past_key_value[0].shape[2]
279
+ position_bias = self.compute_bias(query_length, key_states.size(2), device=attention_scores.device)
280
+ else:
281
+ position_bias = torch.zeros(
282
+ (1, self.n_heads, seq_length, key_states.size(2)),
283
+ device=attention_scores.device,
284
+ dtype=attention_scores.dtype,
285
+ requires_grad=self.training,
286
+ )
287
+ if past_key_value is not None:
288
+ position_bias = position_bias[:, :, -hidden_states.size(1) :, :]
289
+ if attention_mask is not None:
290
+ position_bias = position_bias + attention_mask # (batch_size, n_heads, seq_length, key_length)
291
+
292
+ if self.is_decoder:
293
+ # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.
294
+ # Further calls to cross_attention layer can then reuse all cross-attention
295
+ # key/value_states (first "if" case)
296
+ # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of
297
+ # all previous decoder key/value_states. Further calls to uni-directional self-attention
298
+ # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
299
+ # if encoder bi-directional self-attention `past_key_value` is always `None`
300
+ past_key_value = (key_states, value_states)
301
+
302
+ attention_scores += position_bias
303
+ # (batch_size, n_heads, seq_length, key_length)
304
+ attn_weights = nn.functional.softmax(attention_scores.float(), dim=-1).type_as(attention_scores)
305
+ attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
306
+
307
+ # Mask heads if we want to
308
+ if layer_head_mask is not None:
309
+ attn_weights = attn_weights * layer_head_mask
310
+
311
+ # attn_output = torch.bmm(attn_probs, value_states) ?
312
+ context_states = torch.matmul(attn_weights, value_states)
313
+ # attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim) ?
314
+ context_states = context_states.permute(0, 2, 1, 3).contiguous().view(batch_size, seq_length, -1)
315
+ attn_output = self.o(context_states)
316
+ return attn_output, attn_weights, past_key_value
317
+
318
+
319
+ class UMT5LayerSelfAttention(nn.Module):
320
+ def __init__(self, config):
321
+ super().__init__()
322
+ self.SelfAttention = UMT5Attention(config, has_relative_attention_bias=True)
323
+ self.layer_norm = UMT5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
324
+ self.dropout = nn.Dropout(config.dropout_rate)
325
+
326
+ def forward(
327
+ self,
328
+ hidden_states,
329
+ attention_mask=None,
330
+ layer_head_mask=None,
331
+ past_key_value=None,
332
+ ):
333
+ normed_hidden_states = self.layer_norm(hidden_states)
334
+ attention_output = self.SelfAttention(
335
+ normed_hidden_states,
336
+ attention_mask=attention_mask,
337
+ layer_head_mask=layer_head_mask,
338
+ past_key_value=past_key_value,
339
+ )
340
+ hidden_states = hidden_states + self.dropout(attention_output[0])
341
+ outputs = (hidden_states,) + attention_output[1:] # add attentions if we output them
342
+ return outputs
343
+
344
+
345
+ class UMT5LayerCrossAttention(nn.Module):
346
+ def __init__(self, config):
347
+ super().__init__()
348
+ self.EncDecAttention = UMT5Attention(config, has_relative_attention_bias=False)
349
+ self.layer_norm = UMT5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
350
+ self.dropout = nn.Dropout(config.dropout_rate)
351
+
352
+ def forward(
353
+ self,
354
+ hidden_states,
355
+ encoder_hidden_states=None,
356
+ attention_mask=None,
357
+ layer_head_mask=None,
358
+ past_key_value=None,
359
+ ):
360
+ normed_hidden_states = self.layer_norm(hidden_states)
361
+ attention_output = self.EncDecAttention(
362
+ normed_hidden_states,
363
+ encoder_hidden_states=encoder_hidden_states,
364
+ attention_mask=attention_mask,
365
+ layer_head_mask=layer_head_mask,
366
+ past_key_value=past_key_value,
367
+ )
368
+ layer_output = hidden_states + self.dropout(attention_output[0])
369
+ outputs = (layer_output,) + attention_output[1:] # add attentions if we output them
370
+ return outputs
371
+
372
+
373
+ class UMT5Block(nn.Module):
374
+ def __init__(self, config):
375
+ super().__init__()
376
+ self.is_decoder = config.is_decoder
377
+ self.layer = nn.ModuleList()
378
+ self.layer.append(UMT5LayerSelfAttention(config))
379
+ if self.is_decoder:
380
+ self.layer.append(UMT5LayerCrossAttention(config))
381
+
382
+ self.layer.append(UMT5LayerFF(config))
383
+
384
+ def forward(
385
+ self,
386
+ hidden_states,
387
+ attention_mask=None,
388
+ encoder_hidden_states=None,
389
+ encoder_attention_mask=None,
390
+ layer_head_mask=None,
391
+ cross_attn_layer_head_mask=None,
392
+ past_key_value=None,
393
+ use_cache=False,
394
+ output_attentions=False,
395
+ ):
396
+ # Self Attention
397
+ # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
398
+ self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
399
+
400
+ hidden_states, self_attn_weights, present_key_value = self.layer[0](
401
+ hidden_states,
402
+ attention_mask=attention_mask,
403
+ layer_head_mask=layer_head_mask,
404
+ past_key_value=self_attn_past_key_value,
405
+ )
406
+
407
+ # clamp inf values to enable fp16 training
408
+ if hidden_states.dtype == torch.float16:
409
+ max_dtype = torch.finfo(hidden_states.dtype).max
410
+ clamp_value = torch.where(torch.isinf(hidden_states).any(), max_dtype - 1000, max_dtype)
411
+ hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
412
+
413
+ # Cross-Attention Block
414
+ cross_attn_present_key_value = None
415
+ cross_attn_weights = None
416
+ do_cross_attention = self.is_decoder and encoder_hidden_states is not None
417
+ if do_cross_attention:
418
+ # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple
419
+ cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
420
+ hidden_states, cross_attn_weights, cross_attn_present_key_value = self.layer[1](
421
+ hidden_states,
422
+ encoder_hidden_states=encoder_hidden_states,
423
+ attention_mask=encoder_attention_mask,
424
+ layer_head_mask=cross_attn_layer_head_mask,
425
+ past_key_value=cross_attn_past_key_value,
426
+ )
427
+ # clamp inf values to enable fp16 training
428
+ if hidden_states.dtype == torch.float16:
429
+ max_dtype = torch.finfo(hidden_states.dtype).max
430
+ clamp_value = torch.where(torch.isinf(hidden_states).any(), max_dtype - 1000, max_dtype)
431
+ hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
432
+
433
+ present_key_value += cross_attn_present_key_value
434
+
435
+ # Apply Feed Forward layer
436
+ hidden_states = self.layer[-1](hidden_states)
437
+
438
+ # clamp inf values to enable fp16 training
439
+ if hidden_states.dtype == torch.float16:
440
+ max_dtype = torch.finfo(hidden_states.dtype).max
441
+ clamp_value = torch.where(torch.isinf(hidden_states).any(), max_dtype - 1000, max_dtype)
442
+ hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
443
+
444
+ outputs = (
445
+ hidden_states,
446
+ present_key_value,
447
+ )
448
+
449
+ if output_attentions:
450
+ outputs += (self_attn_weights, cross_attn_weights)
451
+
452
+ return outputs
453
+
454
+
455
+ # Copied from transformers.models.t5.modeling_t5.T5ClassificationHead with T5->UMT5
456
+ class UMT5ClassificationHead(nn.Module):
457
+ """Head for sentence-level classification tasks."""
458
+
459
+ def __init__(self, config: UMT5Config):
460
+ super().__init__()
461
+ self.dense = nn.Linear(config.d_model, config.d_model)
462
+ self.dropout = nn.Dropout(p=config.classifier_dropout)
463
+ self.out_proj = nn.Linear(config.d_model, config.num_labels)
464
+
465
+ def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
466
+ hidden_states = self.dropout(hidden_states)
467
+ hidden_states = self.dense(hidden_states)
468
+ hidden_states = torch.tanh(hidden_states)
469
+ hidden_states = self.dropout(hidden_states)
470
+ hidden_states = self.out_proj(hidden_states)
471
+ return hidden_states
472
+
473
+
474
+ class UMT5PreTrainedModel(PreTrainedModel):
475
+ """
476
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
477
+ models.
478
+ """
479
+
480
+ config_class = UMT5Config
481
+ base_model_prefix = "transformer"
482
+ supports_gradient_checkpointing = True
483
+ _no_split_modules = ["UMT5Block"]
484
+ _keep_in_fp32_modules = ["wo"]
485
+
486
+ @property
487
+ def dummy_inputs(self):
488
+ input_ids = torch.tensor(DUMMY_INPUTS)
489
+ input_mask = torch.tensor(DUMMY_MASK)
490
+ dummy_inputs = {
491
+ "decoder_input_ids": input_ids,
492
+ "input_ids": input_ids,
493
+ "decoder_attention_mask": input_mask,
494
+ }
495
+ return dummy_inputs
496
+
497
+ def _init_weights(self, module):
498
+ """Initialize the weights"""
499
+ factor = self.config.initializer_factor # Used for testing weights initialization
500
+ if isinstance(module, UMT5LayerNorm):
501
+ module.weight.data.fill_(factor * 1.0)
502
+ elif isinstance(
503
+ module,
504
+ (
505
+ UMT5Model,
506
+ UMT5ForConditionalGeneration,
507
+ UMT5EncoderModel,
508
+ UMT5ForQuestionAnswering,
509
+ ),
510
+ ):
511
+ # Mesh TensorFlow embeddings initialization
512
+ # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L1624
513
+ module.shared.weight.data.normal_(mean=0.0, std=factor * 1.0)
514
+ if hasattr(module, "lm_head") and not self.config.tie_word_embeddings:
515
+ module.lm_head.weight.data.normal_(mean=0.0, std=factor * 1.0)
516
+ if hasattr(module, "qa_outputs"):
517
+ module.qa_outputs.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5))
518
+ module.qa_outputs.bias.data.zero_()
519
+ elif isinstance(module, UMT5ForTokenClassification):
520
+ if hasattr(module, "classifier"):
521
+ module.classifier.weight.data.normal_(mean=0.0, std=factor * 1.0)
522
+ module.classifier.bias.data.zero_()
523
+ elif isinstance(module, UMT5ClassificationHead):
524
+ module.dense.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5))
525
+ if hasattr(module.dense, "bias") and module.dense.bias is not None:
526
+ module.dense.bias.data.zero_()
527
+ module.out_proj.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5))
528
+ if hasattr(module.out_proj, "bias") and module.out_proj.bias is not None:
529
+ module.out_proj.bias.data.zero_()
530
+ elif isinstance(module, UMT5DenseActDense):
531
+ # Mesh TensorFlow FF initialization
532
+ # See https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/transformer_layers.py#L56
533
+ # and https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L89
534
+ module.wi.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5))
535
+ if hasattr(module.wi, "bias") and module.wi.bias is not None:
536
+ module.wi.bias.data.zero_()
537
+ module.wo.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_ff) ** -0.5))
538
+ if hasattr(module.wo, "bias") and module.wo.bias is not None:
539
+ module.wo.bias.data.zero_()
540
+ elif isinstance(module, UMT5DenseGatedActDense):
541
+ module.wi_0.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5))
542
+ if hasattr(module.wi_0, "bias") and module.wi_0.bias is not None:
543
+ module.wi_0.bias.data.zero_()
544
+ module.wi_1.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5))
545
+ if hasattr(module.wi_1, "bias") and module.wi_1.bias is not None:
546
+ module.wi_1.bias.data.zero_()
547
+ module.wo.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_ff) ** -0.5))
548
+ if hasattr(module.wo, "bias") and module.wo.bias is not None:
549
+ module.wo.bias.data.zero_()
550
+ elif isinstance(module, UMT5Attention):
551
+ # Mesh TensorFlow attention initialization to avoid scaling before softmax
552
+ # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/attention.py#L136
553
+ d_model = self.config.d_model
554
+ key_value_proj_dim = self.config.d_kv
555
+ n_heads = self.config.num_heads
556
+ module.q.weight.data.normal_(mean=0.0, std=factor * ((d_model * key_value_proj_dim) ** -0.5))
557
+ module.k.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5))
558
+ module.v.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5))
559
+ module.o.weight.data.normal_(mean=0.0, std=factor * ((n_heads * key_value_proj_dim) ** -0.5))
560
+ if module.has_relative_attention_bias:
561
+ module.relative_attention_bias.weight.data.normal_(mean=0.0, std=factor * ((d_model) ** -0.5))
562
+
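The fan-in scaling used in `_init_weights` above follows the Mesh TensorFlow convention: each projection's standard deviation is the inverse square root of its input (or heads-times-head-dim) size, scaled by `initializer_factor`. A minimal sketch of how those values work out, using illustrative hyper-parameters rather than any particular checkpoint:

```python
# Illustrative values only (not read from a real config).
factor, d_model, d_kv, n_heads, d_ff = 1.0, 512, 64, 6, 1024

std_q = factor * (d_model * d_kv) ** -0.5   # query projection
std_kv = factor * d_model ** -0.5           # key / value projections
std_o = factor * (n_heads * d_kv) ** -0.5   # attention output projection
std_wi = factor * d_model ** -0.5           # feed-forward input projection
std_wo = factor * d_ff ** -0.5              # feed-forward output projection
print(std_q, std_kv, std_o, std_wi, std_wo)
```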
563
+ def _shift_right(self, input_ids):
564
+ decoder_start_token_id = self.config.decoder_start_token_id
565
+ pad_token_id = self.config.pad_token_id
566
+
567
+ if decoder_start_token_id is None:
568
+ raise ValueError(
569
+ "self.model.config.decoder_start_token_id has to be defined. In UMT5 it is usually set to the pad_token_id. "
570
+ "See UMT5 docs for more information."
571
+ )
572
+
573
+ # shift inputs to the right
574
+ if is_torch_fx_proxy(input_ids):
575
+ # Item assignment is not supported natively for proxies.
576
+ shifted_input_ids = torch.full(input_ids.shape[:-1] + (1,), decoder_start_token_id)
577
+ shifted_input_ids = torch.cat([shifted_input_ids, input_ids[..., :-1]], dim=-1)
578
+ else:
579
+ shifted_input_ids = input_ids.new_zeros(input_ids.shape)
580
+ shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
581
+ shifted_input_ids[..., 0] = decoder_start_token_id
582
+
583
+ if pad_token_id is None:
584
+ raise ValueError("self.model.config.pad_token_id has to be defined.")
585
+ # replace possible -100 values in labels by `pad_token_id`
586
+ shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
587
+
588
+ return shifted_input_ids
589
+
590
+
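`_shift_right` builds decoder inputs from labels by prepending `decoder_start_token_id`, dropping the last token, and replacing any `-100` ignore markers with the pad token. A standalone sketch with made-up token ids:

```python
import torch

# Hypothetical ids: decoder start and pad are both 0, -100 marks ignored label positions.
labels = torch.tensor([[1204, 57, 331, 1, -100]])
decoder_start_token_id, pad_token_id = 0, 0

shifted = labels.new_zeros(labels.shape)
shifted[..., 1:] = labels[..., :-1].clone()
shifted[..., 0] = decoder_start_token_id
shifted.masked_fill_(shifted == -100, pad_token_id)
print(shifted)  # tensor([[   0, 1204,   57,  331,    1]])
```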
591
+ class UMT5Stack(UMT5PreTrainedModel):
592
+ def __init__(self, config, embed_tokens=None):
593
+ super().__init__(config)
594
+ self.embed_tokens = embed_tokens
595
+ self.is_decoder = config.is_decoder
596
+ self.block = nn.ModuleList([UMT5Block(config) for i in range(config.num_layers)])
597
+ self.final_layer_norm = UMT5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
598
+ self.dropout = nn.Dropout(config.dropout_rate)
599
+
600
+ # Initialize weights and apply final processing
601
+ self.gradient_checkpointing = False
602
+ self.post_init()
603
+
604
+ def get_input_embeddings(self):
605
+ return self.embed_tokens
606
+
607
+ def set_input_embeddings(self, new_embeddings):
608
+ self.embed_tokens = new_embeddings
609
+
610
+ def forward(
611
+ self,
612
+ input_ids=None,
613
+ attention_mask=None,
614
+ encoder_hidden_states=None,
615
+ encoder_attention_mask=None,
616
+ inputs_embeds=None,
617
+ head_mask=None,
618
+ cross_attn_head_mask=None,
619
+ past_key_values=None,
620
+ use_cache=None,
621
+ output_attentions=None,
622
+ output_hidden_states=None,
623
+ return_dict=None,
624
+ ):
625
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
626
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
627
+ output_hidden_states = (
628
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
629
+ )
630
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
631
+
632
+ if input_ids is not None and inputs_embeds is not None:
633
+ err_msg_prefix = "decoder_" if self.is_decoder else ""
634
+ raise ValueError(
635
+ f"You cannot specify both {err_msg_prefix}input_ids and {err_msg_prefix}inputs_embeds at the same time"
636
+ )
637
+ elif input_ids is not None:
638
+ input_shape = input_ids.size()
639
+ input_ids = input_ids.view(-1, input_shape[-1])
640
+ elif inputs_embeds is not None:
641
+ input_shape = inputs_embeds.size()[:-1]
642
+ else:
643
+ err_msg_prefix = "decoder_" if self.is_decoder else ""
644
+ raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
645
+
646
+ if inputs_embeds is None:
647
+ if self.embed_tokens is None:
648
+ raise ValueError("You have to initialize the model with valid token embeddings")
649
+ inputs_embeds = self.embed_tokens(input_ids)
650
+
651
+ batch_size, seq_length = input_shape
652
+
653
+ # required mask seq length can be calculated via length of past
654
+ mask_seq_length = past_key_values[0][0].shape[2] + seq_length if past_key_values is not None else seq_length
655
+
656
+ if use_cache is True:
657
+ if not self.is_decoder:
658
+ raise ValueError(f"`use_cache` can only be set to `True` if {self} is used as a decoder")
659
+
660
+ if attention_mask is None:
661
+ attention_mask = torch.ones(batch_size, mask_seq_length, device=inputs_embeds.device)
662
+ if self.is_decoder and encoder_attention_mask is None and encoder_hidden_states is not None:
663
+ encoder_seq_length = encoder_hidden_states.shape[1]
664
+ encoder_attention_mask = torch.ones(
665
+ batch_size, encoder_seq_length, device=inputs_embeds.device, dtype=torch.long
666
+ )
667
+
668
+ # initialize past_key_values with `None` if past does not exist
669
+ if past_key_values is None:
670
+ past_key_values = [None] * len(self.block)
671
+
672
+ # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
673
+ # ourselves in which case we just need to make it broadcastable to all heads.
674
+ extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape)
675
+
676
+ # If a 2D or 3D attention mask is provided for the cross-attention
677
+ # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
678
+ if self.is_decoder and encoder_hidden_states is not None:
679
+ encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
680
+ encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
681
+ if encoder_attention_mask is None:
682
+ encoder_attention_mask = torch.ones(encoder_hidden_shape, device=inputs_embeds.device)
683
+ encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
684
+ else:
685
+ encoder_extended_attention_mask = None
686
+
687
+ if self.gradient_checkpointing and self.training:
688
+ if use_cache:
689
+ logger.warning_once(
690
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
691
+ )
692
+ use_cache = False
693
+
694
+ # Prepare head mask if needed
695
+ head_mask = self.get_head_mask(head_mask, self.config.num_layers)
696
+ cross_attn_head_mask = self.get_head_mask(cross_attn_head_mask, self.config.num_layers)
697
+ present_key_value_states = () if use_cache else None
698
+ all_hidden_states = () if output_hidden_states else None
699
+ all_attentions = () if output_attentions else None
700
+ all_cross_attentions = () if output_attentions and self.is_decoder else None
701
+
702
+ hidden_states = self.dropout(inputs_embeds)
703
+
704
+ for i, (layer_module, past_key_value) in enumerate(zip(self.block, past_key_values)):
705
+ layer_head_mask = head_mask[i]
706
+ cross_attn_layer_head_mask = cross_attn_head_mask[i]
707
+
708
+ if output_hidden_states:
709
+ all_hidden_states = all_hidden_states + (hidden_states,)
710
+
711
+ if self.gradient_checkpointing and self.training:
712
+ layer_outputs = self._gradient_checkpointing_func(
713
+ layer_module.forward,
714
+ hidden_states,
715
+ extended_attention_mask,
716
+ encoder_hidden_states,
717
+ encoder_extended_attention_mask,
718
+ layer_head_mask,
719
+ cross_attn_layer_head_mask,
720
+ None, # past_key_value is always None with gradient checkpointing
721
+ use_cache,
722
+ output_attentions,
723
+ )
724
+ else:
725
+ layer_outputs = layer_module(
726
+ hidden_states,
727
+ attention_mask=extended_attention_mask,
728
+ encoder_hidden_states=encoder_hidden_states,
729
+ encoder_attention_mask=encoder_extended_attention_mask,
730
+ layer_head_mask=layer_head_mask,
731
+ cross_attn_layer_head_mask=cross_attn_layer_head_mask,
732
+ past_key_value=past_key_value,
733
+ use_cache=use_cache,
734
+ output_attentions=output_attentions,
735
+ )
736
+
737
+ hidden_states = layer_outputs[0]
738
+
739
+ if use_cache:
740
+ present_key_value_states += (layer_outputs[1],)
741
+
742
+ if output_attentions:
743
+ all_attentions += (layer_outputs[2],)
744
+ if self.is_decoder:
745
+ all_cross_attentions += (layer_outputs[3],)
746
+
747
+ hidden_states = self.final_layer_norm(hidden_states)
748
+ hidden_states = self.dropout(hidden_states)
749
+
750
+ # Add last layer
751
+ if output_hidden_states:
752
+ all_hidden_states = all_hidden_states + (hidden_states,)
753
+
754
+ if not return_dict:
755
+ return tuple(
756
+ v
757
+ for v in [
758
+ hidden_states,
759
+ present_key_value_states,
760
+ all_hidden_states,
761
+ all_attentions,
762
+ all_cross_attentions,
763
+ ]
764
+ if v is not None
765
+ )
766
+ return BaseModelOutputWithPastAndCrossAttentions(
767
+ last_hidden_state=hidden_states,
768
+ past_key_values=present_key_value_states,
769
+ hidden_states=all_hidden_states,
770
+ attentions=all_attentions,
771
+ cross_attentions=all_cross_attentions,
772
+ )
773
+
774
+
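`UMT5Stack.forward` relies on `get_extended_attention_mask` / `invert_attention_mask` from `PreTrainedModel` to turn 2D padding masks into additive attention biases. A rough sketch of that convention (not the library helpers themselves):

```python
import torch

# A [batch, seq] padding mask becomes an additive [batch, 1, 1, seq] bias:
# 0.0 where attention is allowed, a very large negative value where it is masked.
attention_mask = torch.tensor([[1, 1, 1, 0]])
extended = attention_mask[:, None, None, :].to(torch.float32)
extended = (1.0 - extended) * torch.finfo(torch.float32).min
print(extended.shape)  # torch.Size([1, 1, 1, 4])
```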
775
+ UMT5_START_DOCSTRING = r"""
776
+
777
+ The UMT5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text
778
+ Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
779
+ Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a
780
+ text-to-text denoising generative setting.
781
+
782
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
783
+ library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
784
+ etc.)
785
+
786
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
787
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
788
+ and behavior.
789
+
790
+ Parameters:
791
+ config ([`UMT5Config`]): Model configuration class with all the parameters of the model.
792
+ Initializing with a config file does not load the weights associated with the model, only the
793
+ configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
794
+ """
795
+
796
+ UMT5_INPUTS_DOCSTRING = r"""
797
+ Args:
798
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
799
+ Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so
800
+ you should be able to pad the inputs on both the right and the left.
801
+
802
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
803
+ [`PreTrainedTokenizer.__call__`] for details.
804
+
805
+ [What are input IDs?](../glossary#input-ids)
806
+
807
+ To know more on how to prepare `input_ids` for pretraining take a look at [UMT5 Training](./umt5#training).
808
+ attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
809
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
810
+
811
+ - 1 for tokens that are **not masked**,
812
+ - 0 for tokens that are **masked**.
813
+
814
+ [What are attention masks?](../glossary#attention-mask)
815
+ decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
816
+ Indices of decoder input sequence tokens in the vocabulary.
817
+
818
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
819
+ [`PreTrainedTokenizer.__call__`] for details.
820
+
821
+ [What are decoder input IDs?](../glossary#decoder-input-ids)
822
+
823
+ UMT5 uses the `pad_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values`
824
+ is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).
825
+
826
+ To know more on how to prepare `decoder_input_ids` for pretraining take a look at [UMT5
827
+ Training](./umt5#training).
828
+ decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
829
+ Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
830
+ be used by default.
831
+ head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
832
+ Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in `[0,
833
+ 1]`:
834
+
835
+ - 1 indicates the head is **not masked**,
836
+ - 0 indicates the head is **masked**.
837
+
838
+ decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
839
+ Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0,
840
+ 1]`:
841
+
842
+ - 1 indicates the head is **not masked**,
843
+ - 0 indicates the head is **masked**.
844
+
845
+ cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
846
+ Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
847
+ `[0, 1]`:
848
+
849
+ - 1 indicates the head is **not masked**,
850
+ - 0 indicates the head is **masked**.
851
+
852
+ encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*):
853
+ Tuple consists of (`last_hidden_state`, `optional`: *hidden_states*, `optional`: *attentions*)
854
+ `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden states at
855
+ the output of the last layer of the encoder. Used in the cross-attention of the decoder.
856
+ past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
857
+ Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
858
+
859
+ If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
860
+ don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
861
+ `decoder_input_ids` of shape `(batch_size, sequence_length)`.
862
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
863
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
864
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
865
+ model's internal embedding lookup matrix.
866
+ decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*):
867
+ Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded
868
+ representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be
869
+ input (see `past_key_values`). This is useful if you want more control over how to convert
870
+ `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
871
+
872
+ If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value
873
+ of `inputs_embeds`.
874
+
875
+ use_cache (`bool`, *optional*):
876
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
877
+ `past_key_values`).
878
+
879
+ output_attentions (`bool`, *optional*):
880
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
881
+ tensors for more detail.
882
+ output_hidden_states (`bool`, *optional*):
883
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
884
+ more detail.
885
+ return_dict (`bool`, *optional*):
886
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
887
+ """
888
+
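The `past_key_values` contract described above can be exercised manually; the following is a hedged sketch (greedy, two decoding steps) using the same `google/umt5-small` checkpoint as the other examples in this file:

```python
import torch
from transformers import AutoTokenizer, UMT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")

enc = tokenizer("UN Offizier sagt, dass weiter verhandelt werden muss in Syrien.", return_tensors="pt")
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

# First step: full decoder input, cache returned because use_cache=True.
out = model(**enc, decoder_input_ids=decoder_input_ids, use_cache=True)
next_id = out.logits[:, -1:].argmax(-1)

# Second step: only the newest token is fed; earlier states come from the cache.
# (Re-encoding the source here for brevity; generate() would reuse encoder_outputs.)
out = model(**enc, decoder_input_ids=next_id, past_key_values=out.past_key_values, use_cache=True)
```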
889
+ UMT5_ENCODER_INPUTS_DOCSTRING = r"""
890
+ Args:
891
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
892
+ Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so
893
+ you should be able to pad the inputs on both the right and the left.
894
+
895
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
896
+ [`PreTrainedTokenizer.__call__`] for details.
897
+
898
+ To know more on how to prepare `input_ids` for pretraining take a look at [UMT5 Training](./umt5#training).
899
+ attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
900
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
901
+
902
+ - 1 for tokens that are **not masked**,
903
+ - 0 for tokens that are **masked**.
904
+
905
+ [What are attention masks?](../glossary#attention-mask)
906
+ head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
907
+ Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
908
+
909
+ - 1 indicates the head is **not masked**,
910
+ - 0 indicates the head is **masked**.
911
+
912
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
913
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
914
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
915
+ model's internal embedding lookup matrix.
916
+ output_attentions (`bool`, *optional*):
917
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
918
+ tensors for more detail.
919
+ output_hidden_states (`bool`, *optional*):
920
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
921
+ more detail.
922
+ return_dict (`bool`, *optional*):
923
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
924
+ """
925
+
926
+
927
+ @add_start_docstrings(
928
+ "The bare UMT5 Model transformer outputting raw hidden-states without any specific head on top.",
929
+ UMT5_START_DOCSTRING,
930
+ )
931
+ class UMT5Model(UMT5PreTrainedModel):
932
+ r"""
933
+ Examples:
934
+
935
+ ```python
936
+ >>> from transformers import UMT5Model, AutoTokenizer
937
+
938
+ >>> model = UMT5Model.from_pretrained("google/umt5-small")
939
+ >>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
940
+ >>> noisy_text = "UN Offizier sagt, dass weiter <extra_id_0> werden muss in Syrien."
941
+ >>> label = "<extra_id_0> verhandelt"
942
+ >>> inputs = tokenizer(noisy_text, return_tensors="pt")
943
+ >>> labels = tokenizer(text_target=label, return_tensors="pt")
944
+
945
+ >>> outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=labels["input_ids"])
946
+ >>> hidden_states = outputs.last_hidden_state
947
+ ```"""
948
+
949
+ model_type = "umt5"
950
+ config_class = UMT5Config
951
+ _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight"]
952
+
953
+ def __init__(self, config):
954
+ super().__init__(config)
955
+ self.shared = nn.Embedding(config.vocab_size, config.d_model)
956
+
957
+ encoder_config = copy.deepcopy(config)
958
+ encoder_config.is_decoder = False
959
+ encoder_config.use_cache = False
960
+ encoder_config.is_encoder_decoder = False
961
+ self.encoder = UMT5Stack(encoder_config, self.shared)
962
+
963
+ decoder_config = copy.deepcopy(config)
964
+ decoder_config.is_decoder = True
965
+ decoder_config.is_encoder_decoder = False
966
+ decoder_config.num_layers = config.num_decoder_layers
967
+ self.decoder = UMT5Stack(decoder_config, self.shared)
968
+
969
+ # Initialize weights and apply final processing
970
+ self.post_init()
971
+
972
+ # Copied from transformers.models.t5.modeling_t5.T5Model.get_input_embeddings
973
+ def get_input_embeddings(self):
974
+ return self.shared
975
+
976
+ # Copied from transformers.models.t5.modeling_t5.T5Model.set_input_embeddings
977
+ def set_input_embeddings(self, new_embeddings):
978
+ self.shared = new_embeddings
979
+ self.encoder.set_input_embeddings(new_embeddings)
980
+ self.decoder.set_input_embeddings(new_embeddings)
981
+
982
+ # Copied from transformers.models.t5.modeling_t5.T5Model._tie_weights
983
+ def _tie_weights(self):
984
+ if self.config.tie_word_embeddings:
985
+ self._tie_or_clone_weights(self.encoder.embed_tokens, self.shared)
986
+ self._tie_or_clone_weights(self.decoder.embed_tokens, self.shared)
987
+
988
+ # Copied from transformers.models.t5.modeling_t5.T5Model.get_encoder
989
+ def get_encoder(self):
990
+ return self.encoder
991
+
992
+ # Copied from transformers.models.t5.modeling_t5.T5Model.get_decoder
993
+ def get_decoder(self):
994
+ return self.decoder
995
+
996
+ # Copied from transformers.models.t5.modeling_t5.T5Model._prune_heads
997
+ def _prune_heads(self, heads_to_prune):
998
+ """
999
+ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
1000
+ class PreTrainedModel
1001
+ """
1002
+ for layer, heads in heads_to_prune.items():
1003
+ self.encoder.block[layer].layer[0].SelfAttention.prune_heads(heads)
1004
+
1005
+ @add_start_docstrings_to_model_forward(UMT5_INPUTS_DOCSTRING)
1006
+ @replace_return_docstrings(output_type=Seq2SeqModelOutput, config_class=_CONFIG_FOR_DOC)
1007
+ def forward(
1008
+ self,
1009
+ input_ids: Optional[torch.LongTensor] = None,
1010
+ attention_mask: Optional[torch.FloatTensor] = None,
1011
+ decoder_input_ids: Optional[torch.LongTensor] = None,
1012
+ decoder_attention_mask: Optional[torch.BoolTensor] = None,
1013
+ head_mask: Optional[torch.FloatTensor] = None,
1014
+ decoder_head_mask: Optional[torch.FloatTensor] = None,
1015
+ cross_attn_head_mask: Optional[torch.Tensor] = None,
1016
+ encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
1017
+ past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
1018
+ inputs_embeds: Optional[torch.Tensor] = None,
1019
+ decoder_inputs_embeds: Optional[torch.Tensor] = None,
1020
+ use_cache: Optional[bool] = None,
1021
+ output_attentions: Optional[bool] = None,
1022
+ output_hidden_states: Optional[bool] = None,
1023
+ return_dict: Optional[bool] = None,
1024
+ ) -> Union[Tuple[torch.FloatTensor], Seq2SeqModelOutput]:
1025
+ r"""
1026
+ Returns:
1027
+
1028
+ Example:
1029
+
1030
+ ```python
1031
+ >>> from transformers import AutoTokenizer, UMT5Model
1032
+
1033
+ >>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
1034
+ >>> model = UMT5Model.from_pretrained("google/umt5-small")
1035
+
1036
+ >>> input_ids = tokenizer(
1037
+ ... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
1038
+ ... ).input_ids # Batch size 1
1039
+ >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
1040
+
1041
+ >>> # preprocess: Prepend decoder_input_ids with start token which is pad token for UMT5Model.
1042
+ >>> # This is not needed for torch's UMT5ForConditionalGeneration as it does this internally using labels arg.
1043
+ >>> decoder_input_ids = model._shift_right(decoder_input_ids)
1044
+
1045
+ >>> # forward pass
1046
+ >>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
1047
+ >>> last_hidden_states = outputs.last_hidden_state
1048
+ ```"""
1049
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1050
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1051
+
1052
+ # Encode if needed (training, first prediction pass)
1053
+ if encoder_outputs is None:
1054
+ encoder_outputs = self.encoder(
1055
+ input_ids=input_ids,
1056
+ attention_mask=attention_mask,
1057
+ inputs_embeds=inputs_embeds,
1058
+ head_mask=head_mask,
1059
+ output_attentions=output_attentions,
1060
+ output_hidden_states=output_hidden_states,
1061
+ return_dict=return_dict,
1062
+ )
1063
+ elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
1064
+ encoder_outputs = BaseModelOutput(
1065
+ last_hidden_state=encoder_outputs[0],
1066
+ hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
1067
+ attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
1068
+ )
1069
+
1070
+ hidden_states = encoder_outputs[0]
1071
+
1072
+ # Decode
1073
+ decoder_outputs = self.decoder(
1074
+ input_ids=decoder_input_ids,
1075
+ attention_mask=decoder_attention_mask,
1076
+ inputs_embeds=decoder_inputs_embeds,
1077
+ past_key_values=past_key_values,
1078
+ encoder_hidden_states=hidden_states,
1079
+ encoder_attention_mask=attention_mask,
1080
+ head_mask=decoder_head_mask,
1081
+ cross_attn_head_mask=cross_attn_head_mask,
1082
+ use_cache=use_cache,
1083
+ output_attentions=output_attentions,
1084
+ output_hidden_states=output_hidden_states,
1085
+ return_dict=return_dict,
1086
+ )
1087
+
1088
+ if not return_dict:
1089
+ return decoder_outputs + encoder_outputs
1090
+
1091
+ return Seq2SeqModelOutput(
1092
+ last_hidden_state=decoder_outputs.last_hidden_state,
1093
+ past_key_values=decoder_outputs.past_key_values,
1094
+ decoder_hidden_states=decoder_outputs.hidden_states,
1095
+ decoder_attentions=decoder_outputs.attentions,
1096
+ cross_attentions=decoder_outputs.cross_attentions,
1097
+ encoder_last_hidden_state=encoder_outputs.last_hidden_state,
1098
+ encoder_hidden_states=encoder_outputs.hidden_states,
1099
+ encoder_attentions=encoder_outputs.attentions,
1100
+ )
1101
+
1102
+
1103
+ @add_start_docstrings("""UMT5 Model with a `language modeling` head on top.""", UMT5_START_DOCSTRING)
1104
+ class UMT5ForConditionalGeneration(UMT5PreTrainedModel):
1105
+ r"""
1106
+ Examples:
1107
+
1108
+ ```python
1109
+ >>> from transformers import UMT5ForConditionalGeneration, AutoTokenizer
1110
+
1111
+ >>> model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")
1112
+ >>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
1113
+ >>> article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
1114
+ >>> summary = "Weiter Verhandlung in Syrien."
1115
+ >>> inputs = tokenizer(article, text_target=summary, return_tensors="pt")
1116
+
1117
+ >>> outputs = model(**inputs)
1118
+ >>> loss = outputs.loss
1119
+ ```"""
1120
+
1121
+ model_type = "umt5"
1122
+ _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight", "lm_head.weight"]
1123
+
1124
+ def __init__(self, config):
1125
+ super().__init__(config)
1126
+ self.model_dim = config.d_model
1127
+
1128
+ self.shared = nn.Embedding(config.vocab_size, config.d_model)
1129
+
1130
+ encoder_config = copy.deepcopy(config)
1131
+ encoder_config.is_decoder = False
1132
+ encoder_config.use_cache = False
1133
+ encoder_config.is_encoder_decoder = False
1134
+ self.encoder = UMT5Stack(encoder_config, self.shared)
1135
+
1136
+ decoder_config = copy.deepcopy(config)
1137
+ decoder_config.is_decoder = True
1138
+ decoder_config.is_encoder_decoder = False
1139
+ decoder_config.num_layers = config.num_decoder_layers
1140
+ self.decoder = UMT5Stack(decoder_config, self.shared)
1141
+
1142
+ self.lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False)
1143
+
1144
+ # Initialize weights and apply final processing
1145
+ self.post_init()
1146
+
1147
+ # Copied from transformers.models.t5.modeling_t5.T5ForConditionalGeneration.get_input_embeddings
1148
+ def get_input_embeddings(self):
1149
+ return self.shared
1150
+
1151
+ # Copied from transformers.models.t5.modeling_t5.T5ForConditionalGeneration.set_input_embeddings
1152
+ def set_input_embeddings(self, new_embeddings):
1153
+ self.shared = new_embeddings
1154
+ self.encoder.set_input_embeddings(new_embeddings)
1155
+ self.decoder.set_input_embeddings(new_embeddings)
1156
+
1157
+ # Copied from transformers.models.t5.modeling_t5.T5ForConditionalGeneration._tie_weights
1158
+ def _tie_weights(self):
1159
+ if self.config.tie_word_embeddings:
1160
+ self._tie_or_clone_weights(self.encoder.embed_tokens, self.shared)
1161
+ self._tie_or_clone_weights(self.decoder.embed_tokens, self.shared)
1162
+
1163
+ # Copied from transformers.models.t5.modeling_t5.T5ForConditionalGeneration.set_output_embeddings
1164
+ def set_output_embeddings(self, new_embeddings):
1165
+ self.lm_head = new_embeddings
1166
+
1167
+ # Copied from transformers.models.t5.modeling_t5.T5ForConditionalGeneration.get_output_embeddings
1168
+ def get_output_embeddings(self):
1169
+ return self.lm_head
1170
+
1171
+ # Copied from transformers.models.t5.modeling_t5.T5ForConditionalGeneration.get_encoder
1172
+ def get_encoder(self):
1173
+ return self.encoder
1174
+
1175
+ # Copied from transformers.models.t5.modeling_t5.T5ForConditionalGeneration.get_decoder
1176
+ def get_decoder(self):
1177
+ return self.decoder
1178
+
1179
+ @add_start_docstrings_to_model_forward(UMT5_INPUTS_DOCSTRING)
1180
+ @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC)
1181
+ def forward(
1182
+ self,
1183
+ input_ids: Optional[torch.LongTensor] = None,
1184
+ attention_mask: Optional[torch.FloatTensor] = None,
1185
+ decoder_input_ids: Optional[torch.LongTensor] = None,
1186
+ decoder_attention_mask: Optional[torch.BoolTensor] = None,
1187
+ head_mask: Optional[torch.FloatTensor] = None,
1188
+ decoder_head_mask: Optional[torch.FloatTensor] = None,
1189
+ cross_attn_head_mask: Optional[torch.Tensor] = None,
1190
+ encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1191
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1192
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1193
+ decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
1194
+ labels: Optional[torch.LongTensor] = None,
1195
+ use_cache: Optional[bool] = None,
1196
+ output_attentions: Optional[bool] = None,
1197
+ output_hidden_states: Optional[bool] = None,
1198
+ return_dict: Optional[bool] = None,
1199
+ ) -> Union[Tuple[torch.FloatTensor], Seq2SeqLMOutput]:
1200
+ r"""
1201
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1202
+ Labels for computing the sequence-to-sequence language modeling loss. Indices should be in `[-100, 0, ...,
1203
+ config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for
1204
+ labels in `[0, ..., config.vocab_size - 1]`
1205
+
1206
+ Returns:
1207
+
1208
+ Examples:
1209
+
1210
+ ```python
1211
+ >>> from transformers import AutoTokenizer, UMT5ForConditionalGeneration
1212
+
1213
+ >>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
1214
+ >>> model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")
1215
+
1216
+ >>> # training
1217
+ >>> input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
1218
+ >>> labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
1219
+ >>> outputs = model(input_ids=input_ids, labels=labels)
1220
+ >>> loss = outputs.loss
1221
+ >>> logits = outputs.logits
1222
+
1223
+ >>> # inference
1224
+ >>> input_ids = tokenizer("Studies have shown that <extra_id_0> good for you", return_tensors="pt").input_ids
1225
+ >>> outputs = model.generate(input_ids)
1226
+ >>> tokenizer.decode(outputs[0], skip_special_tokens=True)
1227
+ ```"""
1228
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1229
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1230
+
1231
+ # Encode if needed (training, first prediction pass)
1232
+ if encoder_outputs is None:
1233
+ # Convert encoder inputs in embeddings if needed
1234
+ encoder_outputs = self.encoder(
1235
+ input_ids=input_ids,
1236
+ attention_mask=attention_mask,
1237
+ inputs_embeds=inputs_embeds,
1238
+ head_mask=head_mask,
1239
+ output_attentions=output_attentions,
1240
+ output_hidden_states=output_hidden_states,
1241
+ return_dict=return_dict,
1242
+ )
1243
+ elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
1244
+ encoder_outputs = BaseModelOutput(
1245
+ last_hidden_state=encoder_outputs[0],
1246
+ hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
1247
+ attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
1248
+ )
1249
+
1250
+ hidden_states = encoder_outputs[0]
1251
+
1252
+ if labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None:
1253
+ # get decoder inputs from shifting lm labels to the right
1254
+ decoder_input_ids = self._shift_right(labels)
1255
+
1256
+ # Decode
1257
+ decoder_outputs = self.decoder(
1258
+ input_ids=decoder_input_ids,
1259
+ attention_mask=decoder_attention_mask,
1260
+ inputs_embeds=decoder_inputs_embeds,
1261
+ past_key_values=past_key_values,
1262
+ encoder_hidden_states=hidden_states,
1263
+ encoder_attention_mask=attention_mask,
1264
+ head_mask=decoder_head_mask,
1265
+ cross_attn_head_mask=cross_attn_head_mask,
1266
+ use_cache=use_cache,
1267
+ output_attentions=output_attentions,
1268
+ output_hidden_states=output_hidden_states,
1269
+ return_dict=return_dict,
1270
+ )
1271
+
1272
+ sequence_output = decoder_outputs[0]
1273
+
1274
+ if self.config.tie_word_embeddings:
1275
+ # Rescale output before projecting on vocab
1276
+ # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/transformer.py#L586
1277
+ sequence_output = sequence_output * (self.model_dim**-0.5)
1278
+
1279
+ lm_logits = self.lm_head(sequence_output)
1280
+
1281
+ loss = None
1282
+ if labels is not None:
1283
+ loss_fct = CrossEntropyLoss(ignore_index=-100)
1284
+ # move labels to correct device to enable PP
1285
+ labels = labels.to(lm_logits.device)
1286
+ loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
1287
+
1288
+ if not return_dict:
1289
+ output = (lm_logits,) + decoder_outputs[1:] + encoder_outputs
1290
+ return ((loss,) + output) if loss is not None else output
1291
+
1292
+ return Seq2SeqLMOutput(
1293
+ loss=loss,
1294
+ logits=lm_logits,
1295
+ past_key_values=decoder_outputs.past_key_values,
1296
+ decoder_hidden_states=decoder_outputs.hidden_states,
1297
+ decoder_attentions=decoder_outputs.attentions,
1298
+ cross_attentions=decoder_outputs.cross_attentions,
1299
+ encoder_last_hidden_state=encoder_outputs.last_hidden_state,
1300
+ encoder_hidden_states=encoder_outputs.hidden_states,
1301
+ encoder_attentions=encoder_outputs.attentions,
1302
+ )
1303
+
1304
+ # Copied from transformers.models.t5.modeling_t5.T5ForConditionalGeneration.prepare_inputs_for_generation
1305
+ def prepare_inputs_for_generation(
1306
+ self,
1307
+ input_ids,
1308
+ past_key_values=None,
1309
+ attention_mask=None,
1310
+ head_mask=None,
1311
+ decoder_head_mask=None,
1312
+ decoder_attention_mask=None,
1313
+ cross_attn_head_mask=None,
1314
+ use_cache=None,
1315
+ encoder_outputs=None,
1316
+ **kwargs,
1317
+ ):
1318
+ # cut decoder_input_ids if past_key_values is used
1319
+ if past_key_values is not None:
1320
+ past_length = past_key_values[0][0].shape[2]
1321
+
1322
+ # Some generation methods already pass only the last input ID
1323
+ if input_ids.shape[1] > past_length:
1324
+ remove_prefix_length = past_length
1325
+ else:
1326
+ # Default to old behavior: keep only final ID
1327
+ remove_prefix_length = input_ids.shape[1] - 1
1328
+
1329
+ input_ids = input_ids[:, remove_prefix_length:]
1330
+
1331
+ return {
1332
+ "decoder_input_ids": input_ids,
1333
+ "past_key_values": past_key_values,
1334
+ "encoder_outputs": encoder_outputs,
1335
+ "attention_mask": attention_mask,
1336
+ "head_mask": head_mask,
1337
+ "decoder_head_mask": decoder_head_mask,
1338
+ "decoder_attention_mask": decoder_attention_mask,
1339
+ "cross_attn_head_mask": cross_attn_head_mask,
1340
+ "use_cache": use_cache,
1341
+ }
1342
+
1343
+ # Copied from transformers.models.t5.modeling_t5.T5ForConditionalGeneration.prepare_decoder_input_ids_from_labels
1344
+ def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
1345
+ return self._shift_right(labels)
1346
+
1347
+ @staticmethod
1348
+ def _reorder_cache(past_key_values, beam_idx):
1349
+ reordered_past = ()
1350
+ for layer_past in past_key_values:
1351
+ reordered_past += (
1352
+ tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
1353
+ )
1354
+ return reordered_past
1355
+
1356
+
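`_reorder_cache` re-indexes every cached key/value tensor along the batch-times-beams dimension when beam search reorders hypotheses. A toy illustration with random tensors and an assumed beam selection:

```python
import torch

# One layer's cache: four tensors of shape [batch * num_beams, heads, seq, head_dim].
layer_past = tuple(torch.randn(4, 6, 3, 64) for _ in range(4))
beam_idx = torch.tensor([2, 2, 0, 1])  # beams chosen at this step (assumed)

reordered = tuple(t.index_select(0, beam_idx) for t in layer_past)
print(reordered[0].shape)  # torch.Size([4, 6, 3, 64])
```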
1357
+ @add_start_docstrings(
1358
+ "The bare UMT5 Model transformer outputting encoder's raw hidden-states without any specific head on top.",
1359
+ UMT5_START_DOCSTRING,
1360
+ )
1361
+ class UMT5EncoderModel(UMT5PreTrainedModel):
1362
+ r"""
1363
+ Examples:
1364
+
1365
+ ```python
1366
+ >>> from transformers import UMT5EncoderModel, AutoTokenizer
1367
+
1368
+ >>> model = UMT5EncoderModel.from_pretrained("google/umt5-small")
1369
+ >>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
1370
+ >>> article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
1371
+ >>> input_ids = tokenizer(article, return_tensors="pt").input_ids
1372
+ >>> outputs = model(input_ids)
1373
+ >>> hidden_state = outputs.last_hidden_state
1374
+ ```"""
1375
+
1376
+ model_type = "umt5"
1377
+ # config_class = UMT5Config
1378
+ _tied_weights_keys = ["encoder.embed_tokens.weight"]
1379
+
1380
+ def __init__(self, config):
1381
+ super().__init__(config)
1382
+ self.shared = nn.Embedding(config.vocab_size, config.d_model)
1383
+
1384
+ encoder_config = copy.deepcopy(config)
1385
+ encoder_config.use_cache = False
1386
+ encoder_config.is_encoder_decoder = False
1387
+ self.encoder = UMT5Stack(encoder_config, self.shared)
1388
+
1389
+ # Initialize weights and apply final processing
1390
+ self.post_init()
1391
+
1392
+ # Copied from transformers.models.t5.modeling_t5.T5EncoderModel.get_input_embeddings
1393
+ def get_input_embeddings(self):
1394
+ return self.shared
1395
+
1396
+ # Copied from transformers.models.t5.modeling_t5.T5EncoderModel.set_input_embeddings
1397
+ def set_input_embeddings(self, new_embeddings):
1398
+ self.shared = new_embeddings
1399
+ self.encoder.set_input_embeddings(new_embeddings)
1400
+
1401
+ # Copied from transformers.models.t5.modeling_t5.T5EncoderModel._tie_weights
1402
+ def _tie_weights(self):
1403
+ if self.config.tie_word_embeddings:
1404
+ self._tie_or_clone_weights(self.encoder.embed_tokens, self.shared)
1405
+
1406
+ # Copied from transformers.models.t5.modeling_t5.T5EncoderModel.get_encoder
1407
+ def get_encoder(self):
1408
+ return self.encoder
1409
+
1410
+ # Copied from transformers.models.t5.modeling_t5.T5EncoderModel._prune_heads
1411
+ def _prune_heads(self, heads_to_prune):
1412
+ """
1413
+ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
1414
+ class PreTrainedModel
1415
+ """
1416
+ for layer, heads in heads_to_prune.items():
1417
+ self.encoder.block[layer].layer[0].SelfAttention.prune_heads(heads)
1418
+
1419
+ @add_start_docstrings_to_model_forward(UMT5_ENCODER_INPUTS_DOCSTRING)
1420
+ @replace_return_docstrings(output_type=BaseModelOutput, config_class=_CONFIG_FOR_DOC)
1421
+ # Copied from transformers.models.t5.modeling_t5.T5EncoderModel.forward with T5->UMT5, google-t5/t5-small->google/umt5-small
1422
+ def forward(
1423
+ self,
1424
+ input_ids: Optional[torch.LongTensor] = None,
1425
+ attention_mask: Optional[torch.FloatTensor] = None,
1426
+ head_mask: Optional[torch.FloatTensor] = None,
1427
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1428
+ output_attentions: Optional[bool] = None,
1429
+ output_hidden_states: Optional[bool] = None,
1430
+ return_dict: Optional[bool] = None,
1431
+ ) -> Union[Tuple[torch.FloatTensor], BaseModelOutput]:
1432
+ r"""
1433
+ Returns:
1434
+
1435
+ Example:
1436
+
1437
+ ```python
1438
+ >>> from transformers import AutoTokenizer, UMT5EncoderModel
1439
+
1440
+ >>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
1441
+ >>> model = UMT5EncoderModel.from_pretrained("google/umt5-small")
1442
+ >>> input_ids = tokenizer(
1443
+ ... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
1444
+ ... ).input_ids # Batch size 1
1445
+ >>> outputs = model(input_ids=input_ids)
1446
+ >>> last_hidden_states = outputs.last_hidden_state
1447
+ ```"""
1448
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1449
+
1450
+ encoder_outputs = self.encoder(
1451
+ input_ids=input_ids,
1452
+ attention_mask=attention_mask,
1453
+ inputs_embeds=inputs_embeds,
1454
+ head_mask=head_mask,
1455
+ output_attentions=output_attentions,
1456
+ output_hidden_states=output_hidden_states,
1457
+ return_dict=return_dict,
1458
+ )
1459
+
1460
+ return encoder_outputs
1461
+
1462
+
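One common way to use the encoder-only model is as a multilingual feature extractor; the mask-aware mean pooling below is a suggestion, not something the class itself prescribes:

```python
import torch
from transformers import AutoTokenizer, UMT5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = UMT5EncoderModel.from_pretrained("google/umt5-small")

batch = tokenizer(["A short example sentence."], return_tensors="pt")
hidden = model(**batch).last_hidden_state                      # [1, seq, d_model]
mask = batch["attention_mask"].unsqueeze(-1).to(hidden.dtype)  # [1, seq, 1]
embedding = (hidden * mask).sum(1) / mask.sum(1)               # [1, d_model]
```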
1463
+ @add_start_docstrings(
1464
+ """
1465
+ UMT5 model with a sequence classification head on top (a linear layer on top of the pooled output) e.g. for GLUE
1466
+ tasks.
1467
+ """,
1468
+ UMT5_START_DOCSTRING,
1469
+ )
1470
+ class UMT5ForSequenceClassification(UMT5PreTrainedModel):
1471
+ _keys_to_ignore_on_load_unexpected = ["decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight"]
1472
+ _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight"]
1473
+
1474
+ # Copied from transformers.models.t5.modeling_t5.T5ForSequenceClassification.__init__ with T5->UMT5
1475
+ def __init__(self, config: UMT5Config):
1476
+ super().__init__(config)
1477
+ self.transformer = UMT5Model(config)
1478
+ self.classification_head = UMT5ClassificationHead(config)
1479
+
1480
+ # Initialize weights and apply final processing
1481
+ self.post_init()
1482
+
1483
+ self.model_parallel = False
1484
+
1485
+ @add_start_docstrings_to_model_forward(UMT5_INPUTS_DOCSTRING)
1486
+ @replace_return_docstrings(output_type=Seq2SeqSequenceClassifierOutput, config_class=_CONFIG_FOR_DOC)
1487
+ def forward(
1488
+ self,
1489
+ input_ids: torch.LongTensor = None,
1490
+ attention_mask: Optional[torch.Tensor] = None,
1491
+ decoder_input_ids: Optional[torch.LongTensor] = None,
1492
+ decoder_attention_mask: Optional[torch.LongTensor] = None,
1493
+ head_mask: Optional[torch.Tensor] = None,
1494
+ decoder_head_mask: Optional[torch.Tensor] = None,
1495
+ cross_attn_head_mask: Optional[torch.Tensor] = None,
1496
+ encoder_outputs: Optional[List[torch.FloatTensor]] = None,
1497
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1498
+ decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
1499
+ labels: Optional[torch.LongTensor] = None,
1500
+ use_cache: Optional[bool] = None,
1501
+ output_attentions: Optional[bool] = None,
1502
+ output_hidden_states: Optional[bool] = None,
1503
+ return_dict: Optional[bool] = None,
1504
+ ) -> Union[Tuple, Seq2SeqSequenceClassifierOutput]:
1505
+ r"""
1506
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1507
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
1508
+ config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
1509
+ Returns:
1510
+ """
1511
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1512
+ if labels is not None:
1513
+ use_cache = False
1514
+
1515
+ if input_ids is None and inputs_embeds is not None:
1516
+ raise NotImplementedError(
1517
+ f"Passing input embeddings is currently not supported for {self.__class__.__name__}"
1518
+ )
1519
+
1520
+ # Copied from models.bart.modeling_bart.BartModel.forward different to other models, T5 automatically creates
1521
+ # decoder_input_ids from input_ids if no decoder_input_ids are provided
1522
+ if decoder_input_ids is None and decoder_inputs_embeds is None:
1523
+ if input_ids is None:
1524
+ raise ValueError(
1525
+ "If no `decoder_input_ids` or `decoder_inputs_embeds` are "
1526
+ "passed, `input_ids` cannot be `None`. Please pass either "
1527
+ "`input_ids` or `decoder_input_ids` or `decoder_inputs_embeds`."
1528
+ )
1529
+ decoder_input_ids = self._shift_right(input_ids)
1530
+
1531
+ outputs = self.transformer(
1532
+ input_ids,
1533
+ attention_mask=attention_mask,
1534
+ decoder_input_ids=decoder_input_ids,
1535
+ decoder_attention_mask=decoder_attention_mask,
1536
+ head_mask=head_mask,
1537
+ decoder_head_mask=decoder_head_mask,
1538
+ cross_attn_head_mask=cross_attn_head_mask,
1539
+ encoder_outputs=encoder_outputs,
1540
+ inputs_embeds=inputs_embeds,
1541
+ decoder_inputs_embeds=decoder_inputs_embeds,
1542
+ use_cache=use_cache,
1543
+ output_attentions=output_attentions,
1544
+ output_hidden_states=output_hidden_states,
1545
+ return_dict=return_dict,
1546
+ )
1547
+ sequence_output = outputs[0]
1548
+
1549
+ eos_mask = input_ids.eq(self.config.eos_token_id).to(sequence_output.device)
1550
+
1551
+ if len(torch.unique_consecutive(eos_mask.sum(1))) > 1:
1552
+ raise ValueError("All examples must have the same number of <eos> tokens.")
1553
+ batch_size, _, hidden_size = sequence_output.shape
1554
+ sentence_representation = sequence_output[eos_mask, :].view(batch_size, -1, hidden_size)[:, -1, :]
1555
+ logits = self.classification_head(sentence_representation)
1556
+
1557
+ loss = None
1558
+ if labels is not None:
1559
+ labels = labels.to(logits.device)
1560
+ if self.config.problem_type is None:
1561
+ if self.config.num_labels == 1:
1562
+ self.config.problem_type = "regression"
1563
+ elif self.config.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
1564
+ self.config.problem_type = "single_label_classification"
1565
+ else:
1566
+ self.config.problem_type = "multi_label_classification"
1567
+
1568
+ if self.config.problem_type == "regression":
1569
+ loss_fct = MSELoss()
1570
+ if self.config.num_labels == 1:
1571
+ loss = loss_fct(logits.squeeze(), labels.squeeze())
1572
+ else:
1573
+ loss = loss_fct(logits, labels)
1574
+ elif self.config.problem_type == "single_label_classification":
1575
+ loss_fct = CrossEntropyLoss()
1576
+ loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1))
1577
+ elif self.config.problem_type == "multi_label_classification":
1578
+ loss_fct = BCEWithLogitsLoss()
1579
+ loss = loss_fct(logits, labels)
1580
+ if not return_dict:
1581
+ output = (logits,) + outputs[1:]
1582
+ return ((loss,) + output) if loss is not None else output
1583
+
1584
+ return Seq2SeqSequenceClassifierOutput(
1585
+ loss=loss,
1586
+ logits=logits,
1587
+ past_key_values=outputs.past_key_values,
1588
+ decoder_hidden_states=outputs.decoder_hidden_states,
1589
+ decoder_attentions=outputs.decoder_attentions,
1590
+ cross_attentions=outputs.cross_attentions,
1591
+ encoder_last_hidden_state=outputs.encoder_last_hidden_state,
1592
+ encoder_hidden_states=outputs.encoder_hidden_states,
1593
+ encoder_attentions=outputs.encoder_attentions,
1594
+ )
1595
+
1596
+
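The sequence classification head pools the decoder output at the final `<eos>` position of `input_ids`, as in the `eos_mask` indexing above. A toy walk-through with made-up ids and random hidden states:

```python
import torch

eos_token_id = 1
input_ids = torch.tensor([[10, 11, 1, 0], [12, 1, 0, 0]])  # assumed ids, 0 = pad
sequence_output = torch.randn(2, 4, 8)                     # [batch, seq, hidden]

eos_mask = input_ids.eq(eos_token_id)
batch_size, _, hidden_size = sequence_output.shape
sentence_repr = sequence_output[eos_mask, :].view(batch_size, -1, hidden_size)[:, -1, :]
print(sentence_repr.shape)  # torch.Size([2, 8])
```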
1597
+ @add_start_docstrings(
1598
+ """
1599
+ UMT5 Encoder Model with a token classification head on top (a linear layer on top of the hidden-states output)
1600
+ e.g. for Named-Entity-Recognition (NER) tasks.
1601
+ """,
1602
+ UMT5_START_DOCSTRING,
1603
+ )
1604
+ class UMT5ForTokenClassification(UMT5PreTrainedModel):
1605
+ _keys_to_ignore_on_load_unexpected = ["decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight"]
1606
+ _tied_weights_keys = ["transformer.encoder.embed_tokens.weight"]
1607
+
1608
+ # Copied from transformers.models.t5.modeling_t5.T5ForTokenClassification.__init__ with T5->UMT5
1609
+ def __init__(self, config: UMT5Config):
1610
+ super().__init__(config)
1611
+ self.num_labels = config.num_labels
1612
+
1613
+ self.transformer = UMT5EncoderModel(config)
1614
+ self.dropout = nn.Dropout(config.classifier_dropout)
1615
+ self.classifier = nn.Linear(config.hidden_size, config.num_labels)
1616
+
1617
+ # Initialize weights and apply final processing
1618
+ self.post_init()
1619
+
1620
+ @add_start_docstrings_to_model_forward(UMT5_INPUTS_DOCSTRING)
1621
+ @replace_return_docstrings(output_type=TokenClassifierOutput, config_class=_CONFIG_FOR_DOC)
1622
+ # Copied from transformers.models.t5.modeling_t5.T5ForTokenClassification.forward with T5->UMT5
1623
+ def forward(
1624
+ self,
1625
+ input_ids: Optional[torch.Tensor] = None,
1626
+ attention_mask: Optional[torch.Tensor] = None,
1627
+ head_mask: Optional[torch.Tensor] = None,
1628
+ inputs_embeds: Optional[torch.Tensor] = None,
1629
+ labels: Optional[torch.Tensor] = None,
1630
+ output_attentions: Optional[bool] = None,
1631
+ output_hidden_states: Optional[bool] = None,
1632
+ return_dict: Optional[bool] = None,
1633
+ ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]:
1634
+ r"""
1635
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1636
+ Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
1637
+ Returns:
1638
+ """
1639
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1640
+
1641
+ outputs = self.transformer(
1642
+ input_ids,
1643
+ attention_mask=attention_mask,
1644
+ head_mask=head_mask,
1645
+ inputs_embeds=inputs_embeds,
1646
+ output_attentions=output_attentions,
1647
+ output_hidden_states=output_hidden_states,
1648
+ return_dict=return_dict,
1649
+ )
1650
+
1651
+ hidden_states = outputs[0]
1652
+ hidden_states = self.dropout(hidden_states)
1653
+ logits = self.classifier(hidden_states)
1654
+
1655
+ loss = None
1656
+ if labels is not None:
1657
+ loss_fct = CrossEntropyLoss()
1658
+ loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
1659
+
1660
+ if not return_dict:
1661
+ output = (logits,) + outputs[2:-1]
1662
+ return ((loss,) + output) if loss is not None else output
1663
+
1664
+ return TokenClassifierOutput(
1665
+ loss=loss,
1666
+ logits=logits,
1667
+ hidden_states=outputs.hidden_states,
1668
+ attentions=outputs.attentions,
1669
+ )
1670
+
1671
+
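Turning the per-token logits of the classifier above into label strings is an `argmax` away; the label mapping here is hypothetical and would normally come from the fine-tuned model's `config.id2label`:

```python
import torch

logits = torch.randn(1, 5, 3)                 # [batch, seq, num_labels]
predictions = logits.argmax(-1)               # [batch, seq]
id2label = {0: "O", 1: "B-ENT", 2: "I-ENT"}   # hypothetical mapping
labels = [[id2label[int(i)] for i in row] for row in predictions]
```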
1672
+ @add_start_docstrings(
1673
+ """
1674
+ UMT5 Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers
1675
+ on top of the hidden-states output to compute `span start logits` and `span end logits`).
1676
+ """,
1677
+ UMT5_START_DOCSTRING,
1678
+ )
1679
+ class UMT5ForQuestionAnswering(UMT5PreTrainedModel):
1680
+ _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight"]
1681
+
1682
+ def __init__(self, config):
1683
+ super().__init__(config)
1684
+ self.model_dim = config.d_model
1685
+
1686
+ self.shared = nn.Embedding(config.vocab_size, config.d_model)
1687
+
1688
+ encoder_config = copy.deepcopy(config)
1689
+ encoder_config.is_decoder = False
1690
+ encoder_config.use_cache = False
1691
+ encoder_config.is_encoder_decoder = False
1692
+ self.encoder = UMT5Stack(encoder_config, self.shared)
1693
+
1694
+ decoder_config = copy.deepcopy(config)
1695
+ decoder_config.is_decoder = True
1696
+ decoder_config.is_encoder_decoder = False
1697
+ decoder_config.num_layers = config.num_decoder_layers
1698
+ self.decoder = UMT5Stack(decoder_config, self.shared)
1699
+
1700
+ self.num_labels = config.num_labels
1701
+ self.qa_outputs = nn.Linear(config.d_model, config.num_labels)
1702
+
1703
+ # Initialize weights and apply final processing
1704
+ self.post_init()
1705
+
1706
+ # Copied from transformers.models.t5.modeling_t5.T5ForQuestionAnswering.get_input_embeddings
1707
+ def get_input_embeddings(self):
1708
+ return self.shared
1709
+
1710
+ # Copied from transformers.models.t5.modeling_t5.T5ForQuestionAnswering.set_input_embeddings
1711
+ def set_input_embeddings(self, new_embeddings):
1712
+ self.shared = new_embeddings
1713
+ self.encoder.set_input_embeddings(new_embeddings)
1714
+ self.decoder.set_input_embeddings(new_embeddings)
1715
+
1716
+ # Copied from transformers.models.t5.modeling_t5.T5ForQuestionAnswering._tie_weights
1717
+ def _tie_weights(self):
1718
+ if self.config.tie_word_embeddings:
1719
+ self._tie_or_clone_weights(self.encoder.embed_tokens, self.shared)
1720
+ self._tie_or_clone_weights(self.decoder.embed_tokens, self.shared)
1721
+
1722
+ # Copied from transformers.models.t5.modeling_t5.T5ForQuestionAnswering.get_encoder
1723
+ def get_encoder(self):
1724
+ return self.encoder
1725
+
1726
+ # Copied from transformers.models.t5.modeling_t5.T5ForQuestionAnswering.get_decoder
1727
+ def get_decoder(self):
1728
+ return self.decoder
1729
+
1730
+ @add_start_docstrings_to_model_forward(UMT5_INPUTS_DOCSTRING)
1731
+ @replace_return_docstrings(output_type=Seq2SeqQuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC)
1732
+ def forward(
1733
+ self,
1734
+ input_ids: Optional[torch.LongTensor] = None,
1735
+ attention_mask: Optional[torch.FloatTensor] = None,
1736
+ decoder_input_ids: Optional[torch.LongTensor] = None,
1737
+ decoder_attention_mask: Optional[torch.BoolTensor] = None,
1738
+ head_mask: Optional[torch.FloatTensor] = None,
1739
+ decoder_head_mask: Optional[torch.FloatTensor] = None,
1740
+ cross_attn_head_mask: Optional[torch.Tensor] = None,
1741
+ encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1742
+ start_positions: Optional[torch.LongTensor] = None,
1743
+ end_positions: Optional[torch.LongTensor] = None,
1744
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1745
+ decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
1746
+ use_cache: Optional[bool] = None,
1747
+ output_attentions: Optional[bool] = None,
1748
+ output_hidden_states: Optional[bool] = None,
1749
+ return_dict: Optional[bool] = None,
1750
+ ) -> Union[Tuple[torch.FloatTensor], Seq2SeqQuestionAnsweringModelOutput]:
1751
+ r"""
1752
+ start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1753
+ Labels for position (index) of the start of the labelled span for computing the token classification loss.
1754
+ Positions are clamped to the length of the sequence (*sequence_length*). Positions outside of the sequence
1755
+ are not taken into account for computing the loss.
1756
+ end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1757
+ Labels for position (index) of the end of the labelled span for computing the token classification loss.
1758
+ Positions are clamped to the length of the sequence (*sequence_length*). Positions outside of the sequence
1759
+ are not taken into account for computing the loss.
1760
+ Returns:
1761
+ """
1762
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1763
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1764
+ if start_positions is not None and end_positions is not None:
1765
+ use_cache = False
1766
+
1767
+ # Copied from models.bart.modeling_bart.BartModel.forward
1768
+ # different to other models, T5 automatically creates decoder_input_ids from
1769
+ # input_ids if no decoder_input_ids are provided
1770
+ if decoder_input_ids is None and decoder_inputs_embeds is None:
1771
+ if input_ids is None:
1772
+ raise ValueError(
1773
+ "If no `decoder_input_ids` or `decoder_inputs_embeds` are "
1774
+ "passed, `input_ids` cannot be `None`. Please pass either "
1775
+ "`input_ids` or `decoder_input_ids` or `decoder_inputs_embeds`."
1776
+ )
1777
+ decoder_input_ids = self._shift_right(input_ids)
1778
+
1779
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1780
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1781
+
1782
+ # Encode if needed (training, first prediction pass)
1783
+ if encoder_outputs is None:
1784
+ encoder_outputs = self.encoder(
1785
+ input_ids=input_ids,
1786
+ attention_mask=attention_mask,
1787
+ inputs_embeds=inputs_embeds,
1788
+ head_mask=head_mask,
1789
+ output_attentions=output_attentions,
1790
+ output_hidden_states=output_hidden_states,
1791
+ return_dict=return_dict,
1792
+ )
1793
+ elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
1794
+ encoder_outputs = BaseModelOutput(
1795
+ last_hidden_state=encoder_outputs[0],
1796
+ hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
1797
+ attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
1798
+ )
1799
+
1800
+ hidden_states = encoder_outputs[0]
1801
+
1802
+ # Decode
1803
+ decoder_outputs = self.decoder(
1804
+ input_ids=decoder_input_ids,
1805
+ attention_mask=decoder_attention_mask,
1806
+ inputs_embeds=decoder_inputs_embeds,
1807
+ past_key_values=None,
1808
+ encoder_hidden_states=hidden_states,
1809
+ encoder_attention_mask=attention_mask,
1810
+ head_mask=decoder_head_mask,
1811
+ cross_attn_head_mask=cross_attn_head_mask,
1812
+ use_cache=use_cache,
1813
+ output_attentions=output_attentions,
1814
+ output_hidden_states=output_hidden_states,
1815
+ return_dict=return_dict,
1816
+ )
1817
+
1818
+ sequence_output = decoder_outputs[0]
1819
+
1820
+ logits = self.qa_outputs(sequence_output)
1821
+ start_logits, end_logits = logits.split(1, dim=-1)
1822
+ start_logits = start_logits.squeeze(-1).contiguous()
1823
+ end_logits = end_logits.squeeze(-1).contiguous()
1824
+
1825
+ total_loss = None
1826
+ if start_positions is not None and end_positions is not None:
1827
+ # If we are on multi-GPU, split add a dimension
1828
+ if len(start_positions.size()) > 1:
1829
+ start_positions = start_positions.squeeze(-1).to(start_logits.device)
1830
+ if len(end_positions.size()) > 1:
1831
+ end_positions = end_positions.squeeze(-1).to(end_logits.device)
1832
+ # sometimes the start/end positions are outside our model inputs, we ignore these terms
1833
+ ignored_index = start_logits.size(1)
1834
+ start_positions = start_positions.clamp(0, ignored_index)
1835
+ end_positions = end_positions.clamp(0, ignored_index)
1836
+
1837
+ loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
1838
+ start_loss = loss_fct(start_logits, start_positions)
1839
+ end_loss = loss_fct(end_logits, end_positions)
1840
+ total_loss = (start_loss + end_loss) / 2
1841
+
1842
+ if not return_dict:
1843
+ output = (start_logits, end_logits) + decoder_outputs[1:] + encoder_outputs
1844
+ return ((total_loss,) + output) if total_loss is not None else output
1845
+
1846
+ return Seq2SeqQuestionAnsweringModelOutput(
1847
+ loss=total_loss,
1848
+ start_logits=start_logits,
1849
+ end_logits=end_logits,
1850
+ past_key_values=decoder_outputs.past_key_values,
1851
+ decoder_hidden_states=decoder_outputs.hidden_states,
1852
+ decoder_attentions=decoder_outputs.attentions,
1853
+ cross_attentions=decoder_outputs.cross_attentions,
1854
+ encoder_last_hidden_state=encoder_outputs.last_hidden_state,
1855
+ encoder_hidden_states=encoder_outputs.hidden_states,
1856
+ encoder_attentions=encoder_outputs.attentions,
1857
+ )
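
The head added above follows the usual extractive question-answering pattern: the decoder's sequence output is projected to per-token start/end logits, and when `start_positions`/`end_positions` are supplied the loss is the mean of the two cross-entropies. Below is a minimal sketch of exercising it, assuming the public `UMT5ForQuestionAnswering`/`AutoTokenizer` APIs and `google/umt5-small` purely as an illustrative checkpoint name:

import torch
from transformers import AutoTokenizer, UMT5ForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")  # assumed checkpoint
model = UMT5ForQuestionAnswering.from_pretrained("google/umt5-small")

inputs = tokenizer("question: Who wrote it? context: Ada wrote it.", return_tensors="pt")
# Span labels index into the decoder sequence; out-of-range values are clamped to ignored_index above.
outputs = model(**inputs, start_positions=torch.tensor([1]), end_positions=torch.tensor([3]))
print(outputs.loss, outputs.start_logits.shape, outputs.end_logits.shape)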
env-llmeval/lib/python3.10/site-packages/transformers/onnx/__init__.py ADDED
@@ -0,0 +1,49 @@
1
+ # Copyright 2020 The HuggingFace Team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ from typing import TYPE_CHECKING
+
+ from ..utils import _LazyModule
+
+
+ _import_structure = {
+     "config": [
+         "EXTERNAL_DATA_FORMAT_SIZE_LIMIT",
+         "OnnxConfig",
+         "OnnxConfigWithPast",
+         "OnnxSeq2SeqConfigWithPast",
+         "PatchingSpec",
+     ],
+     "convert": ["export", "validate_model_outputs"],
+     "features": ["FeaturesManager"],
+     "utils": ["ParameterFormat", "compute_serialized_parameters_size"],
+ }
+
+
+ if TYPE_CHECKING:
+     from .config import (
+         EXTERNAL_DATA_FORMAT_SIZE_LIMIT,
+         OnnxConfig,
+         OnnxConfigWithPast,
+         OnnxSeq2SeqConfigWithPast,
+         PatchingSpec,
+     )
+     from .convert import export, validate_model_outputs
+     from .features import FeaturesManager
+     from .utils import ParameterFormat, compute_serialized_parameters_size
+
+ else:
+     import sys
+
+     sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
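
This `__init__.py` uses the library's lazy-module pattern: `_import_structure` maps each submodule to the names it exports, the `TYPE_CHECKING` branch gives static type checkers real imports, and at runtime the package object is replaced by a `_LazyModule` so a submodule is only imported when one of its names is first accessed. A small sketch of what that means in practice (names as listed above):

import transformers.onnx as onnx_pkg

# Nothing from .config or .convert has been imported yet; attribute access below
# resolves each name through _import_structure and imports its submodule lazily.
print(onnx_pkg.OnnxConfig)   # triggers `from .config import OnnxConfig`
print(onnx_pkg.export)       # triggers `from .convert import export`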
env-llmeval/lib/python3.10/site-packages/transformers/onnx/__main__.py ADDED
@@ -0,0 +1,242 @@
1
+ # Copyright 2021 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ import subprocess
15
+ import sys
16
+ import warnings
17
+ from argparse import ArgumentParser
18
+ from pathlib import Path
19
+
20
+ from packaging import version
21
+
22
+ from .. import AutoFeatureExtractor, AutoImageProcessor, AutoProcessor, AutoTokenizer
23
+ from ..utils import logging
24
+ from ..utils.import_utils import is_optimum_available
25
+ from .convert import export, validate_model_outputs
26
+ from .features import FeaturesManager
27
+ from .utils import get_preprocessor
28
+
29
+
30
+ MIN_OPTIMUM_VERSION = "1.5.0"
31
+
32
+ ENCODER_DECODER_MODELS = ["vision-encoder-decoder"]
33
+
34
+
35
+ def export_with_optimum(args):
36
+ if is_optimum_available():
37
+ from optimum.version import __version__ as optimum_version
38
+
39
+ parsed_optimum_version = version.parse(optimum_version)
40
+ if parsed_optimum_version < version.parse(MIN_OPTIMUM_VERSION):
41
+ raise RuntimeError(
42
+ f"transformers.onnx requires optimum >= {MIN_OPTIMUM_VERSION} but {optimum_version} is installed. You "
43
+ "can upgrade optimum by running: pip install -U optimum[exporters]"
44
+ )
45
+ else:
46
+ raise RuntimeError(
47
+ "transformers.onnx requires optimum to run, you can install the library by running: pip install "
48
+ "optimum[exporters]"
49
+ )
50
+ cmd_line = [
51
+ sys.executable,
52
+ "-m",
53
+ "optimum.exporters.onnx",
54
+ f"--model {args.model}",
55
+ f"--task {args.feature}",
56
+ f"--framework {args.framework}" if args.framework is not None else "",
57
+ f"{args.output}",
58
+ ]
59
+ proc = subprocess.Popen(cmd_line, stdout=subprocess.PIPE)
60
+ proc.wait()
61
+
62
+ logger.info(
63
+ "The export was done by optimum.exporters.onnx. We recommend using to use this package directly in future, as "
64
+ "transformers.onnx is deprecated, and will be removed in v5. You can find more information here: "
65
+ "https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model."
66
+ )
67
+
68
+
69
+ def export_with_transformers(args):
70
+ args.output = args.output if args.output.is_file() else args.output.joinpath("model.onnx")
71
+ if not args.output.parent.exists():
72
+ args.output.parent.mkdir(parents=True)
73
+
74
+ # Allocate the model
75
+ model = FeaturesManager.get_model_from_feature(
76
+ args.feature, args.model, framework=args.framework, cache_dir=args.cache_dir
77
+ )
78
+
79
+ model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=args.feature)
80
+ onnx_config = model_onnx_config(model.config)
81
+
82
+ if model_kind in ENCODER_DECODER_MODELS:
83
+ encoder_model = model.get_encoder()
84
+ decoder_model = model.get_decoder()
85
+
86
+ encoder_onnx_config = onnx_config.get_encoder_config(encoder_model.config)
87
+ decoder_onnx_config = onnx_config.get_decoder_config(
88
+ encoder_model.config, decoder_model.config, feature=args.feature
89
+ )
90
+
91
+ if args.opset is None:
92
+ args.opset = max(encoder_onnx_config.default_onnx_opset, decoder_onnx_config.default_onnx_opset)
93
+
94
+ if args.opset < min(encoder_onnx_config.default_onnx_opset, decoder_onnx_config.default_onnx_opset):
95
+ raise ValueError(
96
+ f"Opset {args.opset} is not sufficient to export {model_kind}. At least "
97
+ f" {min(encoder_onnx_config.default_onnx_opset, decoder_onnx_config.default_onnx_opset)} is required."
98
+ )
99
+
100
+ preprocessor = AutoFeatureExtractor.from_pretrained(args.model)
101
+
102
+ onnx_inputs, onnx_outputs = export(
103
+ preprocessor,
104
+ encoder_model,
105
+ encoder_onnx_config,
106
+ args.opset,
107
+ args.output.parent.joinpath("encoder_model.onnx"),
108
+ )
109
+
110
+ validate_model_outputs(
111
+ encoder_onnx_config,
112
+ preprocessor,
113
+ encoder_model,
114
+ args.output.parent.joinpath("encoder_model.onnx"),
115
+ onnx_outputs,
116
+ args.atol if args.atol else encoder_onnx_config.atol_for_validation,
117
+ )
118
+
119
+ preprocessor = AutoTokenizer.from_pretrained(args.model)
120
+
121
+ onnx_inputs, onnx_outputs = export(
122
+ preprocessor,
123
+ decoder_model,
124
+ decoder_onnx_config,
125
+ args.opset,
126
+ args.output.parent.joinpath("decoder_model.onnx"),
127
+ )
128
+
129
+ validate_model_outputs(
130
+ decoder_onnx_config,
131
+ preprocessor,
132
+ decoder_model,
133
+ args.output.parent.joinpath("decoder_model.onnx"),
134
+ onnx_outputs,
135
+ args.atol if args.atol else decoder_onnx_config.atol_for_validation,
136
+ )
137
+ logger.info(
138
+ f"All good, model saved at: {args.output.parent.joinpath('encoder_model.onnx').as_posix()},"
139
+ f" {args.output.parent.joinpath('decoder_model.onnx').as_posix()}"
140
+ )
141
+
142
+ else:
143
+ # Instantiate the appropriate preprocessor
144
+ if args.preprocessor == "auto":
145
+ preprocessor = get_preprocessor(args.model)
146
+ elif args.preprocessor == "tokenizer":
147
+ preprocessor = AutoTokenizer.from_pretrained(args.model)
148
+ elif args.preprocessor == "image_processor":
149
+ preprocessor = AutoImageProcessor.from_pretrained(args.model)
150
+ elif args.preprocessor == "feature_extractor":
151
+ preprocessor = AutoFeatureExtractor.from_pretrained(args.model)
152
+ elif args.preprocessor == "processor":
153
+ preprocessor = AutoProcessor.from_pretrained(args.model)
154
+ else:
155
+ raise ValueError(f"Unknown preprocessor type '{args.preprocessor}'")
156
+
157
+ # Ensure the requested opset is sufficient
158
+ if args.opset is None:
159
+ args.opset = onnx_config.default_onnx_opset
160
+
161
+ if args.opset < onnx_config.default_onnx_opset:
162
+ raise ValueError(
163
+ f"Opset {args.opset} is not sufficient to export {model_kind}. "
164
+ f"At least {onnx_config.default_onnx_opset} is required."
165
+ )
166
+
167
+ onnx_inputs, onnx_outputs = export(
168
+ preprocessor,
169
+ model,
170
+ onnx_config,
171
+ args.opset,
172
+ args.output,
173
+ )
174
+
175
+ if args.atol is None:
176
+ args.atol = onnx_config.atol_for_validation
177
+
178
+ validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)
179
+ logger.info(f"All good, model saved at: {args.output.as_posix()}")
180
+ warnings.warn(
181
+ "The export was done by transformers.onnx which is deprecated and will be removed in v5. We recommend"
182
+ " using optimum.exporters.onnx in future. You can find more information here:"
183
+ " https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model.",
184
+ FutureWarning,
185
+ )
186
+
187
+
188
+ def main():
189
+ parser = ArgumentParser("Hugging Face Transformers ONNX exporter")
190
+ parser.add_argument(
191
+ "-m", "--model", type=str, required=True, help="Model ID on huggingface.co or path on disk to load model from."
192
+ )
193
+ parser.add_argument(
194
+ "--feature",
195
+ default="default",
196
+ help="The type of features to export the model with.",
197
+ )
198
+ parser.add_argument("--opset", type=int, default=None, help="ONNX opset version to export the model with.")
199
+ parser.add_argument(
200
+ "--atol", type=float, default=None, help="Absolute difference tolerance when validating the model."
201
+ )
202
+ parser.add_argument(
203
+ "--framework",
204
+ type=str,
205
+ choices=["pt", "tf"],
206
+ default=None,
207
+ help=(
208
+ "The framework to use for the ONNX export."
209
+ " If not provided, will attempt to use the local checkpoint's original framework"
210
+ " or what is available in the environment."
211
+ ),
212
+ )
213
+ parser.add_argument("output", type=Path, help="Path indicating where to store generated ONNX model.")
214
+ parser.add_argument("--cache_dir", type=str, default=None, help="Path indicating where to store cache.")
215
+ parser.add_argument(
216
+ "--preprocessor",
217
+ type=str,
218
+ choices=["auto", "tokenizer", "feature_extractor", "image_processor", "processor"],
219
+ default="auto",
220
+ help="Which type of preprocessor to use. 'auto' tries to automatically detect it.",
221
+ )
222
+ parser.add_argument(
223
+ "--export_with_transformers",
224
+ action="store_true",
225
+ help=(
226
+ "Whether to use transformers.onnx instead of optimum.exporters.onnx to perform the ONNX export. It can be "
227
+ "useful when exporting a model supported in transformers but not in optimum, otherwise it is not "
228
+ "recommended."
229
+ ),
230
+ )
231
+
232
+ args = parser.parse_args()
233
+ if args.export_with_transformers or not is_optimum_available():
234
+ export_with_transformers(args)
235
+ else:
236
+ export_with_optimum(args)
237
+
238
+
239
+ if __name__ == "__main__":
240
+ logger = logging.get_logger("transformers.onnx") # pylint: disable=invalid-name
241
+ logger.setLevel(logging.INFO)
242
+ main()
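
Taken together, `main()` above parses `-m/--model`, `--feature`, `--opset`, `--atol`, `--framework`, `--preprocessor` and a positional `output` path, then dispatches to `optimum.exporters.onnx` when `optimum` is installed (unless `--export_with_transformers` is passed). A minimal sketch of driving it as a subprocess, with `distilbert-base-uncased` used purely as an illustrative model id:

import subprocess
import sys

# Equivalent to: python -m transformers.onnx --model=distilbert-base-uncased onnx/
subprocess.run(
    [sys.executable, "-m", "transformers.onnx", "--model=distilbert-base-uncased", "--feature=default", "onnx/"],
    check=True,
)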
env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (872 Bytes). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/__main__.cpython-310.pyc ADDED
Binary file (5.88 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/config.cpython-310.pyc ADDED
Binary file (24.3 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/convert.cpython-310.pyc ADDED
Binary file (13 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/features.cpython-310.pyc ADDED
Binary file (16 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/onnx/__pycache__/utils.cpython-310.pyc ADDED
Binary file (2.97 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/onnx/config.py ADDED
@@ -0,0 +1,741 @@
1
+ # Copyright 2021 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ import copy
15
+ import dataclasses
16
+ import warnings
17
+ from abc import ABC, abstractmethod
18
+ from collections import OrderedDict
19
+ from typing import TYPE_CHECKING, Any, Callable, Dict, Iterable, List, Mapping, Optional, Tuple, Union
20
+
21
+ import numpy as np
22
+ from packaging import version
23
+
24
+ from ..utils import TensorType, is_torch_available, is_vision_available, logging
25
+ from .utils import ParameterFormat, compute_effective_axis_dimension, compute_serialized_parameters_size
26
+
27
+
28
+ if TYPE_CHECKING:
29
+ from ..configuration_utils import PretrainedConfig
30
+ from ..feature_extraction_utils import FeatureExtractionMixin
31
+ from ..image_processing_utils import ImageProcessingMixin
32
+ from ..tokenization_utils_base import PreTrainedTokenizerBase
33
+
34
+
35
+ if is_vision_available():
36
+ from PIL import Image
37
+
38
+ logger = logging.get_logger(__name__)
39
+
40
+
41
+ DEFAULT_ONNX_OPSET = 11
42
+
43
+ # 2 Gb
44
+ EXTERNAL_DATA_FORMAT_SIZE_LIMIT = 2 * 1024 * 1024 * 1024
45
+
46
+
47
+ @dataclasses.dataclass
48
+ class PatchingSpec:
49
+ """
50
+ Data class that holds patching specifications.
51
+
52
+ Args:
53
+ o: Module / object where the op to patch is located
54
+ name: Name of the op to monkey patch
55
+ custom_op: Custom op that patches the original op
56
+ orig_op: Original op that is being patched
57
+ op_wrapper: Wrapper (optional) that wraps both the original and custom ops.
58
+ It is useful for ops that are class or static methods for instance.
59
+ """
60
+
61
+ o: Any
62
+ name: str
63
+ custom_op: Callable
64
+ orig_op: Optional[Callable] = None
65
+ op_wrapper: Optional[Callable] = None
66
+
67
+
68
+ class OnnxConfig(ABC):
69
+ """
70
+ Base class for ONNX exportable model describing metadata on how to export the model through the ONNX format.
71
+ """
72
+
73
+ default_fixed_batch = 2
74
+ default_fixed_sequence = 8
75
+ default_fixed_num_choices = 4
76
+ torch_onnx_minimum_version = version.parse("1.8")
77
+ _tasks_to_common_outputs = {
78
+ "causal-lm": OrderedDict({"logits": {0: "batch", 1: "sequence"}}),
79
+ "default": OrderedDict({"last_hidden_state": {0: "batch", 1: "sequence"}}),
80
+ "image-classification": OrderedDict({"logits": {0: "batch", 1: "sequence"}}),
81
+ "image-segmentation": OrderedDict(
82
+ {
83
+ "logits": {0: "batch", 1: "sequence"},
84
+ "pred_boxes": {0: "batch", 1: "sequence"},
85
+ "pred_masks": {0: "batch", 1: "sequence"},
86
+ }
87
+ ),
88
+ "masked-im": OrderedDict({"logits": {0: "batch", 1: "sequence"}}),
89
+ "masked-lm": OrderedDict({"logits": {0: "batch", 1: "sequence"}}),
90
+ "multiple-choice": OrderedDict({"logits": {0: "batch"}}),
91
+ "object-detection": OrderedDict(
92
+ {
93
+ "logits": {0: "batch", 1: "sequence"},
94
+ "pred_boxes": {0: "batch", 1: "sequence"},
95
+ }
96
+ ),
97
+ "question-answering": OrderedDict(
98
+ {
99
+ "start_logits": {0: "batch", 1: "sequence"},
100
+ "end_logits": {0: "batch", 1: "sequence"},
101
+ }
102
+ ),
103
+ "semantic-segmentation": OrderedDict({"logits": {0: "batch", 1: "num_labels", 2: "height", 3: "width"}}),
104
+ "seq2seq-lm": OrderedDict({"logits": {0: "batch", 1: "decoder_sequence"}}),
105
+ "sequence-classification": OrderedDict({"logits": {0: "batch"}}),
106
+ "token-classification": OrderedDict({"logits": {0: "batch", 1: "sequence"}}),
107
+ "vision2seq-lm": OrderedDict({"logits": {0: "batch", 1: "sequence"}}),
108
+ "speech2seq-lm": OrderedDict({"logits": {0: "batch", 1: "sequence"}}),
109
+ }
110
+
111
+ def __init__(self, config: "PretrainedConfig", task: str = "default", patching_specs: List[PatchingSpec] = None):
112
+ self._config = config
113
+
114
+ if task not in self._tasks_to_common_outputs:
115
+ raise ValueError(
116
+ f"{task} is not a supported task, supported tasks: {self._tasks_to_common_outputs.keys()}"
117
+ )
118
+ self.task = task
119
+
120
+ self._patching_specs = []
121
+ for spec in patching_specs if patching_specs is not None else []:
122
+ final_spec = spec
123
+ if spec.orig_op is None:
124
+ final_spec = dataclasses.replace(spec, orig_op=getattr(spec.o, spec.name))
125
+ self._patching_specs.append(final_spec)
126
+
127
+ @classmethod
128
+ def from_model_config(cls, config: "PretrainedConfig", task: str = "default") -> "OnnxConfig":
129
+ """
130
+ Instantiate a OnnxConfig for a specific model
131
+
132
+ Args:
133
+ config: The model's configuration to use when exporting to ONNX
134
+
135
+ Returns:
136
+ OnnxConfig for this model
137
+ """
138
+ return cls(config, task=task)
139
+
140
+ @property
141
+ @abstractmethod
142
+ def inputs(self) -> Mapping[str, Mapping[int, str]]:
143
+ """
144
+ Mapping containing the axis definition of the input tensors to provide to the model
145
+
146
+ Returns:
147
+ For each input: its name, mapped to the symbolic name and position of each dynamic axis within the tensor
148
+ """
149
+ raise NotImplementedError()
150
+
151
+ @property
152
+ def outputs(self) -> Mapping[str, Mapping[int, str]]:
153
+ """
154
+ Mapping containing the axis definition of the output tensors to provide to the model
155
+
156
+ Returns:
157
+ For each output: its name, mapped to the symbolic name and position of each dynamic axis within the tensor
158
+ """
159
+ common_outputs = self._tasks_to_common_outputs[self.task]
160
+ return copy.deepcopy(common_outputs)
161
+
162
+ @property
163
+ def values_override(self) -> Optional[Mapping[str, Any]]:
164
+ """
165
+ Dictionary of keys to override in the model's config before exporting
166
+
167
+ Returns:
168
+ Dictionary with the keys (and their corresponding values) to override
169
+ """
170
+ if hasattr(self._config, "use_cache"):
171
+ return {"use_cache": False}
172
+
173
+ return None
174
+
175
+ @property
176
+ def default_batch_size(self) -> int:
177
+ """
178
+ The default batch size to use if no other indication
179
+
180
+ Returns:
181
+ Integer > 0
182
+ """
183
+ # Using 2 avoid ONNX making assumption about single sample batch
184
+ return OnnxConfig.default_fixed_batch
185
+
186
+ @property
187
+ def default_sequence_length(self) -> int:
188
+ """
189
+ The default sequence length to use if no other indication
190
+
191
+ Returns:
192
+ Integer > 0
193
+ """
194
+ return OnnxConfig.default_fixed_sequence
195
+
196
+ @property
197
+ def default_num_choices(self) -> int:
198
+ """
199
+ The default number of choices to use if no other indication
200
+
201
+ Returns:
202
+ Integer > 0
203
+ """
204
+ return OnnxConfig.default_fixed_num_choices
205
+
206
+ @property
207
+ def default_onnx_opset(self) -> int:
208
+ """
209
+ Which onnx opset to use when exporting the model
210
+
211
+ Returns:
212
+ Integer ONNX Opset version
213
+ """
214
+ return DEFAULT_ONNX_OPSET
215
+
216
+ @property
217
+ def atol_for_validation(self) -> float:
218
+ """
219
+ What absolute tolerance value to use during model conversion validation.
220
+
221
+ Returns:
222
+ Float absolute tolerance value.
223
+ """
224
+ return 1e-5
225
+
226
+ @property
227
+ def is_torch_support_available(self) -> bool:
228
+ """
229
+ The minimum PyTorch version required to export the model.
230
+
231
+ Returns:
232
+ `bool`: Whether the installed version of PyTorch is compatible with the model.
233
+ """
234
+ if is_torch_available():
235
+ from transformers.utils import get_torch_version
236
+
237
+ return version.parse(get_torch_version()) >= self.torch_onnx_minimum_version
238
+ else:
239
+ return False
240
+
241
+ @staticmethod
242
+ def use_external_data_format(num_parameters: int) -> bool:
243
+ """
244
+ Flag indicating if the model requires using external data format
245
+
246
+ Args:
247
+ num_parameters: Number of parameters in the model
248
+
249
+ Returns:
250
+ True if model.num_parameters() * size_of(float32) >= 2Gb, False otherwise
251
+ """
252
+
253
+ return (
254
+ compute_serialized_parameters_size(num_parameters, ParameterFormat.Float)
255
+ >= EXTERNAL_DATA_FORMAT_SIZE_LIMIT
256
+ )
257
+
258
+ def _generate_dummy_images(
259
+ self, batch_size: int = 2, num_channels: int = 3, image_height: int = 40, image_width: int = 40
260
+ ):
261
+ images = []
262
+ for _ in range(batch_size):
263
+ data = np.random.rand(image_height, image_width, num_channels) * 255
264
+ images.append(Image.fromarray(data.astype("uint8")).convert("RGB"))
265
+ return images
266
+
267
+ def _generate_dummy_audio(
268
+ self, batch_size: int = 2, sampling_rate: int = 22050, time_duration: float = 5.0, frequency: int = 220
269
+ ):
270
+ audio_data = []
271
+ for _ in range(batch_size):
272
+ # time variable
273
+ t = np.linspace(0, time_duration, int(time_duration * sampling_rate), endpoint=False)
274
+
275
+ # generate pure sine wave at `frequency` Hz
276
+ audio_data.append(0.5 * np.sin(2 * np.pi * frequency * t))
277
+
278
+ return audio_data
279
+
280
+ def generate_dummy_inputs(
281
+ self,
282
+ preprocessor: Union["PreTrainedTokenizerBase", "FeatureExtractionMixin", "ImageProcessingMixin"],
283
+ batch_size: int = -1,
284
+ seq_length: int = -1,
285
+ num_choices: int = -1,
286
+ is_pair: bool = False,
287
+ framework: Optional[TensorType] = None,
288
+ num_channels: int = 3,
289
+ image_width: int = 40,
290
+ image_height: int = 40,
291
+ sampling_rate: int = 22050,
292
+ time_duration: float = 5.0,
293
+ frequency: int = 220,
294
+ tokenizer: "PreTrainedTokenizerBase" = None,
295
+ ) -> Mapping[str, Any]:
296
+ """
297
+ Generate inputs to provide to the ONNX exporter for the specific framework
298
+
299
+ Args:
300
+ preprocessor: ([`PreTrainedTokenizerBase`], [`FeatureExtractionMixin`], or [`ImageProcessingMixin`]):
301
+ The preprocessor associated with this model configuration.
302
+ batch_size (`int`, *optional*, defaults to -1):
303
+ The batch size to export the model for (-1 means dynamic axis).
304
+ num_choices (`int`, *optional*, defaults to -1):
305
+ The number of candidate answers provided for multiple choice task (-1 means dynamic axis).
306
+ seq_length (`int`, *optional*, defaults to -1):
307
+ The sequence length to export the model for (-1 means dynamic axis).
308
+ is_pair (`bool`, *optional*, defaults to `False`):
309
+ Indicate if the input is a pair (sentence 1, sentence 2)
310
+ framework (`TensorType`, *optional*, defaults to `None`):
311
+ The framework (PyTorch or TensorFlow) that the tokenizer will generate tensors for.
312
+ num_channels (`int`, *optional*, defaults to 3):
313
+ The number of channels of the generated images.
314
+ image_width (`int`, *optional*, defaults to 40):
315
+ The width of the generated images.
316
+ image_height (`int`, *optional*, defaults to 40):
317
+ The height of the generated images.
318
+ sampling_rate (`int`, *optional* defaults to 22050)
319
+ The sampling rate for audio data generation.
320
+ time_duration (`float`, *optional* defaults to 5.0)
321
+ Total seconds of sampling for audio data generation.
322
+ frequency (`int`, *optional* defaults to 220)
323
+ The desired natural frequency of generated audio.
324
+
325
+ Returns:
326
+ Mapping[str, Tensor] holding the kwargs to provide to the model's forward function
327
+ """
328
+ from ..feature_extraction_utils import FeatureExtractionMixin
329
+ from ..image_processing_utils import ImageProcessingMixin
330
+ from ..tokenization_utils_base import PreTrainedTokenizerBase
331
+
332
+ if isinstance(preprocessor, PreTrainedTokenizerBase) and tokenizer is not None:
333
+ raise ValueError("You cannot provide both a tokenizer and a preprocessor to generate dummy inputs.")
334
+ if tokenizer is not None:
335
+ warnings.warn(
336
+ "The `tokenizer` argument is deprecated and will be removed in version 5 of Transformers. Use"
337
+ " `preprocessor` instead.",
338
+ FutureWarning,
339
+ )
340
+ logger.warning("Overwriting the `preprocessor` argument with `tokenizer` to generate dummmy inputs.")
341
+ preprocessor = tokenizer
342
+ if isinstance(preprocessor, PreTrainedTokenizerBase):
343
+ # If dynamic axis (-1) we forward with a fixed dimension of 2 samples to avoid optimizations made by ONNX
344
+ batch_size = compute_effective_axis_dimension(
345
+ batch_size, fixed_dimension=OnnxConfig.default_fixed_batch, num_token_to_add=0
346
+ )
347
+ # If dynamic axis (-1) we forward with a fixed dimension of 8 tokens to avoid optimizations made by ONNX
348
+ token_to_add = preprocessor.num_special_tokens_to_add(is_pair)
349
+ seq_length = compute_effective_axis_dimension(
350
+ seq_length, fixed_dimension=OnnxConfig.default_fixed_sequence, num_token_to_add=token_to_add
351
+ )
352
+ # Generate dummy inputs according to compute batch and sequence
353
+ input_token = (
354
+ preprocessor.unk_token
355
+ if (preprocessor.unk_token is not None and len(preprocessor.unk_token) > 0)
356
+ else "0"
357
+ )
358
+ dummy_input = [" ".join([input_token]) * seq_length] * batch_size
359
+ if self.task == "multiple-choice":
360
+ # If dynamic axis (-1) we forward with a fixed dimension of 4 candidate answers to avoid optimizations
361
+ # made by ONNX
362
+ num_choices = compute_effective_axis_dimension(
363
+ num_choices, fixed_dimension=OnnxConfig.default_fixed_num_choices, num_token_to_add=0
364
+ )
365
+ dummy_input = dummy_input * num_choices
366
+ # The shape of the tokenized inputs values is [batch_size * num_choices, seq_length]
367
+ tokenized_input = preprocessor(dummy_input, text_pair=dummy_input)
368
+ # Unflatten the tokenized inputs values expanding it to the shape [batch_size, num_choices, seq_length]
369
+ for k, v in tokenized_input.items():
370
+ tokenized_input[k] = [v[i : i + num_choices] for i in range(0, len(v), num_choices)]
371
+ return dict(tokenized_input.convert_to_tensors(tensor_type=framework))
372
+ return dict(preprocessor(dummy_input, return_tensors=framework))
373
+ elif isinstance(preprocessor, ImageProcessingMixin):
374
+ if preprocessor.model_input_names[0] != "pixel_values":
375
+ raise ValueError(
376
+ f"The `preprocessor` is an image processor ({preprocessor.__class__.__name__}) and expects"
377
+ f' `model_input_names[0]` to be "pixel_values", but got {preprocessor.model_input_names[0]}'
378
+ )
379
+ # If dynamic axis (-1) we forward with a fixed dimension of 2 samples to avoid optimizations made by ONNX
380
+ batch_size = compute_effective_axis_dimension(batch_size, fixed_dimension=OnnxConfig.default_fixed_batch)
381
+ dummy_input = self._generate_dummy_images(batch_size, num_channels, image_height, image_width)
382
+ return dict(preprocessor(images=dummy_input, return_tensors=framework))
383
+ elif isinstance(preprocessor, FeatureExtractionMixin) and preprocessor.model_input_names[0] == "pixel_values":
384
+ # If dynamic axis (-1) we forward with a fixed dimension of 2 samples to avoid optimizations made by ONNX
385
+ batch_size = compute_effective_axis_dimension(batch_size, fixed_dimension=OnnxConfig.default_fixed_batch)
386
+ dummy_input = self._generate_dummy_images(batch_size, num_channels, image_height, image_width)
387
+ return dict(preprocessor(images=dummy_input, return_tensors=framework))
388
+ elif (
389
+ isinstance(preprocessor, FeatureExtractionMixin) and preprocessor.model_input_names[0] == "input_features"
390
+ ):
391
+ # If dynamic axis (-1) we forward with a fixed dimension of 2 samples to avoid optimizations made by ONNX
392
+ batch_size = compute_effective_axis_dimension(batch_size, fixed_dimension=OnnxConfig.default_fixed_batch)
393
+ dummy_input = self._generate_dummy_audio(batch_size, sampling_rate, time_duration, frequency)
394
+ return dict(preprocessor(dummy_input, return_tensors=framework))
395
+ else:
396
+ raise ValueError(
397
+ "Unable to generate dummy inputs for the model. Please provide a tokenizer or a preprocessor."
398
+ )
399
+
400
+ def generate_dummy_inputs_onnxruntime(self, reference_model_inputs: Mapping[str, Any]) -> Mapping[str, Any]:
401
+ """
402
+ Generate inputs for ONNX Runtime using the reference model inputs. Override this to run inference with seq2seq
403
+ models which have the encoder and decoder exported as separate ONNX files.
404
+
405
+ Args:
406
+ reference_model_inputs ([`Mapping[str, Tensor]`):
407
+ Reference inputs for the model.
408
+
409
+ Returns:
410
+ `Mapping[str, Tensor]`: The mapping holding the kwargs to provide to the model's forward function
411
+ """
412
+ return reference_model_inputs
413
+
414
+ def patch_ops(self):
415
+ for spec in self._patching_specs:
416
+ custom_op = spec.custom_op if spec.op_wrapper is None else spec.op_wrapper(spec.custom_op)
417
+ setattr(spec.o, spec.name, custom_op)
418
+
419
+ def restore_ops(self):
420
+ for spec in self._patching_specs:
421
+ orig_op = spec.orig_op if spec.op_wrapper is None else spec.op_wrapper(spec.orig_op)
422
+ setattr(spec.o, spec.name, orig_op)
423
+
424
+ @classmethod
425
+ def flatten_output_collection_property(cls, name: str, field: Iterable[Any]) -> Dict[str, Any]:
426
+ """
427
+ Flatten any potential nested structure expanding the name of the field with the index of the element within the
428
+ structure.
429
+
430
+ Args:
431
+ name: The name of the nested structure
432
+ field: The structure to, potentially, be flattened
433
+
434
+ Returns:
435
+ (Dict[str, Any]): Outputs with flattened structure and key mapping this new structure.
436
+
437
+ """
438
+ from itertools import chain
439
+
440
+ return {f"{name}.{idx}": item for idx, item in enumerate(chain.from_iterable(field))}
441
+
442
+
443
+ class OnnxConfigWithPast(OnnxConfig, ABC):
444
+ def __init__(
445
+ self,
446
+ config: "PretrainedConfig",
447
+ task: str = "default",
448
+ patching_specs: List[PatchingSpec] = None,
449
+ use_past: bool = False,
450
+ ):
451
+ super().__init__(config, task=task, patching_specs=patching_specs)
452
+ self.use_past = use_past
453
+
454
+ @classmethod
455
+ def with_past(cls, config: "PretrainedConfig", task: str = "default") -> "OnnxConfigWithPast":
456
+ """
457
+ Instantiate a OnnxConfig with `use_past` attribute set to True
458
+
459
+ Args:
460
+ config: The underlying model's config to use when exporting to ONNX
461
+
462
+ Returns:
463
+ OnnxConfig with `.use_past = True`
464
+ """
465
+ return cls(config, task=task, use_past=True)
466
+
467
+ @property
468
+ def outputs(self) -> Mapping[str, Mapping[int, str]]:
469
+ common_outputs = super().outputs
470
+ if self.use_past:
471
+ self.fill_with_past_key_values_(common_outputs, direction="outputs")
472
+
473
+ return common_outputs
474
+
475
+ @property
476
+ def values_override(self) -> Optional[Mapping[str, Any]]:
477
+ if hasattr(self._config, "use_cache"):
478
+ return {"use_cache": self.use_past}
479
+
480
+ return None
481
+
482
+ @property
483
+ def num_layers(self) -> int:
484
+ """
485
+ The number of layers attribute retrieved from the model config. Override this for model configs where the
486
+ number of layers attribute is not called `num_layers`.
487
+ """
488
+ if not hasattr(self._config, "num_layers"):
489
+ raise AttributeError(
490
+ "could not find the number of layers attribute in the model configuration, override the num_layers"
491
+ " property of the model OnnxConfig to solve this"
492
+ )
493
+ return self._config.num_layers
494
+
495
+ @property
496
+ def num_attention_heads(self) -> int:
497
+ """
498
+ The number of attention heads attribute retrieved from the model config. Override this for model configs where
499
+ the number of attention heads attribute is not called `num_attention_heads`.
500
+ """
501
+ if not hasattr(self._config, "num_attention_heads"):
502
+ raise AttributeError(
503
+ "could not find the number of attention heads attribute in the model configuration, override the"
504
+ " num_attention_heads property of the model OnnxConfig to solve this"
505
+ )
506
+ return self._config.num_attention_heads
507
+
508
+ def generate_dummy_inputs(
509
+ self,
510
+ tokenizer: "PreTrainedTokenizerBase",
511
+ batch_size: int = -1,
512
+ seq_length: int = -1,
513
+ is_pair: bool = False,
514
+ framework: Optional[TensorType] = None,
515
+ ) -> Mapping[str, Any]:
516
+ # TODO: should we set seq_length = 1 when self.use_past = True?
517
+ common_inputs = super().generate_dummy_inputs(
518
+ tokenizer, batch_size=batch_size, seq_length=seq_length, is_pair=is_pair, framework=framework
519
+ )
520
+
521
+ if self.use_past:
522
+ if not is_torch_available():
523
+ raise ValueError("Cannot generate dummy past_keys inputs without PyTorch installed.")
524
+ else:
525
+ import torch
526
+
527
+ batch, seqlen = common_inputs["input_ids"].shape
528
+ # Not using the same length for past_key_values
529
+ past_key_values_length = seqlen + 2
530
+ shape = (
531
+ batch,
532
+ self.num_attention_heads,
533
+ past_key_values_length,
534
+ self._config.hidden_size // self.num_attention_heads,
535
+ )
536
+
537
+ if "attention_mask" in common_inputs:
538
+ mask_dtype = common_inputs["attention_mask"].dtype
539
+ common_inputs["attention_mask"] = torch.cat(
540
+ [common_inputs["attention_mask"], torch.ones(batch, past_key_values_length, dtype=mask_dtype)],
541
+ dim=1,
542
+ )
543
+
544
+ common_inputs["past_key_values"] = []
545
+ for _ in range(self.num_layers):
546
+ common_inputs["past_key_values"].append((torch.zeros(shape), torch.zeros(shape)))
547
+
548
+ return common_inputs
549
+
550
+ def fill_with_past_key_values_(
551
+ self, inputs_or_outputs: Mapping[str, Mapping[int, str]], direction: str, inverted_values_shape: bool = False
552
+ ):
553
+ """
554
+ Fill the input_or_outputs mapping with past_key_values dynamic axes, taking the given direction into account.
555
+
556
+ Args:
557
+ inputs_or_outputs: The mapping to fill.
558
+ direction: either "inputs" or "outputs", it specifies whether input_or_outputs is the input mapping or the
559
+ output mapping, this is important for axes naming.
560
+ inverted_values_shape:
561
+ If `True`, store values on dynamic axis 1, else on axis 2.
562
+
563
+ """
564
+ if direction not in ["inputs", "outputs"]:
565
+ raise ValueError(f'direction must either be "inputs" or "outputs", but {direction} was given')
566
+
567
+ name = "past_key_values" if direction == "inputs" else "present"
568
+ for i in range(self.num_layers):
569
+ inputs_or_outputs[f"{name}.{i}.key"] = {0: "batch", 2: "past_sequence + sequence"}
570
+ if inverted_values_shape:
571
+ inputs_or_outputs[f"{name}.{i}.value"] = {0: "batch", 1: "past_sequence + sequence"}
572
+ else:
573
+ inputs_or_outputs[f"{name}.{i}.value"] = {0: "batch", 2: "past_sequence + sequence"}
574
+
575
+ def _flatten_past_key_values_(self, flattened_output, name, idx, t):
576
+ flattened_output[f"{name}.{idx}.key"] = t[0]
577
+ flattened_output[f"{name}.{idx}.value"] = t[1]
578
+
579
+ def flatten_output_collection_property(self, name: str, field: Iterable[Any]) -> Dict[str, Any]:
580
+ flattened_output = {}
581
+ if name in ["present", "past_key_values"]:
582
+ for idx, t in enumerate(field):
583
+ self._flatten_past_key_values_(flattened_output, name, idx, t)
584
+ else:
585
+ flattened_output = super().flatten_output_collection_property(name, field)
586
+
587
+ return flattened_output
588
+
589
+
590
+ class OnnxSeq2SeqConfigWithPast(OnnxConfigWithPast):
591
+ @property
592
+ def outputs(self) -> Mapping[str, Mapping[int, str]]:
593
+ common_outputs = super(OnnxConfigWithPast, self).outputs
594
+ # Renaming the outputs axes properly.
595
+ for name, axes_names in common_outputs.items():
596
+ sequence_name = "encoder_sequence" if "encoder" in name else "decoder_sequence"
597
+ for axis_idx, name in axes_names.items():
598
+ if "sequence" in name:
599
+ axes_names[axis_idx] = sequence_name
600
+ # We reset the value as the order in common_outputs (OrderedDict) is lost otherwise
601
+ else:
602
+ axes_names[axis_idx] = name
603
+ if self.use_past:
604
+ self.fill_with_past_key_values_(common_outputs, direction="outputs")
605
+
606
+ return common_outputs
607
+
608
+ @property
609
+ def num_layers(self) -> Tuple[int]:
610
+ try:
611
+ num_layers = super().num_layers
612
+ num_layers = (num_layers, num_layers)
613
+ except AttributeError:
614
+ if hasattr(self._config, "encoder_layers") and hasattr(self._config, "decoder_layers"):
615
+ num_layers = (self._config.encoder_layers, self._config.decoder_layers)
616
+ else:
617
+ raise AttributeError(
618
+ "could not find the number of encoder and decoder layers attributes in the model configuration,"
619
+ " override the num_layers property of the model OnnxConfig to solve this"
620
+ )
621
+
622
+ return num_layers
623
+
624
+ @property
625
+ def num_attention_heads(self) -> Tuple[int]:
626
+ try:
627
+ num_attention_heads = super().num_attention_heads
628
+ num_attention_heads = (num_attention_heads, num_attention_heads)
629
+ except AttributeError:
630
+ if hasattr(self._config, "encoder_attention_heads") and hasattr(self._config, "decoder_attention_heads"):
631
+ num_attention_heads = (self._config.encoder_attention_heads, self._config.decoder_attention_heads)
632
+ else:
633
+ raise AttributeError(
634
+ "could not find the number of attention heads for the encoder and the decoder attributes in the"
635
+ " model configuration, override the num_attention_heads property of the model OnnxConfig to solve"
636
+ " this"
637
+ )
638
+ return num_attention_heads
639
+
640
+ def generate_dummy_inputs(
641
+ self,
642
+ tokenizer: "PreTrainedTokenizerBase",
643
+ batch_size: int = -1,
644
+ seq_length: int = -1,
645
+ is_pair: bool = False,
646
+ framework: Optional[TensorType] = None,
647
+ ) -> Mapping[str, Any]:
648
+ encoder_inputs = super(OnnxConfigWithPast, self).generate_dummy_inputs(
649
+ tokenizer, batch_size=batch_size, seq_length=seq_length, is_pair=is_pair, framework=framework
650
+ )
651
+
652
+ # Generate decoder inputs
653
+ decoder_seq_length = seq_length if not self.use_past else 1
654
+ decoder_inputs = super(OnnxConfigWithPast, self).generate_dummy_inputs(
655
+ tokenizer, batch_size=batch_size, seq_length=decoder_seq_length, is_pair=is_pair, framework=framework
656
+ )
657
+ decoder_inputs = {f"decoder_{name}": tensor for name, tensor in decoder_inputs.items()}
658
+ common_inputs = dict(**encoder_inputs, **decoder_inputs)
659
+
660
+ if self.use_past:
661
+ if not is_torch_available():
662
+ raise ValueError("Cannot generate dummy past_keys inputs without PyTorch installed.")
663
+ else:
664
+ import torch
665
+ batch = common_inputs["input_ids"].shape[0]
666
+ encoder_seq_length = common_inputs["input_ids"].shape[1]
667
+ decoder_seq_length = common_inputs["decoder_input_ids"].shape[1]
668
+ num_encoder_attention_heads, num_decoder_attention_heads = self.num_attention_heads
669
+ encoder_shape = (
670
+ batch,
671
+ num_encoder_attention_heads,
672
+ encoder_seq_length,
673
+ self._config.hidden_size // num_encoder_attention_heads,
674
+ )
675
+ decoder_shape = (
676
+ batch,
677
+ num_decoder_attention_heads,
678
+ # Not using the same length for past_key_values
679
+ decoder_seq_length + 3,
680
+ self._config.hidden_size // num_decoder_attention_heads,
681
+ )
682
+
683
+ common_inputs["past_key_values"] = []
684
+ # If the number of encoder and decoder layers are present in the model configuration, both are considered
685
+ num_encoder_layers, num_decoder_layers = self.num_layers
686
+ min_num_layers = min(num_encoder_layers, num_decoder_layers)
687
+ max_num_layers = max(num_encoder_layers, num_decoder_layers) - min_num_layers
688
+ remaining_side_name = "encoder" if num_encoder_layers > num_decoder_layers else "decoder"
689
+
690
+ for _ in range(min_num_layers):
691
+ # For encoder-decoder models, past_key_values contains pre-computed values for both the encoder and the
692
+ # decoder layers, hence a tuple of 4 tensors instead of 2
693
+ common_inputs["past_key_values"].append(
694
+ (
695
+ torch.zeros(decoder_shape),
696
+ torch.zeros(decoder_shape),
697
+ torch.zeros(encoder_shape),
698
+ torch.zeros(encoder_shape),
699
+ )
700
+ )
701
+
702
+ # TODO: test this.
703
+ shape = encoder_shape if remaining_side_name == "encoder" else decoder_shape
704
+ for _ in range(min_num_layers, max_num_layers):
705
+ common_inputs["past_key_values"].append((torch.zeros(shape), torch.zeros(shape)))
706
+
707
+ return common_inputs
708
+
709
+ def fill_with_past_key_values_(self, inputs_or_outputs: Mapping[str, Mapping[int, str]], direction: str):
710
+ if direction not in ["inputs", "outputs"]:
711
+ raise ValueError(f'direction must either be "inputs" or "outputs", but {direction} was given')
712
+
713
+ name = "past_key_values" if direction == "inputs" else "present"
714
+
715
+ # If the number of encoder and decoder layers are present in the model configuration, both are considered
716
+ num_encoder_layers, num_decoder_layers = self.num_layers
717
+ min_num_layers = min(num_encoder_layers, num_decoder_layers)
718
+ max_num_layers = max(num_encoder_layers, num_decoder_layers) - min_num_layers
719
+ remaining_side_name = "encoder" if num_encoder_layers > num_decoder_layers else "decoder"
720
+
721
+ encoder_sequence = "past_encoder_sequence"
722
+ decoder_sequence = "past_decoder_sequence" if direction == "inputs" else "past_decoder_sequence + sequence"
723
+
724
+ for i in range(min_num_layers):
725
+ inputs_or_outputs[f"{name}.{i}.decoder.key"] = {0: "batch", 2: decoder_sequence}
726
+ inputs_or_outputs[f"{name}.{i}.decoder.value"] = {0: "batch", 2: decoder_sequence}
727
+ inputs_or_outputs[f"{name}.{i}.encoder.key"] = {0: "batch", 2: encoder_sequence}
728
+ inputs_or_outputs[f"{name}.{i}.encoder.value"] = {0: "batch", 2: encoder_sequence}
729
+
730
+ for i in range(min_num_layers, max_num_layers):
731
+ if remaining_side_name == "encoder":
732
+ axes_info = {0: "batch", 2: encoder_sequence}
733
+ else:
734
+ axes_info = {0: "batch", 2: decoder_sequence}
735
+ inputs_or_outputs[f"{name}.{i}.{remaining_side_name}.key"] = axes_info
736
+
737
+ def _flatten_past_key_values_(self, flattened_output, name, idx, t):
738
+ flattened_output[f"{name}.{idx}.decoder.key"] = t[0]
739
+ flattened_output[f"{name}.{idx}.decoder.value"] = t[1]
740
+ flattened_output[f"{name}.{idx}.encoder.key"] = t[2]
741
+ flattened_output[f"{name}.{idx}.encoder.value"] = t[3]
env-llmeval/lib/python3.10/site-packages/transformers/onnx/convert.py ADDED
@@ -0,0 +1,460 @@
1
+ # Copyright 2021 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import warnings
16
+ from inspect import signature
17
+ from itertools import chain
18
+ from pathlib import Path
19
+ from typing import TYPE_CHECKING, Iterable, List, Tuple, Union
20
+
21
+ import numpy as np
22
+ from packaging.version import Version, parse
23
+
24
+ from ..tokenization_utils_base import PreTrainedTokenizerBase
25
+ from ..utils import (
26
+ TensorType,
27
+ is_tf_available,
28
+ is_torch_available,
29
+ logging,
30
+ )
31
+ from .config import OnnxConfig
32
+
33
+
34
+ if is_torch_available():
35
+ from ..modeling_utils import PreTrainedModel
36
+
37
+ if is_tf_available():
38
+ from ..modeling_tf_utils import TFPreTrainedModel
39
+
40
+ if TYPE_CHECKING:
41
+ from ..feature_extraction_utils import FeatureExtractionMixin
42
+ from ..processing_utils import ProcessorMixin
43
+ from ..tokenization_utils import PreTrainedTokenizer
44
+
45
+
46
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
47
+
48
+
49
+ # This is the minimal required version to support some ONNX Runtime features
50
+ ORT_QUANTIZE_MINIMUM_VERSION = parse("1.4.0")
51
+
52
+
53
+ def check_onnxruntime_requirements(minimum_version: Version):
54
+ """
55
+ Check that onnxruntime is installed and that the installed version is recent enough
56
+
57
+ Raises:
58
+ ImportError: If onnxruntime is not installed or the installed version is too old
59
+ """
60
+ try:
61
+ import onnxruntime
62
+
63
+ # Parse the version of the installed onnxruntime
64
+ ort_version = parse(onnxruntime.__version__)
65
+
66
+ # We require 1.4.0 minimum
67
+ if ort_version < ORT_QUANTIZE_MINIMUM_VERSION:
68
+ raise ImportError(
69
+ f"We found an older version of onnxruntime ({onnxruntime.__version__}) "
70
+ f"but we require onnxruntime to be >= {minimum_version} to enable all the conversions options.\n"
71
+ "Please update onnxruntime by running `pip install --upgrade onnxruntime`"
72
+ )
73
+
74
+ except ImportError:
75
+ raise ImportError(
76
+ "onnxruntime doesn't seem to be currently installed. "
77
+ "Please install the onnxruntime by running `pip install onnxruntime`"
78
+ " and relaunch the conversion."
79
+ )
80
+
81
+
82
+ def export_pytorch(
83
+ preprocessor: Union["PreTrainedTokenizer", "FeatureExtractionMixin", "ProcessorMixin"],
84
+ model: "PreTrainedModel",
85
+ config: OnnxConfig,
86
+ opset: int,
87
+ output: Path,
88
+ tokenizer: "PreTrainedTokenizer" = None,
89
+ device: str = "cpu",
90
+ ) -> Tuple[List[str], List[str]]:
91
+ """
92
+ Export a PyTorch model to an ONNX Intermediate Representation (IR)
93
+
94
+ Args:
95
+ preprocessor: ([`PreTrainedTokenizer`], [`FeatureExtractionMixin`] or [`ProcessorMixin`]):
96
+ The preprocessor used for encoding the data.
97
+ model ([`PreTrainedModel`]):
98
+ The model to export.
99
+ config ([`~onnx.config.OnnxConfig`]):
100
+ The ONNX configuration associated with the exported model.
101
+ opset (`int`):
102
+ The version of the ONNX operator set to use.
103
+ output (`Path`):
104
+ Directory to store the exported ONNX model.
105
+ device (`str`, *optional*, defaults to `cpu`):
106
+ The device on which the ONNX model will be exported. Either `cpu` or `cuda`.
107
+
108
+ Returns:
109
+ `Tuple[List[str], List[str]]`: A tuple with an ordered list of the model's inputs, and the named inputs from
110
+ the ONNX configuration.
111
+ """
112
+
113
+ if isinstance(preprocessor, PreTrainedTokenizerBase) and tokenizer is not None:
114
+ raise ValueError("You cannot provide both a tokenizer and a preprocessor to export the model.")
115
+ if tokenizer is not None:
116
+ warnings.warn(
117
+ "The `tokenizer` argument is deprecated and will be removed in version 5 of Transformers. Use"
118
+ " `preprocessor` instead.",
119
+ FutureWarning,
120
+ )
121
+ logger.info("Overwriting the `preprocessor` argument with `tokenizer` to generate dummmy inputs.")
122
+ preprocessor = tokenizer
123
+
124
+ if issubclass(type(model), PreTrainedModel):
125
+ import torch
126
+ from torch.onnx import export as onnx_export
127
+
128
+ logger.info(f"Using framework PyTorch: {torch.__version__}")
129
+ with torch.no_grad():
130
+ model.config.return_dict = True
131
+ model.eval()
132
+
133
+ # Check if we need to override certain configuration item
134
+ if config.values_override is not None:
135
+ logger.info(f"Overriding {len(config.values_override)} configuration item(s)")
136
+ for override_config_key, override_config_value in config.values_override.items():
137
+ logger.info(f"\t- {override_config_key} -> {override_config_value}")
138
+ setattr(model.config, override_config_key, override_config_value)
139
+
140
+ # Ensure inputs match
141
+ # TODO: Check when exporting QA we provide "is_pair=True"
142
+ model_inputs = config.generate_dummy_inputs(preprocessor, framework=TensorType.PYTORCH)
143
+ device = torch.device(device)
144
+ if device.type == "cuda" and torch.cuda.is_available():
145
+ model.to(device)
146
+ model_inputs_device = {}
147
+ for k, v in model_inputs.items():
148
+ if isinstance(v, Tuple):
149
+ model_inputs_device[k] = tuple(
150
+ x.to(device) if isinstance(x, torch.Tensor) else None for x in v
151
+ )
152
+ elif isinstance(v, List):
153
+ model_inputs_device[k] = [
154
+ tuple(x.to(device) if isinstance(x, torch.Tensor) else None for x in t) for t in v
155
+ ]
156
+ else:
157
+ model_inputs_device[k] = v.to(device)
158
+
159
+ model_inputs = model_inputs_device
160
+
161
+ inputs_match, matched_inputs = ensure_model_and_config_inputs_match(model, model_inputs.keys())
162
+ onnx_outputs = list(config.outputs.keys())
163
+
164
+ if not inputs_match:
165
+ raise ValueError("Model and config inputs doesn't match")
166
+
167
+ config.patch_ops()
168
+
169
+ onnx_export(
170
+ model,
171
+ (model_inputs,),
172
+ f=output.as_posix(),
173
+ input_names=list(config.inputs.keys()),
174
+ output_names=onnx_outputs,
175
+ dynamic_axes=dict(chain(config.inputs.items(), config.outputs.items())),
176
+ do_constant_folding=True,
177
+ opset_version=opset,
178
+ )
179
+
180
+ config.restore_ops()
181
+
182
+ return matched_inputs, onnx_outputs
183
+
184
+
185
+ def export_tensorflow(
186
+ preprocessor: Union["PreTrainedTokenizer", "FeatureExtractionMixin"],
187
+ model: "TFPreTrainedModel",
188
+ config: OnnxConfig,
189
+ opset: int,
190
+ output: Path,
191
+ tokenizer: "PreTrainedTokenizer" = None,
192
+ ) -> Tuple[List[str], List[str]]:
193
+ """
194
+ Export a TensorFlow model to an ONNX Intermediate Representation (IR)
195
+
196
+ Args:
197
+ preprocessor: ([`PreTrainedTokenizer`] or [`FeatureExtractionMixin`]):
198
+ The preprocessor used for encoding the data.
199
+ model ([`TFPreTrainedModel`]):
200
+ The model to export.
201
+ config ([`~onnx.config.OnnxConfig`]):
202
+ The ONNX configuration associated with the exported model.
203
+ opset (`int`):
204
+ The version of the ONNX operator set to use.
205
+ output (`Path`):
206
+ Directory to store the exported ONNX model.
207
+
208
+ Returns:
209
+ `Tuple[List[str], List[str]]`: A tuple with an ordered list of the model's inputs, and the named inputs from
210
+ the ONNX configuration.
211
+ """
212
+ import onnx
213
+ import tensorflow as tf
214
+ import tf2onnx
215
+
216
+ if isinstance(preprocessor, PreTrainedTokenizerBase) and tokenizer is not None:
217
+ raise ValueError("You cannot provide both a tokenizer and preprocessor to export the model.")
218
+ if tokenizer is not None:
219
+ warnings.warn(
220
+ "The `tokenizer` argument is deprecated and will be removed in version 5 of Transformers. Use"
221
+ " `preprocessor` instead.",
222
+ FutureWarning,
223
+ )
224
+ logger.info("Overwriting the `preprocessor` argument with `tokenizer` to generate dummmy inputs.")
225
+ preprocessor = tokenizer
226
+
227
+ model.config.return_dict = True
228
+
229
+ # Check if we need to override certain configuration item
230
+ if config.values_override is not None:
231
+ logger.info(f"Overriding {len(config.values_override)} configuration item(s)")
232
+ for override_config_key, override_config_value in config.values_override.items():
233
+ logger.info(f"\t- {override_config_key} -> {override_config_value}")
234
+ setattr(model.config, override_config_key, override_config_value)
235
+
236
+ # Ensure inputs match
237
+ model_inputs = config.generate_dummy_inputs(preprocessor, framework=TensorType.TENSORFLOW)
238
+ inputs_match, matched_inputs = ensure_model_and_config_inputs_match(model, model_inputs.keys())
239
+ onnx_outputs = list(config.outputs.keys())
240
+
241
+ input_signature = [
242
+ tf.TensorSpec([None] * tensor.ndim, dtype=tensor.dtype, name=key) for key, tensor in model_inputs.items()
243
+ ]
244
+ onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=opset)
245
+ onnx.save(onnx_model, output.as_posix())
246
+ config.restore_ops()
247
+
248
+ return matched_inputs, onnx_outputs
249
+
250
+
251
+ def export(
252
+ preprocessor: Union["PreTrainedTokenizer", "FeatureExtractionMixin", "ProcessorMixin"],
253
+ model: Union["PreTrainedModel", "TFPreTrainedModel"],
254
+ config: OnnxConfig,
255
+ opset: int,
256
+ output: Path,
257
+ tokenizer: "PreTrainedTokenizer" = None,
258
+ device: str = "cpu",
259
+ ) -> Tuple[List[str], List[str]]:
260
+ """
261
+ Export a PyTorch or TensorFlow model to an ONNX Intermediate Representation (IR)
262
+
263
+ Args:
264
+ preprocessor: ([`PreTrainedTokenizer`], [`FeatureExtractionMixin`] or [`ProcessorMixin`]):
265
+ The preprocessor used for encoding the data.
266
+ model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
267
+ The model to export.
268
+ config ([`~onnx.config.OnnxConfig`]):
269
+ The ONNX configuration associated with the exported model.
270
+ opset (`int`):
271
+ The version of the ONNX operator set to use.
272
+ output (`Path`):
273
+ Directory to store the exported ONNX model.
274
+ device (`str`, *optional*, defaults to `cpu`):
275
+ The device on which the ONNX model will be exported. Either `cpu` or `cuda`. Only PyTorch is supported for
276
+ export on CUDA devices.
277
+
278
+ Returns:
279
+ `Tuple[List[str], List[str]]`: A tuple with an ordered list of the model's inputs, and the named outputs from
280
+ the ONNX configuration.
281
+ """
282
+ if not (is_torch_available() or is_tf_available()):
283
+ raise ImportError(
284
+ "Cannot convert because neither PyTorch nor TensorFlow are not installed. "
285
+ "Please install torch or tensorflow first."
286
+ )
287
+
288
+ if is_tf_available() and isinstance(model, TFPreTrainedModel) and device == "cuda":
289
+ raise RuntimeError("`tf2onnx` does not support export on CUDA device.")
290
+
291
+ if isinstance(preprocessor, PreTrainedTokenizerBase) and tokenizer is not None:
292
+ raise ValueError("You cannot provide both a tokenizer and a preprocessor to export the model.")
293
+ if tokenizer is not None:
294
+ warnings.warn(
295
+ "The `tokenizer` argument is deprecated and will be removed in version 5 of Transformers. Use"
296
+ " `preprocessor` instead.",
297
+ FutureWarning,
298
+ )
299
+ logger.info("Overwriting the `preprocessor` argument with `tokenizer` to generate dummmy inputs.")
300
+ preprocessor = tokenizer
301
+
302
+ if is_torch_available():
303
+ from ..utils import get_torch_version
304
+
305
+ if not config.is_torch_support_available:
306
+ logger.warning(
307
+ f"Unsupported PyTorch version for this model. Minimum required is {config.torch_onnx_minimum_version},"
308
+ f" got: {get_torch_version()}"
309
+ )
310
+
311
+ if is_torch_available() and issubclass(type(model), PreTrainedModel):
312
+ return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)
313
+ elif is_tf_available() and issubclass(type(model), TFPreTrainedModel):
314
+ return export_tensorflow(preprocessor, model, config, opset, output, tokenizer=tokenizer)
315
+
316
+
317
+ def validate_model_outputs(
318
+ config: OnnxConfig,
319
+ preprocessor: Union["PreTrainedTokenizer", "FeatureExtractionMixin", "ProcessorMixin"],
320
+ reference_model: Union["PreTrainedModel", "TFPreTrainedModel"],
321
+ onnx_model: Path,
322
+ onnx_named_outputs: List[str],
323
+ atol: float,
324
+ tokenizer: "PreTrainedTokenizer" = None,
325
+ ):
326
+ from onnxruntime import InferenceSession, SessionOptions
327
+
328
+ logger.info("Validating ONNX model...")
329
+
330
+ if isinstance(preprocessor, PreTrainedTokenizerBase) and tokenizer is not None:
331
+ raise ValueError("You cannot provide both a tokenizer and a preprocessor to validate the model outputs.")
332
+ if tokenizer is not None:
333
+ warnings.warn(
334
+ "The `tokenizer` argument is deprecated and will be removed in version 5 of Transformers. Use"
335
+ " `preprocessor` instead.",
336
+ FutureWarning,
337
+ )
338
+ logger.info("Overwriting the `preprocessor` argument with `tokenizer` to generate dummmy inputs.")
339
+ preprocessor = tokenizer
340
+
341
+ # generate inputs with a different batch_size and seq_len than was used for the conversion to properly test
342
+ # dynamic input shapes.
343
+ if is_torch_available() and issubclass(type(reference_model), PreTrainedModel):
344
+ reference_model_inputs = config.generate_dummy_inputs(
345
+ preprocessor,
346
+ batch_size=config.default_fixed_batch + 1,
347
+ seq_length=config.default_fixed_sequence + 1,
348
+ framework=TensorType.PYTORCH,
349
+ )
350
+ else:
351
+ reference_model_inputs = config.generate_dummy_inputs(
352
+ preprocessor,
353
+ batch_size=config.default_fixed_batch + 1,
354
+ seq_length=config.default_fixed_sequence + 1,
355
+ framework=TensorType.TENSORFLOW,
356
+ )
357
+
358
+ # Create ONNX Runtime session
359
+ options = SessionOptions()
360
+ session = InferenceSession(onnx_model.as_posix(), options, providers=["CPUExecutionProvider"])
361
+
362
+ # Compute outputs from the reference model
363
+ if is_torch_available() and issubclass(type(reference_model), PreTrainedModel):
364
+ reference_model.to("cpu")
365
+ ref_outputs = reference_model(**reference_model_inputs)
366
+ ref_outputs_dict = {}
367
+
368
+ # We flatten potential collection of outputs (i.e. past_keys) to a flat structure
369
+ for name, value in ref_outputs.items():
370
+ # Overwriting the output name as "present" since it is the name used for the ONNX outputs
371
+ # ("past_key_values" being taken for the ONNX inputs)
372
+ if name == "past_key_values":
373
+ name = "present"
374
+ if isinstance(value, (list, tuple)):
375
+ value = config.flatten_output_collection_property(name, value)
376
+ ref_outputs_dict.update(value)
377
+ else:
378
+ ref_outputs_dict[name] = value
379
+
380
+ # Create onnxruntime inputs from the reference model inputs
381
+ reference_model_inputs_onnxruntime = config.generate_dummy_inputs_onnxruntime(reference_model_inputs)
382
+
383
+ # We flatten potential collection of inputs (i.e. past_keys)
384
+ onnx_inputs = {}
385
+ for name, value in reference_model_inputs_onnxruntime.items():
386
+ if isinstance(value, (list, tuple)):
387
+ value = config.flatten_output_collection_property(name, value)
388
+ onnx_inputs.update({tensor_name: pt_tensor.numpy() for tensor_name, pt_tensor in value.items()})
389
+ else:
390
+ onnx_inputs[name] = value.numpy()
391
+
392
+ # Compute outputs from the ONNX model
393
+ onnx_outputs = session.run(onnx_named_outputs, onnx_inputs)
394
+
395
+ # Check that the ONNX output names are a subset of the reference model's output names
396
+ ref_outputs_set, onnx_outputs_set = set(ref_outputs_dict.keys()), set(onnx_named_outputs)
397
+ if not onnx_outputs_set.issubset(ref_outputs_set):
398
+ logger.info(
399
+ f"\t-[x] ONNX model output names {onnx_outputs_set} do not match reference model {ref_outputs_set}"
400
+ )
401
+
402
+ raise ValueError(
403
+ "Outputs doesn't match between reference model and ONNX exported model: "
404
+ f"{onnx_outputs_set.difference(ref_outputs_set)}"
405
+ )
406
+ else:
407
+ logger.info(f"\t-[✓] ONNX model output names match reference model ({onnx_outputs_set})")
408
+
409
+ # Check the shape and values match
410
+ for name, ort_value in zip(onnx_named_outputs, onnx_outputs):
411
+ if is_torch_available() and issubclass(type(reference_model), PreTrainedModel):
412
+ ref_value = ref_outputs_dict[name].detach().numpy()
413
+ else:
414
+ ref_value = ref_outputs_dict[name].numpy()
415
+ logger.info(f'\t- Validating ONNX Model output "{name}":')
416
+
417
+ # Shape
418
+ if not ort_value.shape == ref_value.shape:
419
+ logger.info(f"\t\t-[x] shape {ort_value.shape} doesn't match {ref_value.shape}")
420
+ raise ValueError(
421
+ "Outputs shape doesn't match between reference model and ONNX exported model: "
422
+ f"Got {ref_value.shape} (reference) and {ort_value.shape} (ONNX)"
423
+ )
424
+ else:
425
+ logger.info(f"\t\t-[✓] {ort_value.shape} matches {ref_value.shape}")
426
+
427
+ # Values
428
+ if not np.allclose(ref_value, ort_value, atol=atol):
429
+ bad_indices = np.logical_not(np.isclose(ref_value, ort_value, atol=atol))
430
+ logger.info(f"\t\t-[x] values not close enough (atol: {atol})")
431
+ raise ValueError(
432
+ "Outputs values doesn't match between reference model and ONNX exported model: "
433
+ f"Got max absolute difference of: {np.amax(np.abs(ref_value - ort_value))} for "
434
+ f"{ref_value[bad_indices]} vs {ort_value[bad_indices]}"
435
+ )
436
+ else:
437
+ logger.info(f"\t\t-[✓] all values close (atol: {atol})")
438
+
439
+
440
+ def ensure_model_and_config_inputs_match(
441
+ model: Union["PreTrainedModel", "TFPreTrainedModel"], model_inputs: Iterable[str]
442
+ ) -> Tuple[bool, List[str]]:
443
+ """
444
+
445
+ :param model: The model to export. :param model_inputs: The input names generated by the ONNX config. :return: Whether the inputs are a subset of the model's forward signature, and the ordered list of matching inputs.
446
+ """
447
+ if is_torch_available() and issubclass(type(model), PreTrainedModel):
448
+ forward_parameters = signature(model.forward).parameters
449
+ else:
450
+ forward_parameters = signature(model.call).parameters
451
+ model_inputs_set = set(model_inputs)
452
+
453
+ # We are fine if config_inputs has more keys than model_inputs
454
+ forward_inputs_set = set(forward_parameters.keys())
455
+ is_ok = model_inputs_set.issubset(forward_inputs_set)
456
+
457
+ # Make sure the input order match (VERY IMPORTANT !!!!)
458
+ matching_inputs = forward_inputs_set.intersection(model_inputs_set)
459
+ ordered_inputs = [parameter for parameter in forward_parameters.keys() if parameter in matching_inputs]
460
+ return is_ok, ordered_inputs
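For orientation, here is a minimal sketch of how the helpers above might be wired together outside of the CLI. The checkpoint name, feature, opset and atol values are illustrative assumptions (not part of this diff), and it assumes PyTorch, onnx and onnxruntime are installed:

from pathlib import Path

from transformers.onnx.convert import export, validate_model_outputs
from transformers.onnx.features import FeaturesManager
from transformers.onnx.utils import get_preprocessor

model_name = "distilbert-base-uncased"  # illustrative checkpoint
feature = "sequence-classification"

# Resolve the preprocessor, model and ONNX config for the requested feature
preprocessor = get_preprocessor(model_name)  # assumes a tokenizer exists for this checkpoint
model = FeaturesManager.get_model_from_feature(feature, model_name)
model_type, onnx_config_constructor = FeaturesManager.check_supported_model_or_raise(model, feature=feature)
onnx_config = onnx_config_constructor(model.config)

# Export to ONNX, then compare ONNX Runtime outputs against the reference model
output_path = Path("model.onnx")
onnx_inputs, onnx_outputs = export(preprocessor, model, onnx_config, opset=13, output=output_path)
validate_model_outputs(onnx_config, preprocessor, model, output_path, onnx_outputs, atol=1e-4)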
env-llmeval/lib/python3.10/site-packages/transformers/onnx/features.py ADDED
@@ -0,0 +1,749 @@
1
+ import os
2
+ from functools import partial, reduce
3
+ from typing import TYPE_CHECKING, Callable, Dict, Optional, Tuple, Type, Union
4
+
5
+ import transformers
6
+
7
+ from .. import PretrainedConfig, is_tf_available, is_torch_available
8
+ from ..utils import TF2_WEIGHTS_NAME, WEIGHTS_NAME, logging
9
+ from .config import OnnxConfig
10
+
11
+
12
+ if TYPE_CHECKING:
13
+ from transformers import PreTrainedModel, TFPreTrainedModel
14
+
15
+
16
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
17
+
18
+ if is_torch_available():
19
+ from transformers.models.auto import (
20
+ AutoModel,
21
+ AutoModelForCausalLM,
22
+ AutoModelForImageClassification,
23
+ AutoModelForImageSegmentation,
24
+ AutoModelForMaskedImageModeling,
25
+ AutoModelForMaskedLM,
26
+ AutoModelForMultipleChoice,
27
+ AutoModelForObjectDetection,
28
+ AutoModelForQuestionAnswering,
29
+ AutoModelForSemanticSegmentation,
30
+ AutoModelForSeq2SeqLM,
31
+ AutoModelForSequenceClassification,
32
+ AutoModelForSpeechSeq2Seq,
33
+ AutoModelForTokenClassification,
34
+ AutoModelForVision2Seq,
35
+ )
36
+ if is_tf_available():
37
+ from transformers.models.auto import (
38
+ TFAutoModel,
39
+ TFAutoModelForCausalLM,
40
+ TFAutoModelForMaskedLM,
41
+ TFAutoModelForMultipleChoice,
42
+ TFAutoModelForQuestionAnswering,
43
+ TFAutoModelForSemanticSegmentation,
44
+ TFAutoModelForSeq2SeqLM,
45
+ TFAutoModelForSequenceClassification,
46
+ TFAutoModelForTokenClassification,
47
+ )
48
+ if not is_torch_available() and not is_tf_available():
49
+ logger.warning(
50
+ "The ONNX export features are only supported for PyTorch or TensorFlow. You will not be able to export models"
51
+ " without one of these libraries installed."
52
+ )
53
+
54
+
55
+ def supported_features_mapping(
56
+ *supported_features: str, onnx_config_cls: str = None
57
+ ) -> Dict[str, Callable[[PretrainedConfig], OnnxConfig]]:
58
+ """
59
+ Generate the mapping between the supported features and their corresponding OnnxConfig for a given model.
60
+
61
+ Args:
62
+ *supported_features: The names of the supported features.
63
+ onnx_config_cls: The OnnxConfig full name corresponding to the model.
64
+
65
+ Returns:
66
+ The dictionary mapping a feature to an OnnxConfig constructor.
67
+ """
68
+ if onnx_config_cls is None:
69
+ raise ValueError("A OnnxConfig class must be provided")
70
+
71
+ config_cls = transformers
72
+ for attr_name in onnx_config_cls.split("."):
73
+ config_cls = getattr(config_cls, attr_name)
74
+ mapping = {}
75
+ for feature in supported_features:
76
+ if "-with-past" in feature:
77
+ task = feature.replace("-with-past", "")
78
+ mapping[feature] = partial(config_cls.with_past, task=task)
79
+ else:
80
+ mapping[feature] = partial(config_cls.from_model_config, task=feature)
81
+
82
+ return mapping
83
+
84
+
85
+ class FeaturesManager:
86
+ _TASKS_TO_AUTOMODELS = {}
87
+ _TASKS_TO_TF_AUTOMODELS = {}
88
+ if is_torch_available():
89
+ _TASKS_TO_AUTOMODELS = {
90
+ "default": AutoModel,
91
+ "masked-lm": AutoModelForMaskedLM,
92
+ "causal-lm": AutoModelForCausalLM,
93
+ "seq2seq-lm": AutoModelForSeq2SeqLM,
94
+ "sequence-classification": AutoModelForSequenceClassification,
95
+ "token-classification": AutoModelForTokenClassification,
96
+ "multiple-choice": AutoModelForMultipleChoice,
97
+ "object-detection": AutoModelForObjectDetection,
98
+ "question-answering": AutoModelForQuestionAnswering,
99
+ "image-classification": AutoModelForImageClassification,
100
+ "image-segmentation": AutoModelForImageSegmentation,
101
+ "masked-im": AutoModelForMaskedImageModeling,
102
+ "semantic-segmentation": AutoModelForSemanticSegmentation,
103
+ "vision2seq-lm": AutoModelForVision2Seq,
104
+ "speech2seq-lm": AutoModelForSpeechSeq2Seq,
105
+ }
106
+ if is_tf_available():
107
+ _TASKS_TO_TF_AUTOMODELS = {
108
+ "default": TFAutoModel,
109
+ "masked-lm": TFAutoModelForMaskedLM,
110
+ "causal-lm": TFAutoModelForCausalLM,
111
+ "seq2seq-lm": TFAutoModelForSeq2SeqLM,
112
+ "sequence-classification": TFAutoModelForSequenceClassification,
113
+ "token-classification": TFAutoModelForTokenClassification,
114
+ "multiple-choice": TFAutoModelForMultipleChoice,
115
+ "question-answering": TFAutoModelForQuestionAnswering,
116
+ "semantic-segmentation": TFAutoModelForSemanticSegmentation,
117
+ }
118
+
119
+ # Set of model topologies we support associated to the features supported by each topology and the factory
120
+ _SUPPORTED_MODEL_TYPE = {
121
+ "albert": supported_features_mapping(
122
+ "default",
123
+ "masked-lm",
124
+ "sequence-classification",
125
+ "multiple-choice",
126
+ "token-classification",
127
+ "question-answering",
128
+ onnx_config_cls="models.albert.AlbertOnnxConfig",
129
+ ),
130
+ "bart": supported_features_mapping(
131
+ "default",
132
+ "default-with-past",
133
+ "causal-lm",
134
+ "causal-lm-with-past",
135
+ "seq2seq-lm",
136
+ "seq2seq-lm-with-past",
137
+ "sequence-classification",
138
+ "question-answering",
139
+ onnx_config_cls="models.bart.BartOnnxConfig",
140
+ ),
141
+ # BEiT cannot be used with the masked image modeling autoclass, so this feature is excluded here
142
+ "beit": supported_features_mapping(
143
+ "default", "image-classification", onnx_config_cls="models.beit.BeitOnnxConfig"
144
+ ),
145
+ "bert": supported_features_mapping(
146
+ "default",
147
+ "masked-lm",
148
+ "causal-lm",
149
+ "sequence-classification",
150
+ "multiple-choice",
151
+ "token-classification",
152
+ "question-answering",
153
+ onnx_config_cls="models.bert.BertOnnxConfig",
154
+ ),
155
+ "big-bird": supported_features_mapping(
156
+ "default",
157
+ "masked-lm",
158
+ "causal-lm",
159
+ "sequence-classification",
160
+ "multiple-choice",
161
+ "token-classification",
162
+ "question-answering",
163
+ onnx_config_cls="models.big_bird.BigBirdOnnxConfig",
164
+ ),
165
+ "bigbird-pegasus": supported_features_mapping(
166
+ "default",
167
+ "default-with-past",
168
+ "causal-lm",
169
+ "causal-lm-with-past",
170
+ "seq2seq-lm",
171
+ "seq2seq-lm-with-past",
172
+ "sequence-classification",
173
+ "question-answering",
174
+ onnx_config_cls="models.bigbird_pegasus.BigBirdPegasusOnnxConfig",
175
+ ),
176
+ "blenderbot": supported_features_mapping(
177
+ "default",
178
+ "default-with-past",
179
+ "causal-lm",
180
+ "causal-lm-with-past",
181
+ "seq2seq-lm",
182
+ "seq2seq-lm-with-past",
183
+ onnx_config_cls="models.blenderbot.BlenderbotOnnxConfig",
184
+ ),
185
+ "blenderbot-small": supported_features_mapping(
186
+ "default",
187
+ "default-with-past",
188
+ "causal-lm",
189
+ "causal-lm-with-past",
190
+ "seq2seq-lm",
191
+ "seq2seq-lm-with-past",
192
+ onnx_config_cls="models.blenderbot_small.BlenderbotSmallOnnxConfig",
193
+ ),
194
+ "bloom": supported_features_mapping(
195
+ "default",
196
+ "default-with-past",
197
+ "causal-lm",
198
+ "causal-lm-with-past",
199
+ "sequence-classification",
200
+ "token-classification",
201
+ onnx_config_cls="models.bloom.BloomOnnxConfig",
202
+ ),
203
+ "camembert": supported_features_mapping(
204
+ "default",
205
+ "masked-lm",
206
+ "causal-lm",
207
+ "sequence-classification",
208
+ "multiple-choice",
209
+ "token-classification",
210
+ "question-answering",
211
+ onnx_config_cls="models.camembert.CamembertOnnxConfig",
212
+ ),
213
+ "clip": supported_features_mapping(
214
+ "default",
215
+ onnx_config_cls="models.clip.CLIPOnnxConfig",
216
+ ),
217
+ "codegen": supported_features_mapping(
218
+ "default",
219
+ "causal-lm",
220
+ onnx_config_cls="models.codegen.CodeGenOnnxConfig",
221
+ ),
222
+ "convbert": supported_features_mapping(
223
+ "default",
224
+ "masked-lm",
225
+ "sequence-classification",
226
+ "multiple-choice",
227
+ "token-classification",
228
+ "question-answering",
229
+ onnx_config_cls="models.convbert.ConvBertOnnxConfig",
230
+ ),
231
+ "convnext": supported_features_mapping(
232
+ "default",
233
+ "image-classification",
234
+ onnx_config_cls="models.convnext.ConvNextOnnxConfig",
235
+ ),
236
+ "data2vec-text": supported_features_mapping(
237
+ "default",
238
+ "masked-lm",
239
+ "sequence-classification",
240
+ "multiple-choice",
241
+ "token-classification",
242
+ "question-answering",
243
+ onnx_config_cls="models.data2vec.Data2VecTextOnnxConfig",
244
+ ),
245
+ "data2vec-vision": supported_features_mapping(
246
+ "default",
247
+ "image-classification",
248
+ # ONNX doesn't support `adaptive_avg_pool2d` yet
249
+ # "semantic-segmentation",
250
+ onnx_config_cls="models.data2vec.Data2VecVisionOnnxConfig",
251
+ ),
252
+ "deberta": supported_features_mapping(
253
+ "default",
254
+ "masked-lm",
255
+ "sequence-classification",
256
+ "token-classification",
257
+ "question-answering",
258
+ onnx_config_cls="models.deberta.DebertaOnnxConfig",
259
+ ),
260
+ "deberta-v2": supported_features_mapping(
261
+ "default",
262
+ "masked-lm",
263
+ "sequence-classification",
264
+ "multiple-choice",
265
+ "token-classification",
266
+ "question-answering",
267
+ onnx_config_cls="models.deberta_v2.DebertaV2OnnxConfig",
268
+ ),
269
+ "deit": supported_features_mapping(
270
+ "default", "image-classification", onnx_config_cls="models.deit.DeiTOnnxConfig"
271
+ ),
272
+ "detr": supported_features_mapping(
273
+ "default",
274
+ "object-detection",
275
+ "image-segmentation",
276
+ onnx_config_cls="models.detr.DetrOnnxConfig",
277
+ ),
278
+ "distilbert": supported_features_mapping(
279
+ "default",
280
+ "masked-lm",
281
+ "sequence-classification",
282
+ "multiple-choice",
283
+ "token-classification",
284
+ "question-answering",
285
+ onnx_config_cls="models.distilbert.DistilBertOnnxConfig",
286
+ ),
287
+ "electra": supported_features_mapping(
288
+ "default",
289
+ "masked-lm",
290
+ "causal-lm",
291
+ "sequence-classification",
292
+ "multiple-choice",
293
+ "token-classification",
294
+ "question-answering",
295
+ onnx_config_cls="models.electra.ElectraOnnxConfig",
296
+ ),
297
+ "flaubert": supported_features_mapping(
298
+ "default",
299
+ "masked-lm",
300
+ "causal-lm",
301
+ "sequence-classification",
302
+ "multiple-choice",
303
+ "token-classification",
304
+ "question-answering",
305
+ onnx_config_cls="models.flaubert.FlaubertOnnxConfig",
306
+ ),
307
+ "gpt2": supported_features_mapping(
308
+ "default",
309
+ "default-with-past",
310
+ "causal-lm",
311
+ "causal-lm-with-past",
312
+ "sequence-classification",
313
+ "token-classification",
314
+ onnx_config_cls="models.gpt2.GPT2OnnxConfig",
315
+ ),
316
+ "gptj": supported_features_mapping(
317
+ "default",
318
+ "default-with-past",
319
+ "causal-lm",
320
+ "causal-lm-with-past",
321
+ "question-answering",
322
+ "sequence-classification",
323
+ onnx_config_cls="models.gptj.GPTJOnnxConfig",
324
+ ),
325
+ "gpt-neo": supported_features_mapping(
326
+ "default",
327
+ "default-with-past",
328
+ "causal-lm",
329
+ "causal-lm-with-past",
330
+ "sequence-classification",
331
+ onnx_config_cls="models.gpt_neo.GPTNeoOnnxConfig",
332
+ ),
333
+ "groupvit": supported_features_mapping(
334
+ "default",
335
+ onnx_config_cls="models.groupvit.GroupViTOnnxConfig",
336
+ ),
337
+ "ibert": supported_features_mapping(
338
+ "default",
339
+ "masked-lm",
340
+ "sequence-classification",
341
+ "multiple-choice",
342
+ "token-classification",
343
+ "question-answering",
344
+ onnx_config_cls="models.ibert.IBertOnnxConfig",
345
+ ),
346
+ "imagegpt": supported_features_mapping(
347
+ "default", "image-classification", onnx_config_cls="models.imagegpt.ImageGPTOnnxConfig"
348
+ ),
349
+ "layoutlm": supported_features_mapping(
350
+ "default",
351
+ "masked-lm",
352
+ "sequence-classification",
353
+ "token-classification",
354
+ onnx_config_cls="models.layoutlm.LayoutLMOnnxConfig",
355
+ ),
356
+ "layoutlmv3": supported_features_mapping(
357
+ "default",
358
+ "question-answering",
359
+ "sequence-classification",
360
+ "token-classification",
361
+ onnx_config_cls="models.layoutlmv3.LayoutLMv3OnnxConfig",
362
+ ),
363
+ "levit": supported_features_mapping(
364
+ "default", "image-classification", onnx_config_cls="models.levit.LevitOnnxConfig"
365
+ ),
366
+ "longt5": supported_features_mapping(
367
+ "default",
368
+ "default-with-past",
369
+ "seq2seq-lm",
370
+ "seq2seq-lm-with-past",
371
+ onnx_config_cls="models.longt5.LongT5OnnxConfig",
372
+ ),
373
+ "longformer": supported_features_mapping(
374
+ "default",
375
+ "masked-lm",
376
+ "multiple-choice",
377
+ "question-answering",
378
+ "sequence-classification",
379
+ "token-classification",
380
+ onnx_config_cls="models.longformer.LongformerOnnxConfig",
381
+ ),
382
+ "marian": supported_features_mapping(
383
+ "default",
384
+ "default-with-past",
385
+ "seq2seq-lm",
386
+ "seq2seq-lm-with-past",
387
+ "causal-lm",
388
+ "causal-lm-with-past",
389
+ onnx_config_cls="models.marian.MarianOnnxConfig",
390
+ ),
391
+ "mbart": supported_features_mapping(
392
+ "default",
393
+ "default-with-past",
394
+ "causal-lm",
395
+ "causal-lm-with-past",
396
+ "seq2seq-lm",
397
+ "seq2seq-lm-with-past",
398
+ "sequence-classification",
399
+ "question-answering",
400
+ onnx_config_cls="models.mbart.MBartOnnxConfig",
401
+ ),
402
+ "mobilebert": supported_features_mapping(
403
+ "default",
404
+ "masked-lm",
405
+ "sequence-classification",
406
+ "multiple-choice",
407
+ "token-classification",
408
+ "question-answering",
409
+ onnx_config_cls="models.mobilebert.MobileBertOnnxConfig",
410
+ ),
411
+ "mobilenet-v1": supported_features_mapping(
412
+ "default",
413
+ "image-classification",
414
+ onnx_config_cls="models.mobilenet_v1.MobileNetV1OnnxConfig",
415
+ ),
416
+ "mobilenet-v2": supported_features_mapping(
417
+ "default",
418
+ "image-classification",
419
+ onnx_config_cls="models.mobilenet_v2.MobileNetV2OnnxConfig",
420
+ ),
421
+ "mobilevit": supported_features_mapping(
422
+ "default",
423
+ "image-classification",
424
+ onnx_config_cls="models.mobilevit.MobileViTOnnxConfig",
425
+ ),
426
+ "mt5": supported_features_mapping(
427
+ "default",
428
+ "default-with-past",
429
+ "seq2seq-lm",
430
+ "seq2seq-lm-with-past",
431
+ onnx_config_cls="models.mt5.MT5OnnxConfig",
432
+ ),
433
+ "m2m-100": supported_features_mapping(
434
+ "default",
435
+ "default-with-past",
436
+ "seq2seq-lm",
437
+ "seq2seq-lm-with-past",
438
+ onnx_config_cls="models.m2m_100.M2M100OnnxConfig",
439
+ ),
440
+ "owlvit": supported_features_mapping(
441
+ "default",
442
+ onnx_config_cls="models.owlvit.OwlViTOnnxConfig",
443
+ ),
444
+ "perceiver": supported_features_mapping(
445
+ "image-classification",
446
+ "masked-lm",
447
+ "sequence-classification",
448
+ onnx_config_cls="models.perceiver.PerceiverOnnxConfig",
449
+ ),
450
+ "poolformer": supported_features_mapping(
451
+ "default", "image-classification", onnx_config_cls="models.poolformer.PoolFormerOnnxConfig"
452
+ ),
453
+ "rembert": supported_features_mapping(
454
+ "default",
455
+ "masked-lm",
456
+ "causal-lm",
457
+ "sequence-classification",
458
+ "multiple-choice",
459
+ "token-classification",
460
+ "question-answering",
461
+ onnx_config_cls="models.rembert.RemBertOnnxConfig",
462
+ ),
463
+ "resnet": supported_features_mapping(
464
+ "default",
465
+ "image-classification",
466
+ onnx_config_cls="models.resnet.ResNetOnnxConfig",
467
+ ),
468
+ "roberta": supported_features_mapping(
469
+ "default",
470
+ "masked-lm",
471
+ "causal-lm",
472
+ "sequence-classification",
473
+ "multiple-choice",
474
+ "token-classification",
475
+ "question-answering",
476
+ onnx_config_cls="models.roberta.RobertaOnnxConfig",
477
+ ),
478
+ "roformer": supported_features_mapping(
479
+ "default",
480
+ "masked-lm",
481
+ "causal-lm",
482
+ "sequence-classification",
483
+ "token-classification",
484
+ "multiple-choice",
485
+ "question-answering",
486
+ "token-classification",
487
+ onnx_config_cls="models.roformer.RoFormerOnnxConfig",
488
+ ),
489
+ "segformer": supported_features_mapping(
490
+ "default",
491
+ "image-classification",
492
+ "semantic-segmentation",
493
+ onnx_config_cls="models.segformer.SegformerOnnxConfig",
494
+ ),
495
+ "squeezebert": supported_features_mapping(
496
+ "default",
497
+ "masked-lm",
498
+ "sequence-classification",
499
+ "multiple-choice",
500
+ "token-classification",
501
+ "question-answering",
502
+ onnx_config_cls="models.squeezebert.SqueezeBertOnnxConfig",
503
+ ),
504
+ "swin": supported_features_mapping(
505
+ "default", "image-classification", onnx_config_cls="models.swin.SwinOnnxConfig"
506
+ ),
507
+ "t5": supported_features_mapping(
508
+ "default",
509
+ "default-with-past",
510
+ "seq2seq-lm",
511
+ "seq2seq-lm-with-past",
512
+ onnx_config_cls="models.t5.T5OnnxConfig",
513
+ ),
514
+ "vision-encoder-decoder": supported_features_mapping(
515
+ "vision2seq-lm", onnx_config_cls="models.vision_encoder_decoder.VisionEncoderDecoderOnnxConfig"
516
+ ),
517
+ "vit": supported_features_mapping(
518
+ "default", "image-classification", onnx_config_cls="models.vit.ViTOnnxConfig"
519
+ ),
520
+ "whisper": supported_features_mapping(
521
+ "default",
522
+ "default-with-past",
523
+ "speech2seq-lm",
524
+ "speech2seq-lm-with-past",
525
+ onnx_config_cls="models.whisper.WhisperOnnxConfig",
526
+ ),
527
+ "xlm": supported_features_mapping(
528
+ "default",
529
+ "masked-lm",
530
+ "causal-lm",
531
+ "sequence-classification",
532
+ "multiple-choice",
533
+ "token-classification",
534
+ "question-answering",
535
+ onnx_config_cls="models.xlm.XLMOnnxConfig",
536
+ ),
537
+ "xlm-roberta": supported_features_mapping(
538
+ "default",
539
+ "masked-lm",
540
+ "causal-lm",
541
+ "sequence-classification",
542
+ "multiple-choice",
543
+ "token-classification",
544
+ "question-answering",
545
+ onnx_config_cls="models.xlm_roberta.XLMRobertaOnnxConfig",
546
+ ),
547
+ "yolos": supported_features_mapping(
548
+ "default",
549
+ "object-detection",
550
+ onnx_config_cls="models.yolos.YolosOnnxConfig",
551
+ ),
552
+ }
553
+
554
+ AVAILABLE_FEATURES = sorted(reduce(lambda s1, s2: s1 | s2, (v.keys() for v in _SUPPORTED_MODEL_TYPE.values())))
555
+
556
+ @staticmethod
557
+ def get_supported_features_for_model_type(
558
+ model_type: str, model_name: Optional[str] = None
559
+ ) -> Dict[str, Callable[[PretrainedConfig], OnnxConfig]]:
560
+ """
561
+ Tries to retrieve the feature -> OnnxConfig constructor map from the model type.
562
+
563
+ Args:
564
+ model_type (`str`):
565
+ The model type to retrieve the supported features for.
566
+ model_name (`str`, *optional*):
567
+ The name attribute of the model object, only used for the exception message.
568
+
569
+ Returns:
570
+ The dictionary mapping each feature to a corresponding OnnxConfig constructor.
571
+ """
572
+ model_type = model_type.lower()
573
+ if model_type not in FeaturesManager._SUPPORTED_MODEL_TYPE:
574
+ model_type_and_model_name = f"{model_type} ({model_name})" if model_name else model_type
575
+ raise KeyError(
576
+ f"{model_type_and_model_name} is not supported yet. "
577
+ f"Only {list(FeaturesManager._SUPPORTED_MODEL_TYPE.keys())} are supported. "
578
+ f"If you want to support {model_type} please propose a PR or open up an issue."
579
+ )
580
+ return FeaturesManager._SUPPORTED_MODEL_TYPE[model_type]
581
+
582
+ @staticmethod
583
+ def feature_to_task(feature: str) -> str:
584
+ return feature.replace("-with-past", "")
585
+
586
+ @staticmethod
587
+ def _validate_framework_choice(framework: str):
588
+ """
589
+ Validates if the framework requested for the export is both correct and available, otherwise throws an
590
+ exception.
591
+ """
592
+ if framework not in ["pt", "tf"]:
593
+ raise ValueError(
594
+ f"Only two frameworks are supported for ONNX export: pt or tf, but {framework} was provided."
595
+ )
596
+ elif framework == "pt" and not is_torch_available():
597
+ raise RuntimeError("Cannot export model to ONNX using PyTorch because no PyTorch package was found.")
598
+ elif framework == "tf" and not is_tf_available():
599
+ raise RuntimeError("Cannot export model to ONNX using TensorFlow because no TensorFlow package was found.")
600
+
601
+ @staticmethod
602
+ def get_model_class_for_feature(feature: str, framework: str = "pt") -> Type:
603
+ """
604
+ Attempts to retrieve an AutoModel class from a feature name.
605
+
606
+ Args:
607
+ feature (`str`):
608
+ The feature required.
609
+ framework (`str`, *optional*, defaults to `"pt"`):
610
+ The framework to use for the export.
611
+
612
+ Returns:
613
+ The AutoModel class corresponding to the feature.
614
+ """
615
+ task = FeaturesManager.feature_to_task(feature)
616
+ FeaturesManager._validate_framework_choice(framework)
617
+ if framework == "pt":
618
+ task_to_automodel = FeaturesManager._TASKS_TO_AUTOMODELS
619
+ else:
620
+ task_to_automodel = FeaturesManager._TASKS_TO_TF_AUTOMODELS
621
+ if task not in task_to_automodel:
622
+ raise KeyError(
623
+ f"Unknown task: {feature}. Possible values are {list(FeaturesManager._TASKS_TO_AUTOMODELS.values())}"
624
+ )
625
+
626
+ return task_to_automodel[task]
627
+
628
+ @staticmethod
629
+ def determine_framework(model: str, framework: str = None) -> str:
630
+ """
631
+ Determines the framework to use for the export.
632
+
633
+ The priority is in the following order:
634
+ 1. User input via `framework`.
635
+ 2. If a local checkpoint is provided, use the same framework as the checkpoint.
636
+ 3. The framework available in the environment, with priority given to PyTorch.
637
+
638
+ Args:
639
+ model (`str`):
640
+ The name of the model to export.
641
+ framework (`str`, *optional*, defaults to `None`):
642
+ The framework to use for the export. See above for priority if none provided.
643
+
644
+ Returns:
645
+ The framework to use for the export.
646
+
647
+ """
648
+ if framework is not None:
649
+ return framework
650
+
651
+ framework_map = {"pt": "PyTorch", "tf": "TensorFlow"}
652
+ exporter_map = {"pt": "torch", "tf": "tf2onnx"}
653
+
654
+ if os.path.isdir(model):
655
+ if os.path.isfile(os.path.join(model, WEIGHTS_NAME)):
656
+ framework = "pt"
657
+ elif os.path.isfile(os.path.join(model, TF2_WEIGHTS_NAME)):
658
+ framework = "tf"
659
+ else:
660
+ raise FileNotFoundError(
661
+ "Cannot determine framework from given checkpoint location."
662
+ f" There should be a {WEIGHTS_NAME} for PyTorch"
663
+ f" or {TF2_WEIGHTS_NAME} for TensorFlow."
664
+ )
665
+ logger.info(f"Local {framework_map[framework]} model found.")
666
+ else:
667
+ if is_torch_available():
668
+ framework = "pt"
669
+ elif is_tf_available():
670
+ framework = "tf"
671
+ else:
672
+ raise EnvironmentError("Neither PyTorch nor TensorFlow found in environment. Cannot export to ONNX.")
673
+
674
+ logger.info(f"Framework not requested. Using {exporter_map[framework]} to export to ONNX.")
675
+
676
+ return framework
677
+
678
+ @staticmethod
679
+ def get_model_from_feature(
680
+ feature: str, model: str, framework: str = None, cache_dir: str = None
681
+ ) -> Union["PreTrainedModel", "TFPreTrainedModel"]:
682
+ """
683
+ Attempts to retrieve a model from a model's name and the feature to be enabled.
684
+
685
+ Args:
686
+ feature (`str`):
687
+ The feature required.
688
+ model (`str`):
689
+ The name of the model to export.
690
+ framework (`str`, *optional*, defaults to `None`):
691
+ The framework to use for the export. See `FeaturesManager.determine_framework` for the priority order
692
+ used when none is provided.
693
+
694
+ Returns:
695
+ The instance of the model.
696
+
697
+ """
698
+ framework = FeaturesManager.determine_framework(model, framework)
699
+ model_class = FeaturesManager.get_model_class_for_feature(feature, framework)
700
+ try:
701
+ model = model_class.from_pretrained(model, cache_dir=cache_dir)
702
+ except OSError:
703
+ if framework == "pt":
704
+ logger.info("Loading TensorFlow model in PyTorch before exporting to ONNX.")
705
+ model = model_class.from_pretrained(model, from_tf=True, cache_dir=cache_dir)
706
+ else:
707
+ logger.info("Loading PyTorch model in TensorFlow before exporting to ONNX.")
708
+ model = model_class.from_pretrained(model, from_pt=True, cache_dir=cache_dir)
709
+ return model
710
+
711
+ @staticmethod
712
+ def check_supported_model_or_raise(
713
+ model: Union["PreTrainedModel", "TFPreTrainedModel"], feature: str = "default"
714
+ ) -> Tuple[str, Callable]:
715
+ """
716
+ Check whether or not the model has the requested features.
717
+
718
+ Args:
719
+ model: The model to export.
720
+ feature: The name of the feature to check if it is available.
721
+
722
+ Returns:
723
+ (str) The type of the model, and (Callable) the constructor of the OnnxConfig holding the model export properties.
724
+
725
+ """
726
+ model_type = model.config.model_type.replace("_", "-")
727
+ model_name = getattr(model, "name", "")
728
+ model_features = FeaturesManager.get_supported_features_for_model_type(model_type, model_name=model_name)
729
+ if feature not in model_features:
730
+ raise ValueError(
731
+ f"{model.config.model_type} doesn't support feature {feature}. Supported values are: {model_features}"
732
+ )
733
+
734
+ return model.config.model_type, FeaturesManager._SUPPORTED_MODEL_TYPE[model_type][feature]
735
+
736
+ def get_config(model_type: str, feature: str) -> OnnxConfig:
737
+ """
738
+ Gets the OnnxConfig for a model_type and feature combination.
739
+
740
+ Args:
741
+ model_type (`str`):
742
+ The model type to retrieve the config for.
743
+ feature (`str`):
744
+ The feature to retrieve the config for.
745
+
746
+ Returns:
747
+ `OnnxConfig`: config for the combination
748
+ """
749
+ return FeaturesManager._SUPPORTED_MODEL_TYPE[model_type][feature]
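As a quick, hedged illustration of the lookup chain implemented by `FeaturesManager` (the model type and feature are arbitrary examples; PyTorch is assumed to be installed):

from transformers import BertConfig
from transformers.onnx.features import FeaturesManager

# List the features the "bert" topology supports
print(list(FeaturesManager.get_supported_features_for_model_type("bert").keys()))

# Resolve the AutoModel class and the OnnxConfig constructor for one feature
model_class = FeaturesManager.get_model_class_for_feature("sequence-classification", framework="pt")
onnx_config_constructor = FeaturesManager.get_config("bert", "sequence-classification")
onnx_config = onnx_config_constructor(BertConfig())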
env-llmeval/lib/python3.10/site-packages/transformers/onnx/utils.py ADDED
@@ -0,0 +1,109 @@
1
+ # Copyright 2021 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ from ctypes import c_float, sizeof
16
+ from enum import Enum
17
+ from typing import TYPE_CHECKING, Optional, Union
18
+
19
+
20
+ if TYPE_CHECKING:
21
+ from .. import AutoFeatureExtractor, AutoProcessor, AutoTokenizer # tests_ignore
22
+
23
+
24
+ class ParameterFormat(Enum):
25
+ Float = c_float
26
+
27
+ @property
28
+ def size(self) -> int:
29
+ """
30
+ Number of bytes required for this data type
31
+
32
+ Returns:
33
+ Integer > 0
34
+ """
35
+ return sizeof(self.value)
36
+
37
+
38
+ def compute_effective_axis_dimension(dimension: int, fixed_dimension: int, num_token_to_add: int = 0) -> int:
39
+ """
40
+
41
+ Args:
42
+ dimension: The runtime size of the axis (can be <= 0 when the axis is dynamic).
43
+ fixed_dimension: The fixed size to fall back on when the axis is dynamic.
44
+ num_token_to_add: Number of tokens (e.g. special tokens) to subtract from the resulting dimension.
45
+
46
+ Returns:
47
+ The effective dimension to use when generating dummy inputs.
48
+ """
49
+ # < 0 is possible if using a dynamic axis
50
+ if dimension <= 0:
51
+ dimension = fixed_dimension
52
+
53
+ dimension -= num_token_to_add
54
+ return dimension
55
+
56
+
57
+ def compute_serialized_parameters_size(num_parameters: int, dtype: ParameterFormat) -> int:
58
+ """
59
+ Compute the size taken by all the parameters for the given storage format when serializing the model
60
+
61
+ Args:
62
+ num_parameters: Number of parameters to be saved
63
+ dtype: The data format in which each parameter will be saved
64
+
65
+ Returns:
66
+ Size (in bytes) taken to save all the parameters
67
+ """
68
+ return num_parameters * dtype.size
69
+
70
+
71
+ def get_preprocessor(model_name: str) -> Optional[Union["AutoTokenizer", "AutoFeatureExtractor", "AutoProcessor"]]:
72
+ """
73
+ Gets a preprocessor (tokenizer, feature extractor or processor) that is available for `model_name`.
74
+
75
+ Args:
76
+ model_name (`str`): Name of the model for which a preprocessor is loaded.
77
+
78
+ Returns:
79
+ `Optional[Union[AutoTokenizer, AutoFeatureExtractor, AutoProcessor]]`:
80
+ If a processor is found, it is returned. Otherwise, if a tokenizer or a feature extractor exists, it is
81
+ returned. If both a tokenizer and a feature extractor exist, an error is raised. The function returns
82
+ `None` if no preprocessor is found.
83
+ """
84
+ # Avoid circular imports by only importing this here.
85
+ from .. import AutoFeatureExtractor, AutoProcessor, AutoTokenizer # tests_ignore
86
+
87
+ try:
88
+ return AutoProcessor.from_pretrained(model_name)
89
+ except (ValueError, OSError, KeyError):
90
+ tokenizer, feature_extractor = None, None
91
+ try:
92
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
93
+ except (OSError, KeyError):
94
+ pass
95
+ try:
96
+ feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
97
+ except (OSError, KeyError):
98
+ pass
99
+
100
+ if tokenizer is not None and feature_extractor is not None:
101
+ raise ValueError(
102
+ f"Couldn't auto-detect preprocessor for {model_name}. Found both a tokenizer and a feature extractor."
103
+ )
104
+ elif tokenizer is None and feature_extractor is None:
105
+ return None
106
+ elif tokenizer is not None:
107
+ return tokenizer
108
+ else:
109
+ return feature_extractor
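A brief sketch of how these utilities might be exercised; the checkpoint name is only an example and `get_preprocessor` requires the corresponding tokenizer or feature extractor to be downloadable:

from transformers.onnx.utils import (
    ParameterFormat,
    compute_effective_axis_dimension,
    compute_serialized_parameters_size,
    get_preprocessor,
)

# Roughly 440 MB for a ~110M-parameter model serialized as float32 (4 bytes per parameter)
print(compute_serialized_parameters_size(110_000_000, ParameterFormat.Float))

# A dynamic axis (dimension <= 0) falls back to the fixed value, minus any tokens to add
print(compute_effective_axis_dimension(-1, fixed_dimension=2, num_token_to_add=0))  # -> 2

# Auto-detect a processor, tokenizer or feature extractor for a checkpoint
preprocessor = get_preprocessor("distilbert-base-uncased")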
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_detectron2_objects.cpython-310.pyc ADDED
Binary file (789 Bytes). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_essentia_and_librosa_and_pretty_midi_and_scipy_and_torch_objects.cpython-310.pyc ADDED
Binary file (1.2 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_flax_objects.cpython-310.pyc ADDED
Binary file (49.4 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_music_objects.cpython-310.pyc ADDED
Binary file (878 Bytes). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_sentencepiece_objects.cpython-310.pyc ADDED
Binary file (8.54 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_tf_objects.cpython-310.pyc ADDED
Binary file (102 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_torchaudio_objects.cpython-310.pyc ADDED
Binary file (908 Bytes). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/dummy_vision_objects.cpython-310.pyc ADDED
Binary file (20.8 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/fx.cpython-310.pyc ADDED
Binary file (37.7 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/generic.cpython-310.pyc ADDED
Binary file (22.7 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/hp_naming.cpython-310.pyc ADDED
Binary file (3.8 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/hub.cpython-310.pyc ADDED
Binary file (40 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/import_utils.cpython-310.pyc ADDED
Binary file (43.1 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/sentencepiece_model_pb2.cpython-310.pyc ADDED
Binary file (21.8 kB). View file
 
env-llmeval/lib/python3.10/site-packages/transformers/utils/__pycache__/versions.cpython-310.pyc ADDED
Binary file (3.15 kB). View file