applied-ai-018 committed
Commit eebdf37 · verified · 1 Parent(s): a29188b

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. llmeval-env/lib/python3.10/site-packages/absl/__init__.py +13 -0
  2. llmeval-env/lib/python3.10/site-packages/absl/app.py +480 -0
  3. llmeval-env/lib/python3.10/site-packages/absl/app.pyi +99 -0
  4. llmeval-env/lib/python3.10/site-packages/absl/command_name.py +63 -0
  5. llmeval-env/lib/python3.10/site-packages/absl/logging/__init__.py +1281 -0
  6. llmeval-env/lib/python3.10/site-packages/absl/logging/__init__.pyi +290 -0
  7. llmeval-env/lib/python3.10/site-packages/absl/logging/converter.py +214 -0
  8. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__init__.py +158 -0
  9. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/__init__.cpython-310.pyc +0 -0
  10. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/aqlm.cpython-310.pyc +0 -0
  11. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/awq.cpython-310.pyc +0 -0
  12. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/bitsandbytes.cpython-310.pyc +0 -0
  13. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/deepspeed.cpython-310.pyc +0 -0
  14. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/integration_utils.cpython-310.pyc +0 -0
  15. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/peft.cpython-310.pyc +0 -0
  16. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/quanto.cpython-310.pyc +0 -0
  17. llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/tpu.cpython-310.pyc +0 -0
  18. llmeval-env/lib/python3.10/site-packages/transformers/integrations/aqlm.py +99 -0
  19. llmeval-env/lib/python3.10/site-packages/transformers/integrations/awq.py +444 -0
  20. llmeval-env/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py +324 -0
  21. llmeval-env/lib/python3.10/site-packages/transformers/integrations/deepspeed.py +441 -0
  22. llmeval-env/lib/python3.10/site-packages/transformers/integrations/integration_utils.py +1914 -0
  23. llmeval-env/lib/python3.10/site-packages/transformers/integrations/peft.py +476 -0
  24. llmeval-env/lib/python3.10/site-packages/transformers/integrations/quanto.py +94 -0
  25. llmeval-env/lib/python3.10/site-packages/transformers/integrations/tpu.py +36 -0
  26. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__init__.py +1108 -0
  27. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/audio_classification.cpython-310.pyc +0 -0
  28. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/audio_utils.cpython-310.pyc +0 -0
  29. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/base.cpython-310.pyc +0 -0
  30. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/conversational.cpython-310.pyc +0 -0
  31. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/depth_estimation.cpython-310.pyc +0 -0
  32. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/document_question_answering.cpython-310.pyc +0 -0
  33. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_classification.cpython-310.pyc +0 -0
  34. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_feature_extraction.cpython-310.pyc +0 -0
  35. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_segmentation.cpython-310.pyc +0 -0
  36. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_to_image.cpython-310.pyc +0 -0
  37. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_to_text.cpython-310.pyc +0 -0
  38. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/mask_generation.cpython-310.pyc +0 -0
  39. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/object_detection.cpython-310.pyc +0 -0
  40. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/pt_utils.cpython-310.pyc +0 -0
  41. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/question_answering.cpython-310.pyc +0 -0
  42. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/table_question_answering.cpython-310.pyc +0 -0
  43. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/text2text_generation.cpython-310.pyc +0 -0
  44. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/text_classification.cpython-310.pyc +0 -0
  45. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/text_to_audio.cpython-310.pyc +0 -0
  46. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/token_classification.cpython-310.pyc +0 -0
  47. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/visual_question_answering.cpython-310.pyc +0 -0
  48. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/zero_shot_audio_classification.cpython-310.pyc +0 -0
  49. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/zero_shot_classification.cpython-310.pyc +0 -0
  50. llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/zero_shot_image_classification.cpython-310.pyc +0 -0
llmeval-env/lib/python3.10/site-packages/absl/__init__.py ADDED
@@ -0,0 +1,13 @@
+ # Copyright 2017 The Abseil Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
llmeval-env/lib/python3.10/site-packages/absl/app.py ADDED
@@ -0,0 +1,480 @@
1
+ # Copyright 2017 The Abseil Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ """Generic entry point for Abseil Python applications.
16
+
17
+ To use this module, define a ``main`` function with a single ``argv`` argument
18
+ and call ``app.run(main)``. For example::
19
+
20
+ def main(argv):
21
+ if len(argv) > 1:
22
+ raise app.UsageError('Too many command-line arguments.')
23
+
24
+ if __name__ == '__main__':
25
+ app.run(main)
26
+ """
27
+
28
+ import collections
29
+ import errno
30
+ import os
31
+ import pdb
32
+ import sys
33
+ import textwrap
34
+ import traceback
35
+
36
+ from absl import command_name
37
+ from absl import flags
38
+ from absl import logging
39
+
40
+ try:
41
+ import faulthandler
42
+ except ImportError:
43
+ faulthandler = None
44
+
45
+ FLAGS = flags.FLAGS
46
+
47
+ flags.DEFINE_boolean('run_with_pdb', False, 'Set to true for PDB debug mode')
48
+ flags.DEFINE_boolean('pdb_post_mortem', False,
49
+ 'Set to true to handle uncaught exceptions with PDB '
50
+ 'post mortem.')
51
+ flags.DEFINE_alias('pdb', 'pdb_post_mortem')
52
+ flags.DEFINE_boolean('run_with_profiling', False,
53
+ 'Set to true for profiling the script. '
54
+ 'Execution will be slower, and the output format might '
55
+ 'change over time.')
56
+ flags.DEFINE_string('profile_file', None,
57
+ 'Dump profile information to a file (for python -m '
58
+ 'pstats). Implies --run_with_profiling.')
59
+ flags.DEFINE_boolean('use_cprofile_for_profiling', True,
60
+ 'Use cProfile instead of the profile module for '
61
+ 'profiling. This has no effect unless '
62
+ '--run_with_profiling is set.')
63
+ flags.DEFINE_boolean('only_check_args', False,
64
+ 'Set to true to validate args and exit.',
65
+ allow_hide_cpp=True)
66
+
67
+
68
+ # If main() exits via an abnormal exception, call into these
69
+ # handlers before exiting.
70
+ EXCEPTION_HANDLERS = []
71
+
72
+
73
+ class Error(Exception):
74
+ pass
75
+
76
+
77
+ class UsageError(Error):
78
+ """Exception raised when the arguments supplied by the user are invalid.
79
+
80
+ Raise this when the arguments supplied are invalid from the point of
81
+ view of the application. For example when two mutually exclusive
82
+ flags have been supplied or when there are not enough non-flag
83
+ arguments. It is distinct from flags.Error which covers the lower
84
+ level of parsing and validating individual flags.
85
+ """
86
+
87
+ def __init__(self, message, exitcode=1):
88
+ super(UsageError, self).__init__(message)
89
+ self.exitcode = exitcode
90
+
91
+
92
+ class HelpFlag(flags.BooleanFlag):
93
+ """Special boolean flag that displays usage and raises SystemExit."""
94
+ NAME = 'help'
95
+ SHORT_NAME = '?'
96
+
97
+ def __init__(self):
98
+ super(HelpFlag, self).__init__(
99
+ self.NAME, False, 'show this help',
100
+ short_name=self.SHORT_NAME, allow_hide_cpp=True)
101
+
102
+ def parse(self, arg):
103
+ if self._parse(arg):
104
+ usage(shorthelp=True, writeto_stdout=True)
105
+ # Advertise --helpfull on stdout, since usage() was on stdout.
106
+ print()
107
+ print('Try --helpfull to get a list of all flags.')
108
+ sys.exit(1)
109
+
110
+
111
+ class HelpshortFlag(HelpFlag):
112
+ """--helpshort is an alias for --help."""
113
+ NAME = 'helpshort'
114
+ SHORT_NAME = None
115
+
116
+
117
+ class HelpfullFlag(flags.BooleanFlag):
118
+ """Display help for flags in the main module and all dependent modules."""
119
+
120
+ def __init__(self):
121
+ super(HelpfullFlag, self).__init__(
122
+ 'helpfull', False, 'show full help', allow_hide_cpp=True)
123
+
124
+ def parse(self, arg):
125
+ if self._parse(arg):
126
+ usage(writeto_stdout=True)
127
+ sys.exit(1)
128
+
129
+
130
+ class HelpXMLFlag(flags.BooleanFlag):
131
+ """Similar to HelpfullFlag, but generates output in XML format."""
132
+
133
+ def __init__(self):
134
+ super(HelpXMLFlag, self).__init__(
135
+ 'helpxml', False, 'like --helpfull, but generates XML output',
136
+ allow_hide_cpp=True)
137
+
138
+ def parse(self, arg):
139
+ if self._parse(arg):
140
+ flags.FLAGS.write_help_in_xml_format(sys.stdout)
141
+ sys.exit(1)
142
+
143
+
144
+ def parse_flags_with_usage(args):
145
+ """Tries to parse the flags, print usage, and exit if unparsable.
146
+
147
+ Args:
148
+ args: [str], a non-empty list of the command line arguments including
149
+ program name.
150
+
151
+ Returns:
152
+ [str], a non-empty list of remaining command line arguments after parsing
153
+ flags, including program name.
154
+ """
155
+ try:
156
+ return FLAGS(args)
157
+ except flags.Error as error:
158
+ message = str(error)
159
+ if '\n' in message:
160
+ final_message = 'FATAL Flags parsing error:\n%s\n' % textwrap.indent(
161
+ message, ' ')
162
+ else:
163
+ final_message = 'FATAL Flags parsing error: %s\n' % message
164
+ sys.stderr.write(final_message)
165
+ sys.stderr.write('Pass --helpshort or --helpfull to see help on flags.\n')
166
+ sys.exit(1)
167
+
168
+
169
+ _define_help_flags_called = False
170
+
171
+
172
+ def define_help_flags():
173
+ """Registers help flags. Idempotent."""
174
+ # Use a global to ensure idempotence.
175
+ global _define_help_flags_called
176
+
177
+ if not _define_help_flags_called:
178
+ flags.DEFINE_flag(HelpFlag())
179
+ flags.DEFINE_flag(HelpshortFlag()) # alias for --help
180
+ flags.DEFINE_flag(HelpfullFlag())
181
+ flags.DEFINE_flag(HelpXMLFlag())
182
+ _define_help_flags_called = True
183
+
184
+
185
+ def _register_and_parse_flags_with_usage(
186
+ argv=None,
187
+ flags_parser=parse_flags_with_usage,
188
+ ):
189
+ """Registers help flags, parses arguments and shows usage if appropriate.
190
+
191
+ This also calls sys.exit(0) if flag --only_check_args is True.
192
+
193
+ Args:
194
+ argv: [str], a non-empty list of the command line arguments including
195
+ program name, sys.argv is used if None.
196
+ flags_parser: Callable[[List[Text]], Any], the function used to parse flags.
197
+ The return value of this function is passed to `main` untouched.
198
+ It must guarantee FLAGS is parsed after this function is called.
199
+
200
+ Returns:
201
+ The return value of `flags_parser`. When using the default `flags_parser`,
202
+ it returns the following:
203
+ [str], a non-empty list of remaining command line arguments after parsing
204
+ flags, including program name.
205
+
206
+ Raises:
207
+ Error: Raised when flags_parser is called, but FLAGS is not parsed.
208
+ SystemError: Raised when it's called more than once.
209
+ """
210
+ if _register_and_parse_flags_with_usage.done:
211
+ raise SystemError('Flag registration can be done only once.')
212
+
213
+ define_help_flags()
214
+
215
+ original_argv = sys.argv if argv is None else argv
216
+ args_to_main = flags_parser(original_argv)
217
+ if not FLAGS.is_parsed():
218
+ raise Error('FLAGS must be parsed after flags_parser is called.')
219
+
220
+ # Exit when told so.
221
+ if FLAGS.only_check_args:
222
+ sys.exit(0)
223
+ # Immediately after flags are parsed, bump verbosity to INFO if the flag has
224
+ # not been set.
225
+ if FLAGS['verbosity'].using_default_value:
226
+ FLAGS.verbosity = 0
227
+ _register_and_parse_flags_with_usage.done = True
228
+
229
+ return args_to_main
230
+
231
+ _register_and_parse_flags_with_usage.done = False
232
+
233
+
234
+ def _run_main(main, argv):
235
+ """Calls main, optionally with pdb or profiler."""
236
+ if FLAGS.run_with_pdb:
237
+ sys.exit(pdb.runcall(main, argv))
238
+ elif FLAGS.run_with_profiling or FLAGS.profile_file:
239
+ # Avoid import overhead since most apps (including performance-sensitive
240
+ # ones) won't be run with profiling.
241
+ # pylint: disable=g-import-not-at-top
242
+ import atexit
243
+ if FLAGS.use_cprofile_for_profiling:
244
+ import cProfile as profile
245
+ else:
246
+ import profile
247
+ profiler = profile.Profile()
248
+ if FLAGS.profile_file:
249
+ atexit.register(profiler.dump_stats, FLAGS.profile_file)
250
+ else:
251
+ atexit.register(profiler.print_stats)
252
+ sys.exit(profiler.runcall(main, argv))
253
+ else:
254
+ sys.exit(main(argv))
255
+
256
+
257
+ def _call_exception_handlers(exception):
258
+ """Calls any installed exception handlers."""
259
+ for handler in EXCEPTION_HANDLERS:
260
+ try:
261
+ if handler.wants(exception):
262
+ handler.handle(exception)
263
+ except: # pylint: disable=bare-except
264
+ try:
265
+ # We don't want to stop for exceptions in the exception handlers but
266
+ # we shouldn't hide them either.
267
+ logging.error(traceback.format_exc())
268
+ except: # pylint: disable=bare-except
269
+ # In case even the logging statement fails, ignore.
270
+ pass
271
+
272
+
273
+ def run(
274
+ main,
275
+ argv=None,
276
+ flags_parser=parse_flags_with_usage,
277
+ ):
278
+ """Begins executing the program.
279
+
280
+ Args:
281
+ main: The main function to execute. It takes an single argument "argv",
282
+ which is a list of command line arguments with parsed flags removed.
283
+ The return value is passed to `sys.exit`, and so for example
284
+ a return value of 0 or None results in a successful termination, whereas
285
+ a return value of 1 results in abnormal termination.
286
+ For more details, see https://docs.python.org/3/library/sys#sys.exit
287
+ argv: A non-empty list of the command line arguments including program name,
288
+ sys.argv is used if None.
289
+ flags_parser: Callable[[List[Text]], Any], the function used to parse flags.
290
+ The return value of this function is passed to `main` untouched.
291
+ It must guarantee FLAGS is parsed after this function is called.
292
+ Should be passed as a keyword-only arg which will become mandatory in a
293
+ future release.
294
+ - Parses command line flags with the flag module.
295
+ - If there are any errors, prints usage().
296
+ - Calls main() with the remaining arguments.
297
+ - If main() raises a UsageError, prints usage and the error message.
298
+ """
299
+ try:
300
+ args = _run_init(
301
+ sys.argv if argv is None else argv,
302
+ flags_parser,
303
+ )
304
+ while _init_callbacks:
305
+ callback = _init_callbacks.popleft()
306
+ callback()
307
+ try:
308
+ _run_main(main, args)
309
+ except UsageError as error:
310
+ usage(shorthelp=True, detailed_error=error, exitcode=error.exitcode)
311
+ except:
312
+ exc = sys.exc_info()[1]
313
+ # Don't try to post-mortem debug successful SystemExits, since those
314
+ # mean there wasn't actually an error. In particular, the test framework
315
+ # raises SystemExit(False) even if all tests passed.
316
+ if isinstance(exc, SystemExit) and not exc.code:
317
+ raise
318
+
319
+ # Check the tty so that we don't hang waiting for input in an
320
+ # non-interactive scenario.
321
+ if FLAGS.pdb_post_mortem and sys.stdout.isatty():
322
+ traceback.print_exc()
323
+ print()
324
+ print(' *** Entering post-mortem debugging ***')
325
+ print()
326
+ pdb.post_mortem()
327
+ raise
328
+ except Exception as e:
329
+ _call_exception_handlers(e)
330
+ raise
331
+
332
+ # Callbacks which have been deferred until after _run_init has been called.
333
+ _init_callbacks = collections.deque()
334
+
335
+
336
+ def call_after_init(callback):
337
+ """Calls the given callback only once ABSL has finished initialization.
338
+
339
+ If ABSL has already finished initialization when ``call_after_init`` is
340
+ called then the callback is executed immediately, otherwise `callback` is
341
+ stored to be executed after ``app.run`` has finished initializing (aka. just
342
+ before the main function is called).
343
+
344
+ If called after ``app.run``, this is equivalent to calling ``callback()`` in
345
+ the caller thread. If called before ``app.run``, callbacks are run
346
+ sequentially (in an undefined order) in the same thread as ``app.run``.
347
+
348
+ Args:
349
+ callback: a callable to be called once ABSL has finished initialization.
350
+ This may be immediate if initialization has already finished. It
351
+ takes no arguments and returns nothing.
352
+ """
353
+ if _run_init.done:
354
+ callback()
355
+ else:
356
+ _init_callbacks.append(callback)
357
+
358
+
359
+ def _run_init(
360
+ argv,
361
+ flags_parser,
362
+ ):
363
+ """Does one-time initialization and re-parses flags on rerun."""
364
+ if _run_init.done:
365
+ return flags_parser(argv)
366
+ command_name.make_process_name_useful()
367
+ # Set up absl logging handler.
368
+ logging.use_absl_handler()
369
+ args = _register_and_parse_flags_with_usage(
370
+ argv=argv,
371
+ flags_parser=flags_parser,
372
+ )
373
+ if faulthandler:
374
+ try:
375
+ faulthandler.enable()
376
+ except Exception: # pylint: disable=broad-except
377
+ # Some tests verify stderr output very closely, so don't print anything.
378
+ # Disabled faulthandler is a low-impact error.
379
+ pass
380
+ _run_init.done = True
381
+ return args
382
+
383
+
384
+ _run_init.done = False
385
+
386
+
387
+ def usage(shorthelp=False, writeto_stdout=False, detailed_error=None,
388
+ exitcode=None):
389
+ """Writes __main__'s docstring to stderr with some help text.
390
+
391
+ Args:
392
+ shorthelp: bool, if True, prints only flags from the main module,
393
+ rather than all flags.
394
+ writeto_stdout: bool, if True, writes help message to stdout,
395
+ rather than to stderr.
396
+ detailed_error: str, additional detail about why usage info was presented.
397
+ exitcode: optional integer, if set, exits with this status code after
398
+ writing help.
399
+ """
400
+ if writeto_stdout:
401
+ stdfile = sys.stdout
402
+ else:
403
+ stdfile = sys.stderr
404
+
405
+ doc = sys.modules['__main__'].__doc__
406
+ if not doc:
407
+ doc = '\nUSAGE: %s [flags]\n' % sys.argv[0]
408
+ doc = flags.text_wrap(doc, indent=' ', firstline_indent='')
409
+ else:
410
+ # Replace all '%s' with sys.argv[0], and all '%%' with '%'.
411
+ num_specifiers = doc.count('%') - 2 * doc.count('%%')
412
+ try:
413
+ doc %= (sys.argv[0],) * num_specifiers
414
+ except (OverflowError, TypeError, ValueError):
415
+ # Just display the docstring as-is.
416
+ pass
417
+ if shorthelp:
418
+ flag_str = FLAGS.main_module_help()
419
+ else:
420
+ flag_str = FLAGS.get_help()
421
+ try:
422
+ stdfile.write(doc)
423
+ if flag_str:
424
+ stdfile.write('\nflags:\n')
425
+ stdfile.write(flag_str)
426
+ stdfile.write('\n')
427
+ if detailed_error is not None:
428
+ stdfile.write('\n%s\n' % detailed_error)
429
+ except IOError as e:
430
+ # We avoid printing a huge backtrace if we get EPIPE, because
431
+ # "foo.par --help | less" is a frequent use case.
432
+ if e.errno != errno.EPIPE:
433
+ raise
434
+ if exitcode is not None:
435
+ sys.exit(exitcode)
436
+
437
+
438
+ class ExceptionHandler(object):
439
+ """Base exception handler from which other may inherit."""
440
+
441
+ def wants(self, exc):
442
+ """Returns whether this handler wants to handle the exception or not.
443
+
444
+ This base class returns True for all exceptions by default. Override in
445
+ subclass if it wants to be more selective.
446
+
447
+ Args:
448
+ exc: Exception, the current exception.
449
+ """
450
+ del exc # Unused.
451
+ return True
452
+
453
+ def handle(self, exc):
454
+ """Do something with the current exception.
455
+
456
+ Args:
457
+ exc: Exception, the current exception
458
+
459
+ This method must be overridden.
460
+ """
461
+ raise NotImplementedError()
462
+
463
+
464
+ def install_exception_handler(handler):
465
+ """Installs an exception handler.
466
+
467
+ Args:
468
+ handler: ExceptionHandler, the exception handler to install.
469
+
470
+ Raises:
471
+ TypeError: Raised when the handler was not of the correct type.
472
+
473
+ All installed exception handlers will be called if main() exits via
474
+ an abnormal exception, i.e. not one of SystemExit, KeyboardInterrupt,
475
+ FlagsError or UsageError.
476
+ """
477
+ if not isinstance(handler, ExceptionHandler):
478
+ raise TypeError('handler of type %s does not inherit from ExceptionHandler'
479
+ % type(handler))
480
+ EXCEPTION_HANDLERS.append(handler)
llmeval-env/lib/python3.10/site-packages/absl/app.pyi ADDED
@@ -0,0 +1,99 @@
+
+ from typing import Any, Callable, Collection, Iterable, List, NoReturn, Optional, Text, TypeVar, Union, overload
+
+ from absl.flags import _flag
+
+
+ _MainArgs = TypeVar('_MainArgs')
+ _Exc = TypeVar('_Exc', bound=Exception)
+
+
+ class ExceptionHandler():
+
+   def wants(self, exc: _Exc) -> bool:
+     ...
+
+   def handle(self, exc: _Exc):
+     ...
+
+
+ EXCEPTION_HANDLERS: List[ExceptionHandler] = ...
+
+
+ class HelpFlag(_flag.BooleanFlag):
+   def __init__(self):
+     ...
+
+
+ class HelpshortFlag(HelpFlag):
+   ...
+
+
+ class HelpfullFlag(_flag.BooleanFlag):
+   def __init__(self):
+     ...
+
+
+ class HelpXMLFlag(_flag.BooleanFlag):
+   def __init__(self):
+     ...
+
+
+ def define_help_flags() -> None:
+   ...
+
+
+ @overload
+ def usage(shorthelp: Union[bool, int] = ...,
+           writeto_stdout: Union[bool, int] = ...,
+           detailed_error: Optional[Any] = ...,
+           exitcode: None = ...) -> None:
+   ...
+
+
+ @overload
+ def usage(shorthelp: Union[bool, int] = ...,
+           writeto_stdout: Union[bool, int] = ...,
+           detailed_error: Optional[Any] = ...,
+           exitcode: int = ...) -> NoReturn:
+   ...
+
+
+ def install_exception_handler(handler: ExceptionHandler) -> None:
+   ...
+
+
+ class Error(Exception):
+   ...
+
+
+ class UsageError(Error):
+   exitcode: int
+
+
+ def parse_flags_with_usage(args: List[Text]) -> List[Text]:
+   ...
+
+
+ def call_after_init(callback: Callable[[], Any]) -> None:
+   ...
+
+
+ # Without the flag_parser argument, `main` should require a List[Text].
+ @overload
+ def run(
+     main: Callable[[List[Text]], Any],
+     argv: Optional[List[Text]] = ...,
+     *,
+ ) -> NoReturn:
+   ...
+
+
+ @overload
+ def run(
+     main: Callable[[_MainArgs], Any],
+     argv: Optional[List[Text]] = ...,
+     *,
+     flags_parser: Callable[[List[Text]], _MainArgs],
+ ) -> NoReturn:
+   ...
llmeval-env/lib/python3.10/site-packages/absl/command_name.py ADDED
@@ -0,0 +1,63 @@
+ # Copyright 2017 The Abseil Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """A tiny stand alone library to change the kernel process name on Linux."""
+
+ import os
+ import sys
+
+ # This library must be kept small and stand alone. It is used by small things
+ # that require no extension modules.
+
+
+ def make_process_name_useful():
+   """Sets the process name to something better than 'python' if possible."""
+   set_kernel_process_name(os.path.basename(sys.argv[0]))
+
+
+ def set_kernel_process_name(name):
+   """Changes the Kernel's /proc/self/status process name on Linux.
+
+   The kernel name is NOT what will be shown by the ps or top command.
+   It is a 15 character string stored in the kernel's process table that
+   is included in the kernel log when a process is OOM killed.
+   The first 15 bytes of name are used. Non-ASCII unicode is replaced with '?'.
+
+   Does nothing if /proc/self/comm cannot be written or prctl() fails.
+
+   Args:
+     name: bytes|unicode, the Linux kernel's command name to set.
+   """
+   if not isinstance(name, bytes):
+     name = name.encode('ascii', 'replace')
+   try:
+     # This is preferred to using ctypes to try and call prctl() when possible.
+     with open('/proc/self/comm', 'wb') as proc_comm:
+       proc_comm.write(name[:15])
+   except EnvironmentError:
+     try:
+       import ctypes  # pylint: disable=g-import-not-at-top
+     except ImportError:
+       return  # No ctypes.
+     try:
+       libc = ctypes.CDLL('libc.so.6')
+     except EnvironmentError:
+       return  # No libc.so.6.
+     pr_set_name = ctypes.c_ulong(15)  # linux/prctl.h PR_SET_NAME value.
+     zero = ctypes.c_ulong(0)
+     try:
+       libc.prctl(pr_set_name, name, zero, zero, zero)
+       # Ignore the prctl return value. Nothing we can do if it errored.
+     except AttributeError:
+       return  # No prctl.
llmeval-env/lib/python3.10/site-packages/absl/logging/__init__.py ADDED
@@ -0,0 +1,1281 @@
1
+ # Copyright 2017 The Abseil Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ """Abseil Python logging module implemented on top of standard logging.
16
+
17
+ Simple usage::
18
+
19
+ from absl import logging
20
+
21
+ logging.info('Interesting Stuff')
22
+ logging.info('Interesting Stuff with Arguments: %d', 42)
23
+
24
+ logging.set_verbosity(logging.INFO)
25
+ logging.log(logging.DEBUG, 'This will *not* be printed')
26
+ logging.set_verbosity(logging.DEBUG)
27
+ logging.log(logging.DEBUG, 'This will be printed')
28
+
29
+ logging.warning('Worrying Stuff')
30
+ logging.error('Alarming Stuff')
31
+ logging.fatal('AAAAHHHHH!!!!') # Process exits.
32
+
33
+ Usage note: Do not pre-format the strings in your program code.
34
+ Instead, let the logging module perform argument interpolation.
35
+ This saves cycles because strings that don't need to be printed
36
+ are never formatted. Note that this module does not attempt to
37
+ interpolate arguments when no arguments are given. In other words::
38
+
39
+ logging.info('Interesting Stuff: %s')
40
+
41
+ does not raise an exception because logging.info() has only one
42
+ argument, the message string.
43
+
44
+ "Lazy" evaluation for debugging
45
+ -------------------------------
46
+
47
+ If you do something like this::
48
+
49
+ logging.debug('Thing: %s', thing.ExpensiveOp())
50
+
51
+ then the ExpensiveOp will be evaluated even if nothing
52
+ is printed to the log. To avoid this, use the level_debug() function::
53
+
54
+ if logging.level_debug():
55
+ logging.debug('Thing: %s', thing.ExpensiveOp())
56
+
57
+ Per file level logging is supported by logging.vlog() and
58
+ logging.vlog_is_on(). For example::
59
+
60
+ if logging.vlog_is_on(2):
61
+ logging.vlog(2, very_expensive_debug_message())
62
+
63
+ Notes on Unicode
64
+ ----------------
65
+
66
+ The log output is encoded as UTF-8. Don't pass data in other encodings in
67
+ bytes() instances -- instead pass unicode string instances when you need to
68
+ (for both the format string and arguments).
69
+
70
+ Note on critical and fatal:
71
+ Standard logging module defines fatal as an alias to critical, but it's not
72
+ documented, and it does NOT actually terminate the program.
73
+ This module only defines fatal but not critical, and it DOES terminate the
74
+ program.
75
+
76
+ The differences in behavior are historical and unfortunate.
77
+ """
78
+
79
+ import collections
80
+ from collections import abc
81
+ import getpass
82
+ import io
83
+ import itertools
84
+ import logging
85
+ import os
86
+ import socket
87
+ import struct
88
+ import sys
89
+ import tempfile
90
+ import threading
91
+ import time
92
+ import timeit
93
+ import traceback
94
+ import types
95
+ import warnings
96
+
97
+ from absl import flags
98
+ from absl.logging import converter
99
+
100
+ # pylint: disable=g-import-not-at-top
101
+ try:
102
+ from typing import NoReturn
103
+ except ImportError:
104
+ pass
105
+
106
+ # pylint: enable=g-import-not-at-top
107
+
108
+ FLAGS = flags.FLAGS
109
+
110
+
111
+ # Logging levels.
112
+ FATAL = converter.ABSL_FATAL
113
+ ERROR = converter.ABSL_ERROR
114
+ WARNING = converter.ABSL_WARNING
115
+ WARN = converter.ABSL_WARNING # Deprecated name.
116
+ INFO = converter.ABSL_INFO
117
+ DEBUG = converter.ABSL_DEBUG
118
+
119
+ # Regex to match/parse log line prefixes.
120
+ ABSL_LOGGING_PREFIX_REGEX = (
121
+ r'^(?P<severity>[IWEF])'
122
+ r'(?P<month>\d\d)(?P<day>\d\d) '
123
+ r'(?P<hour>\d\d):(?P<minute>\d\d):(?P<second>\d\d)'
124
+ r'\.(?P<microsecond>\d\d\d\d\d\d) +'
125
+ r'(?P<thread_id>-?\d+) '
126
+ r'(?P<filename>[a-zA-Z<][\w._<>-]+):(?P<line>\d+)')
127
+
128
+
129
+ # Mask to convert integer thread ids to unsigned quantities for logging purposes
130
+ _THREAD_ID_MASK = 2 ** (struct.calcsize('L') * 8) - 1
131
+
132
+ # Extra property set on the LogRecord created by ABSLLogger when its level is
133
+ # CRITICAL/FATAL.
134
+ _ABSL_LOG_FATAL = '_absl_log_fatal'
135
+ # Extra prefix added to the log message when a non-absl logger logs a
136
+ # CRITICAL/FATAL message.
137
+ _CRITICAL_PREFIX = 'CRITICAL - '
138
+
139
+ # Used by findCaller to skip callers from */logging/__init__.py.
140
+ _LOGGING_FILE_PREFIX = os.path.join('logging', '__init__.')
141
+
142
+ # The ABSL logger instance, initialized in _initialize().
143
+ _absl_logger = None
144
+ # The ABSL handler instance, initialized in _initialize().
145
+ _absl_handler = None
146
+
147
+
148
+ _CPP_NAME_TO_LEVELS = {
149
+ 'debug': '0', # Abseil C++ has no DEBUG level, mapping it to INFO here.
150
+ 'info': '0',
151
+ 'warning': '1',
152
+ 'warn': '1',
153
+ 'error': '2',
154
+ 'fatal': '3'
155
+ }
156
+
157
+ _CPP_LEVEL_TO_NAMES = {
158
+ '0': 'info',
159
+ '1': 'warning',
160
+ '2': 'error',
161
+ '3': 'fatal',
162
+ }
163
+
164
+
165
+ class _VerbosityFlag(flags.Flag):
166
+ """Flag class for -v/--verbosity."""
167
+
168
+ def __init__(self, *args, **kwargs):
169
+ super(_VerbosityFlag, self).__init__(
170
+ flags.IntegerParser(),
171
+ flags.ArgumentSerializer(),
172
+ *args, **kwargs)
173
+
174
+ @property
175
+ def value(self):
176
+ return self._value
177
+
178
+ @value.setter
179
+ def value(self, v):
180
+ self._value = v
181
+ self._update_logging_levels()
182
+
183
+ def _update_logging_levels(self):
184
+ """Updates absl logging levels to the current verbosity.
185
+
186
+ Visibility: module-private
187
+ """
188
+ if not _absl_logger:
189
+ return
190
+
191
+ if self._value <= converter.ABSL_DEBUG:
192
+ standard_verbosity = converter.absl_to_standard(self._value)
193
+ else:
194
+ # --verbosity is set to higher than 1 for vlog.
195
+ standard_verbosity = logging.DEBUG - (self._value - 1)
196
+
197
+ # Also update root level when absl_handler is used.
198
+ if _absl_handler in logging.root.handlers:
199
+ # Make absl logger inherit from the root logger. absl logger might have
200
+ # a non-NOTSET value if logging.set_verbosity() is called at import time.
201
+ _absl_logger.setLevel(logging.NOTSET)
202
+ logging.root.setLevel(standard_verbosity)
203
+ else:
204
+ _absl_logger.setLevel(standard_verbosity)
205
+
206
+
207
+ class _LoggerLevelsFlag(flags.Flag):
208
+ """Flag class for --logger_levels."""
209
+
210
+ def __init__(self, *args, **kwargs):
211
+ super(_LoggerLevelsFlag, self).__init__(
212
+ _LoggerLevelsParser(),
213
+ _LoggerLevelsSerializer(),
214
+ *args, **kwargs)
215
+
216
+ @property
217
+ def value(self):
218
+ # For lack of an immutable type, be defensive and return a copy.
219
+ # Modifications to the dict aren't supported and won't have any affect.
220
+ # While Py3 could use MappingProxyType, that isn't deepcopy friendly, so
221
+ # just return a copy.
222
+ return self._value.copy()
223
+
224
+ @value.setter
225
+ def value(self, v):
226
+ self._value = {} if v is None else v
227
+ self._update_logger_levels()
228
+
229
+ def _update_logger_levels(self):
230
+ # Visibility: module-private.
231
+ # This is called by absl.app.run() during initialization.
232
+ for name, level in self._value.items():
233
+ logging.getLogger(name).setLevel(level)
234
+
235
+
236
+ class _LoggerLevelsParser(flags.ArgumentParser):
237
+ """Parser for --logger_levels flag."""
238
+
239
+ def parse(self, value):
240
+ if isinstance(value, abc.Mapping):
241
+ return value
242
+
243
+ pairs = [pair.strip() for pair in value.split(',') if pair.strip()]
244
+
245
+ # Preserve the order so that serialization is deterministic.
246
+ levels = collections.OrderedDict()
247
+ for name_level in pairs:
248
+ name, level = name_level.split(':', 1)
249
+ name = name.strip()
250
+ level = level.strip()
251
+ levels[name] = level
252
+ return levels
253
+
254
+
255
+ class _LoggerLevelsSerializer(object):
256
+ """Serializer for --logger_levels flag."""
257
+
258
+ def serialize(self, value):
259
+ if isinstance(value, str):
260
+ return value
261
+ return ','.join(
262
+ '{}:{}'.format(name, level) for name, level in value.items())
263
+
264
+
265
+ class _StderrthresholdFlag(flags.Flag):
266
+ """Flag class for --stderrthreshold."""
267
+
268
+ def __init__(self, *args, **kwargs):
269
+ super(_StderrthresholdFlag, self).__init__(
270
+ flags.ArgumentParser(),
271
+ flags.ArgumentSerializer(),
272
+ *args, **kwargs)
273
+
274
+ @property
275
+ def value(self):
276
+ return self._value
277
+
278
+ @value.setter
279
+ def value(self, v):
280
+ if v in _CPP_LEVEL_TO_NAMES:
281
+ # --stderrthreshold also accepts numeric strings whose values are
282
+ # Abseil C++ log levels.
283
+ cpp_value = int(v)
284
+ v = _CPP_LEVEL_TO_NAMES[v] # Normalize to strings.
285
+ elif v.lower() in _CPP_NAME_TO_LEVELS:
286
+ v = v.lower()
287
+ if v == 'warn':
288
+ v = 'warning' # Use 'warning' as the canonical name.
289
+ cpp_value = int(_CPP_NAME_TO_LEVELS[v])
290
+ else:
291
+ raise ValueError(
292
+ '--stderrthreshold must be one of (case-insensitive) '
293
+ "'debug', 'info', 'warning', 'error', 'fatal', "
294
+ "or '0', '1', '2', '3', not '%s'" % v)
295
+
296
+ self._value = v
297
+
298
+
299
+ LOGTOSTDERR = flags.DEFINE_boolean(
300
+ 'logtostderr',
301
+ False,
302
+ 'Should only log to stderr?',
303
+ allow_override_cpp=True,
304
+ )
305
+ ALSOLOGTOSTDERR = flags.DEFINE_boolean(
306
+ 'alsologtostderr',
307
+ False,
308
+ 'also log to stderr?',
309
+ allow_override_cpp=True,
310
+ )
311
+ LOG_DIR = flags.DEFINE_string(
312
+ 'log_dir',
313
+ os.getenv('TEST_TMPDIR', ''),
314
+ 'directory to write logfiles into',
315
+ allow_override_cpp=True,
316
+ )
317
+ VERBOSITY = flags.DEFINE_flag(
318
+ _VerbosityFlag(
319
+ 'verbosity',
320
+ -1,
321
+ (
322
+ 'Logging verbosity level. Messages logged at this level or lower'
323
+ ' will be included. Set to 1 for debug logging. If the flag was not'
324
+ ' set or supplied, the value will be changed from the default of -1'
325
+ ' (warning) to 0 (info) after flags are parsed.'
326
+ ),
327
+ short_name='v',
328
+ allow_hide_cpp=True,
329
+ )
330
+ )
331
+ LOGGER_LEVELS = flags.DEFINE_flag(
332
+ _LoggerLevelsFlag(
333
+ 'logger_levels',
334
+ {},
335
+ (
336
+ 'Specify log level of loggers. The format is a CSV list of '
337
+ '`name:level`. Where `name` is the logger name used with '
338
+ '`logging.getLogger()`, and `level` is a level name (INFO, DEBUG, '
339
+ 'etc). e.g. `myapp.foo:INFO,other.logger:DEBUG`'
340
+ ),
341
+ )
342
+ )
343
+ STDERRTHRESHOLD = flags.DEFINE_flag(
344
+ _StderrthresholdFlag(
345
+ 'stderrthreshold',
346
+ 'fatal',
347
+ (
348
+ 'log messages at this level, or more severe, to stderr in '
349
+ 'addition to the logfile. Possible values are '
350
+ "'debug', 'info', 'warning', 'error', and 'fatal'. "
351
+ 'Obsoletes --alsologtostderr. Using --alsologtostderr '
352
+ 'cancels the effect of this flag. Please also note that '
353
+ 'this flag is subject to --verbosity and requires logfile '
354
+ 'not be stderr.'
355
+ ),
356
+ allow_hide_cpp=True,
357
+ )
358
+ )
359
+ SHOWPREFIXFORINFO = flags.DEFINE_boolean(
360
+ 'showprefixforinfo',
361
+ True,
362
+ (
363
+ 'If False, do not prepend prefix to info messages '
364
+ "when it's logged to stderr, "
365
+ '--verbosity is set to INFO level, '
366
+ 'and python logging is used.'
367
+ ),
368
+ )
369
+
370
+
371
+ def get_verbosity():
372
+ """Returns the logging verbosity."""
373
+ return FLAGS['verbosity'].value
374
+
375
+
376
+ def set_verbosity(v):
377
+ """Sets the logging verbosity.
378
+
379
+ Causes all messages of level <= v to be logged,
380
+ and all messages of level > v to be silently discarded.
381
+
382
+ Args:
383
+ v: int|str, the verbosity level as an integer or string. Legal string values
384
+ are those that can be coerced to an integer as well as case-insensitive
385
+ 'debug', 'info', 'warning', 'error', and 'fatal'.
386
+ """
387
+ try:
388
+ new_level = int(v)
389
+ except ValueError:
390
+ new_level = converter.ABSL_NAMES[v.upper()]
391
+ FLAGS.verbosity = new_level
392
+
393
+
394
+ def set_stderrthreshold(s):
395
+ """Sets the stderr threshold to the value passed in.
396
+
397
+ Args:
398
+ s: str|int, valid strings values are case-insensitive 'debug',
399
+ 'info', 'warning', 'error', and 'fatal'; valid integer values are
400
+ logging.DEBUG|INFO|WARNING|ERROR|FATAL.
401
+
402
+ Raises:
403
+ ValueError: Raised when s is an invalid value.
404
+ """
405
+ if s in converter.ABSL_LEVELS:
406
+ FLAGS.stderrthreshold = converter.ABSL_LEVELS[s]
407
+ elif isinstance(s, str) and s.upper() in converter.ABSL_NAMES:
408
+ FLAGS.stderrthreshold = s
409
+ else:
410
+ raise ValueError(
411
+ 'set_stderrthreshold only accepts integer absl logging level '
412
+ 'from -3 to 1, or case-insensitive string values '
413
+ "'debug', 'info', 'warning', 'error', and 'fatal'. "
414
+ 'But found "{}" ({}).'.format(s, type(s)))
415
+
416
+
417
+ def fatal(msg, *args, **kwargs):
418
+ # type: (Any, Any, Any) -> NoReturn
419
+ """Logs a fatal message."""
420
+ log(FATAL, msg, *args, **kwargs)
421
+
422
+
423
+ def error(msg, *args, **kwargs):
424
+ """Logs an error message."""
425
+ log(ERROR, msg, *args, **kwargs)
426
+
427
+
428
+ def warning(msg, *args, **kwargs):
429
+ """Logs a warning message."""
430
+ log(WARNING, msg, *args, **kwargs)
431
+
432
+
433
+ def warn(msg, *args, **kwargs):
434
+ """Deprecated, use 'warning' instead."""
435
+ warnings.warn("The 'warn' function is deprecated, use 'warning' instead",
436
+ DeprecationWarning, 2)
437
+ log(WARNING, msg, *args, **kwargs)
438
+
439
+
440
+ def info(msg, *args, **kwargs):
441
+ """Logs an info message."""
442
+ log(INFO, msg, *args, **kwargs)
443
+
444
+
445
+ def debug(msg, *args, **kwargs):
446
+ """Logs a debug message."""
447
+ log(DEBUG, msg, *args, **kwargs)
448
+
449
+
450
+ def exception(msg, *args, exc_info=True, **kwargs):
451
+ """Logs an exception, with traceback and message."""
452
+ error(msg, *args, exc_info=exc_info, **kwargs)
453
+
454
+
455
+ # Counter to keep track of number of log entries per token.
456
+ _log_counter_per_token = {}
457
+
458
+
459
+ def _get_next_log_count_per_token(token):
460
+ """Wrapper for _log_counter_per_token. Thread-safe.
461
+
462
+ Args:
463
+ token: The token for which to look up the count.
464
+
465
+ Returns:
466
+ The number of times this function has been called with
467
+ *token* as an argument (starting at 0).
468
+ """
469
+ # Can't use a defaultdict because defaultdict isn't atomic, whereas
470
+ # setdefault is.
471
+ return next(_log_counter_per_token.setdefault(token, itertools.count()))
472
+
473
+
474
+ def log_every_n(level, msg, n, *args):
475
+ """Logs ``msg % args`` at level 'level' once per 'n' times.
476
+
477
+ Logs the 1st call, (N+1)st call, (2N+1)st call, etc.
478
+ Not threadsafe.
479
+
480
+ Args:
481
+ level: int, the absl logging level at which to log.
482
+ msg: str, the message to be logged.
483
+ n: int, the number of times this should be called before it is logged.
484
+ *args: The args to be substituted into the msg.
485
+ """
486
+ count = _get_next_log_count_per_token(get_absl_logger().findCaller())
487
+ log_if(level, msg, not (count % n), *args)
488
+
489
+
490
+ # Keeps track of the last log time of the given token.
491
+ # Note: must be a dict since set/get is atomic in CPython.
492
+ # Note: entries are never released as their number is expected to be low.
493
+ _log_timer_per_token = {}
494
+
495
+
496
+ def _seconds_have_elapsed(token, num_seconds):
497
+ """Tests if 'num_seconds' have passed since 'token' was requested.
498
+
499
+ Not strictly thread-safe - may log with the wrong frequency if called
500
+ concurrently from multiple threads. Accuracy depends on resolution of
501
+ 'timeit.default_timer()'.
502
+
503
+ Always returns True on the first call for a given 'token'.
504
+
505
+ Args:
506
+ token: The token for which to look up the count.
507
+ num_seconds: The number of seconds to test for.
508
+
509
+ Returns:
510
+ Whether it has been >= 'num_seconds' since 'token' was last requested.
511
+ """
512
+ now = timeit.default_timer()
513
+ then = _log_timer_per_token.get(token, None)
514
+ if then is None or (now - then) >= num_seconds:
515
+ _log_timer_per_token[token] = now
516
+ return True
517
+ else:
518
+ return False
519
+
520
+
521
+ def log_every_n_seconds(level, msg, n_seconds, *args):
522
+ """Logs ``msg % args`` at level ``level`` iff ``n_seconds`` elapsed since last call.
523
+
524
+ Logs the first call, logs subsequent calls if 'n' seconds have elapsed since
525
+ the last logging call from the same call site (file + line). Not thread-safe.
526
+
527
+ Args:
528
+ level: int, the absl logging level at which to log.
529
+ msg: str, the message to be logged.
530
+ n_seconds: float or int, seconds which should elapse before logging again.
531
+ *args: The args to be substituted into the msg.
532
+ """
533
+ should_log = _seconds_have_elapsed(get_absl_logger().findCaller(), n_seconds)
534
+ log_if(level, msg, should_log, *args)
535
+
536
+
537
+ def log_first_n(level, msg, n, *args):
538
+ """Logs ``msg % args`` at level ``level`` only first ``n`` times.
539
+
540
+ Not threadsafe.
541
+
542
+ Args:
543
+ level: int, the absl logging level at which to log.
544
+ msg: str, the message to be logged.
545
+ n: int, the maximal number of times the message is logged.
546
+ *args: The args to be substituted into the msg.
547
+ """
548
+ count = _get_next_log_count_per_token(get_absl_logger().findCaller())
549
+ log_if(level, msg, count < n, *args)
550
+
551
+
552
+ def log_if(level, msg, condition, *args):
553
+ """Logs ``msg % args`` at level ``level`` only if condition is fulfilled."""
554
+ if condition:
555
+ log(level, msg, *args)
556
+
557
+
558
+ def log(level, msg, *args, **kwargs):
559
+ """Logs ``msg % args`` at absl logging level ``level``.
560
+
561
+ If no args are given just print msg, ignoring any interpolation specifiers.
562
+
563
+ Args:
564
+ level: int, the absl logging level at which to log the message
565
+ (logging.DEBUG|INFO|WARNING|ERROR|FATAL). While some C++ verbose logging
566
+ level constants are also supported, callers should prefer explicit
567
+ logging.vlog() calls for such purpose.
568
+
569
+ msg: str, the message to be logged.
570
+ *args: The args to be substituted into the msg.
571
+ **kwargs: May contain exc_info to add exception traceback to message.
572
+ """
573
+ if level > converter.ABSL_DEBUG:
574
+ # Even though this function supports level that is greater than 1, users
575
+ # should use logging.vlog instead for such cases.
576
+ # Treat this as vlog, 1 is equivalent to DEBUG.
577
+ standard_level = converter.STANDARD_DEBUG - (level - 1)
578
+ else:
579
+ if level < converter.ABSL_FATAL:
580
+ level = converter.ABSL_FATAL
581
+ standard_level = converter.absl_to_standard(level)
582
+
583
+ # Match standard logging's behavior. Before use_absl_handler() and
584
+ # logging is configured, there is no handler attached on _absl_logger nor
585
+ # logging.root. So logs go no where.
586
+ if not logging.root.handlers:
587
+ logging.basicConfig()
588
+
589
+ _absl_logger.log(standard_level, msg, *args, **kwargs)
590
+
591
+
592
+ def vlog(level, msg, *args, **kwargs):
593
+ """Log ``msg % args`` at C++ vlog level ``level``.
594
+
595
+ Args:
596
+ level: int, the C++ verbose logging level at which to log the message,
597
+ e.g. 1, 2, 3, 4... While absl level constants are also supported,
598
+ callers should prefer logging.log|debug|info|... calls for such purpose.
599
+ msg: str, the message to be logged.
600
+ *args: The args to be substituted into the msg.
601
+ **kwargs: May contain exc_info to add exception traceback to message.
602
+ """
603
+ log(level, msg, *args, **kwargs)
604
+
605
+
606
+ def vlog_is_on(level):
607
+ """Checks if vlog is enabled for the given level in caller's source file.
608
+
609
+ Args:
610
+ level: int, the C++ verbose logging level at which to log the message,
611
+ e.g. 1, 2, 3, 4... While absl level constants are also supported,
612
+ callers should prefer level_debug|level_info|... calls for
613
+ checking those.
614
+
615
+ Returns:
616
+ True if logging is turned on for that level.
617
+ """
618
+
619
+ if level > converter.ABSL_DEBUG:
620
+ # Even though this function supports level that is greater than 1, users
621
+ # should use logging.vlog instead for such cases.
622
+ # Treat this as vlog, 1 is equivalent to DEBUG.
623
+ standard_level = converter.STANDARD_DEBUG - (level - 1)
624
+ else:
625
+ if level < converter.ABSL_FATAL:
626
+ level = converter.ABSL_FATAL
627
+ standard_level = converter.absl_to_standard(level)
628
+ return _absl_logger.isEnabledFor(standard_level)
629
+
630
+
631
+ def flush():
632
+ """Flushes all log files."""
633
+ get_absl_handler().flush()
634
+
635
+
636
+ def level_debug():
637
+ """Returns True if debug logging is turned on."""
638
+ return get_verbosity() >= DEBUG
639
+
640
+
641
+ def level_info():
642
+ """Returns True if info logging is turned on."""
643
+ return get_verbosity() >= INFO
644
+
645
+
646
+ def level_warning():
647
+ """Returns True if warning logging is turned on."""
648
+ return get_verbosity() >= WARNING
649
+
650
+
651
+ level_warn = level_warning # Deprecated function.
652
+
653
+
654
+ def level_error():
655
+ """Returns True if error logging is turned on."""
656
+ return get_verbosity() >= ERROR
657
+
658
+
659
+ def get_log_file_name(level=INFO):
660
+ """Returns the name of the log file.
661
+
662
+ For Python logging, only one file is used and level is ignored. And it returns
663
+ empty string if it logs to stderr/stdout or the log stream has no `name`
664
+ attribute.
665
+
666
+ Args:
667
+ level: int, the absl.logging level.
668
+
669
+ Raises:
670
+ ValueError: Raised when `level` has an invalid value.
671
+ """
672
+ if level not in converter.ABSL_LEVELS:
673
+ raise ValueError('Invalid absl.logging level {}'.format(level))
674
+ stream = get_absl_handler().python_handler.stream
675
+ if (stream == sys.stderr or stream == sys.stdout or
676
+ not hasattr(stream, 'name')):
677
+ return ''
678
+ else:
679
+ return stream.name
680
+
681
+
682
+ def find_log_dir_and_names(program_name=None, log_dir=None):
683
+ """Computes the directory and filename prefix for log file.
684
+
685
+ Args:
686
+ program_name: str|None, the filename part of the path to the program that
687
+ is running without its extension. e.g: if your program is called
688
+ ``usr/bin/foobar.py`` this method should probably be called with
689
+ ``program_name='foobar`` However, this is just a convention, you can
690
+ pass in any string you want, and it will be used as part of the
691
+ log filename. If you don't pass in anything, the default behavior
692
+ is as described in the example. In python standard logging mode,
693
+ the program_name will be prepended with ``py_`` if it is the
694
+ ``program_name`` argument is omitted.
695
+ log_dir: str|None, the desired log directory.
696
+
697
+ Returns:
698
+ (log_dir, file_prefix, symlink_prefix)
699
+
700
+ Raises:
701
+ FileNotFoundError: raised in Python 3 when it cannot find a log directory.
702
+ OSError: raised in Python 2 when it cannot find a log directory.
703
+ """
704
+ if not program_name:
705
+ # Strip the extension (foobar.par becomes foobar, and
706
+ # fubar.py becomes fubar). We do this so that the log
707
+ # file names are similar to C++ log file names.
708
+ program_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
709
+
710
+ # Prepend py_ to files so that python code gets a unique file, and
711
+ # so that C++ libraries do not try to write to the same log files as us.
712
+ program_name = 'py_%s' % program_name
713
+
714
+ actual_log_dir = find_log_dir(log_dir=log_dir)
715
+
716
+ try:
717
+ username = getpass.getuser()
718
+ except KeyError:
719
+ # This can happen, e.g. when running under docker w/o passwd file.
720
+ if hasattr(os, 'getuid'):
721
+ # Windows doesn't have os.getuid
722
+ username = str(os.getuid())
723
+ else:
724
+ username = 'unknown'
725
+ hostname = socket.gethostname()
726
+ file_prefix = '%s.%s.%s.log' % (program_name, hostname, username)
727
+
728
+ return actual_log_dir, file_prefix, program_name
729
+
730
+
731
+ def find_log_dir(log_dir=None):
732
+ """Returns the most suitable directory to put log files into.
733
+
734
+ Args:
735
+ log_dir: str|None, if specified, the logfile(s) will be created in that
736
+ directory. Otherwise if the --log_dir command-line flag is provided,
737
+ the logfile will be created in that directory. Otherwise the logfile
738
+ will be created in a standard location.
739
+
740
+ Raises:
741
+ FileNotFoundError: raised in Python 3 when it cannot find a log directory.
742
+ OSError: raised in Python 2 when it cannot find a log directory.
743
+ """
744
+ # Get a list of possible log dirs (will try to use them in order).
745
+ # NOTE: Google's internal implementation has a special handling for Google
746
+ # machines, which uses a list of directories. Hence the following uses `dirs`
747
+ # instead of a single directory.
748
+ if log_dir:
749
+ # log_dir was explicitly specified as an arg, so use it and it alone.
750
+ dirs = [log_dir]
751
+ elif FLAGS['log_dir'].value:
752
+ # log_dir flag was provided, so use it and it alone (this mimics the
753
+ # behavior of the same flag in logging.cc).
754
+ dirs = [FLAGS['log_dir'].value]
755
+ else:
756
+ dirs = [tempfile.gettempdir()]
757
+
758
+ # Find the first usable log dir.
759
+ for d in dirs:
760
+ if os.path.isdir(d) and os.access(d, os.W_OK):
761
+ return d
762
+ raise FileNotFoundError(
763
+ "Can't find a writable directory for logs, tried %s" % dirs)
764
+
765
+
766
+ def get_absl_log_prefix(record):
767
+ """Returns the absl log prefix for the log record.
768
+
769
+ Args:
770
+ record: logging.LogRecord, the record to get prefix for.
771
+ """
772
+ created_tuple = time.localtime(record.created)
773
+ created_microsecond = int(record.created % 1.0 * 1e6)
774
+
775
+ critical_prefix = ''
776
+ level = record.levelno
777
+ if _is_non_absl_fatal_record(record):
778
+ # When the level is FATAL, but not logged from absl, lower the level so
779
+ # it's treated as ERROR.
780
+ level = logging.ERROR
781
+ critical_prefix = _CRITICAL_PREFIX
782
+ severity = converter.get_initial_for_level(level)
783
+
784
+ return '%c%02d%02d %02d:%02d:%02d.%06d %5d %s:%d] %s' % (
785
+ severity,
786
+ created_tuple.tm_mon,
787
+ created_tuple.tm_mday,
788
+ created_tuple.tm_hour,
789
+ created_tuple.tm_min,
790
+ created_tuple.tm_sec,
791
+ created_microsecond,
792
+ _get_thread_id(),
793
+ record.filename,
794
+ record.lineno,
795
+ critical_prefix)
796
+
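
A small sketch (added for illustration, not part of the commit) showing the prefix this produces for a hand-built record; the file name, line number, and message are made up:

import logging
from absl import logging as absl_logging

record = logging.LogRecord('demo', logging.INFO, 'foo.py', 42, 'hello', None, None)
print(absl_logging.get_absl_log_prefix(record) + record.getMessage())
# Prints something like: I0415 12:34:56.789012 140735 foo.py:42] hello
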
797
+
798
+ def skip_log_prefix(func):
799
+ """Skips reporting the prefix of a given function or name by :class:`~absl.logging.ABSLLogger`.
800
+
801
+ This is a convenience wrapper function / decorator for
802
+ :meth:`~absl.logging.ABSLLogger.register_frame_to_skip`.
803
+
804
+ If a callable function is provided, only that function will be skipped.
805
+ If a function name is provided, all functions with the same name in the
806
+ file that this is called in will be skipped.
807
+
808
+ This can be used as a decorator of the intended function to be skipped.
809
+
810
+ Args:
811
+ func: Callable function or its name as a string.
812
+
813
+ Returns:
814
+ func (the input, unchanged).
815
+
816
+ Raises:
817
+ ValueError: The input is callable but does not have a function code object.
818
+ TypeError: The input is neither callable nor a string.
819
+ """
820
+ if callable(func):
821
+ func_code = getattr(func, '__code__', None)
822
+ if func_code is None:
823
+ raise ValueError('Input callable does not have a function code object.')
824
+ file_name = func_code.co_filename
825
+ func_name = func_code.co_name
826
+ func_lineno = func_code.co_firstlineno
827
+ elif isinstance(func, str):
828
+ file_name = get_absl_logger().findCaller()[0]
829
+ func_name = func
830
+ func_lineno = None
831
+ else:
832
+ raise TypeError('Input is neither callable nor a string.')
833
+ ABSLLogger.register_frame_to_skip(file_name, func_name, func_lineno)
834
+ return func
835
+
836
+
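
A brief usage sketch (not part of the committed file): applying skip_log_prefix as a decorator so that the [file:line] in the log prefix points at the helper's caller rather than at the helper itself.

from absl import logging

@logging.skip_log_prefix
def log_status(msg):
  # The prefix will report the caller of log_status, not this line.
  logging.info(msg)
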
837
+ def _is_non_absl_fatal_record(log_record):
838
+ return (log_record.levelno >= logging.FATAL and
839
+ not log_record.__dict__.get(_ABSL_LOG_FATAL, False))
840
+
841
+
842
+ def _is_absl_fatal_record(log_record):
843
+ return (log_record.levelno >= logging.FATAL and
844
+ log_record.__dict__.get(_ABSL_LOG_FATAL, False))
845
+
846
+
847
+ # Indicates if we still need to warn about pre-init logs going to stderr.
848
+ _warn_preinit_stderr = True
849
+
850
+
851
+ class PythonHandler(logging.StreamHandler):
852
+ """The handler class used by Abseil Python logging implementation."""
853
+
854
+ def __init__(self, stream=None, formatter=None):
855
+ super(PythonHandler, self).__init__(stream)
856
+ self.setFormatter(formatter or PythonFormatter())
857
+
858
+ def start_logging_to_file(self, program_name=None, log_dir=None):
859
+ """Starts logging messages to files instead of standard error."""
860
+ FLAGS.logtostderr = False
861
+
862
+ actual_log_dir, file_prefix, symlink_prefix = find_log_dir_and_names(
863
+ program_name=program_name, log_dir=log_dir)
864
+
865
+ basename = '%s.INFO.%s.%d' % (
866
+ file_prefix,
867
+ time.strftime('%Y%m%d-%H%M%S', time.localtime(time.time())),
868
+ os.getpid())
869
+ filename = os.path.join(actual_log_dir, basename)
870
+
871
+ self.stream = open(filename, 'a', encoding='utf-8')
872
+
873
+ # os.symlink is not available on Windows Python 2.
874
+ if getattr(os, 'symlink', None):
875
+ # Create a symlink to the log file with a canonical name.
876
+ symlink = os.path.join(actual_log_dir, symlink_prefix + '.INFO')
877
+ try:
878
+ if os.path.islink(symlink):
879
+ os.unlink(symlink)
880
+ os.symlink(os.path.basename(filename), symlink)
881
+ except EnvironmentError:
882
+ # If it fails, we're sad but it's no error. Commonly, this
883
+ # fails because the symlink was created by another user and so
884
+ # we can't modify it
885
+ pass
886
+
887
+ def use_absl_log_file(self, program_name=None, log_dir=None):
888
+ """Conditionally logs to files, based on --logtostderr."""
889
+ if FLAGS['logtostderr'].value:
890
+ self.stream = sys.stderr
891
+ else:
892
+ self.start_logging_to_file(program_name=program_name, log_dir=log_dir)
893
+
894
+ def flush(self):
895
+ """Flushes all log files."""
896
+ self.acquire()
897
+ try:
898
+ if self.stream and hasattr(self.stream, 'flush'):
899
+ self.stream.flush()
900
+ except (EnvironmentError, ValueError):
901
+ # A ValueError is thrown if we try to flush a closed file.
902
+ pass
903
+ finally:
904
+ self.release()
905
+
906
+ def _log_to_stderr(self, record):
907
+ """Emits the record to stderr.
908
+
909
+ This temporarily sets the handler stream to stderr, calls
910
+ StreamHandler.emit, then reverts the stream back.
911
+
912
+ Args:
913
+ record: logging.LogRecord, the record to log.
914
+ """
915
+ # emit() is protected by a lock in logging.Handler, so we don't need to
916
+ # protect here again.
917
+ old_stream = self.stream
918
+ self.stream = sys.stderr
919
+ try:
920
+ super(PythonHandler, self).emit(record)
921
+ finally:
922
+ self.stream = old_stream
923
+
924
+ def emit(self, record):
925
+ """Prints a record out to some streams.
926
+
927
+ 1. If ``FLAGS.logtostderr`` is set, it will print to ``sys.stderr`` ONLY.
928
+ 2. If ``FLAGS.alsologtostderr`` is set, it will print to ``sys.stderr``.
929
+ 3. If ``FLAGS.logtostderr`` is not set, it will log to the stream
930
+ associated with the current thread.
931
+
932
+ Args:
933
+ record: :class:`logging.LogRecord`, the record to emit.
934
+ """
935
+ # People occasionally call logging functions at import time before
936
+ # our flags may have even been defined yet, let alone even parsed, as we
937
+ # rely on the C++ side to define some flags for us and app init to
938
+ # deal with parsing. Match the C++ library behavior of notifying and emitting
939
+ # such messages to stderr. This encourages people to clean up and does
940
+ # not hide the message.
941
+ level = record.levelno
942
+ if not FLAGS.is_parsed(): # Also implies "before flag has been defined".
943
+ global _warn_preinit_stderr
944
+ if _warn_preinit_stderr:
945
+ sys.stderr.write(
946
+ 'WARNING: Logging before flag parsing goes to stderr.\n')
947
+ _warn_preinit_stderr = False
948
+ self._log_to_stderr(record)
949
+ elif FLAGS['logtostderr'].value:
950
+ self._log_to_stderr(record)
951
+ else:
952
+ super(PythonHandler, self).emit(record)
953
+ stderr_threshold = converter.string_to_standard(
954
+ FLAGS['stderrthreshold'].value)
955
+ if ((FLAGS['alsologtostderr'].value or level >= stderr_threshold) and
956
+ self.stream != sys.stderr):
957
+ self._log_to_stderr(record)
958
+ # Die when the record is created from ABSLLogger and level is FATAL.
959
+ if _is_absl_fatal_record(record):
960
+ self.flush() # Flush the log before dying.
961
+
962
+ # In threaded python, sys.exit() from a non-main thread only
963
+ # exits the thread in question.
964
+ os.abort()
965
+
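
For context, a hedged sketch (not from the commit) of how the flags read in emit() above steer output when set programmatically; in a real absl app they normally come from the command line (e.g. --alsologtostderr or --stderrthreshold=error):

from absl import flags, logging

flags.FLAGS(['demo'])               # parse flags so pre-init warnings stop
flags.FLAGS.alsologtostderr = True  # mirror records to stderr as well
logging.use_absl_handler()
logging.error('also shown on stderr, in addition to the configured stream')
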
966
+ def close(self):
967
+ """Closes the stream to which we are writing."""
968
+ self.acquire()
969
+ try:
970
+ self.flush()
971
+ try:
972
+ # Do not close the stream if it's sys.stderr|stdout. They may be
973
+ # redirected or overridden to files, which should be managed by users
974
+ # explicitly.
975
+ user_managed = sys.stderr, sys.stdout, sys.__stderr__, sys.__stdout__
976
+ if self.stream not in user_managed and (
977
+ not hasattr(self.stream, 'isatty') or not self.stream.isatty()):
978
+ self.stream.close()
979
+ except ValueError:
980
+ # A ValueError is thrown if we try to run isatty() on a closed file.
981
+ pass
982
+ super(PythonHandler, self).close()
983
+ finally:
984
+ self.release()
985
+
986
+
987
+ class ABSLHandler(logging.Handler):
988
+ """Abseil Python logging module's log handler."""
989
+
990
+ def __init__(self, python_logging_formatter):
991
+ super(ABSLHandler, self).__init__()
992
+
993
+ self._python_handler = PythonHandler(formatter=python_logging_formatter)
994
+ self.activate_python_handler()
995
+
996
+ def format(self, record):
997
+ return self._current_handler.format(record)
998
+
999
+ def setFormatter(self, fmt):
1000
+ self._current_handler.setFormatter(fmt)
1001
+
1002
+ def emit(self, record):
1003
+ self._current_handler.emit(record)
1004
+
1005
+ def flush(self):
1006
+ self._current_handler.flush()
1007
+
1008
+ def close(self):
1009
+ super(ABSLHandler, self).close()
1010
+ self._current_handler.close()
1011
+
1012
+ def handle(self, record):
1013
+ rv = self.filter(record)
1014
+ if rv:
1015
+ return self._current_handler.handle(record)
1016
+ return rv
1017
+
1018
+ @property
1019
+ def python_handler(self):
1020
+ return self._python_handler
1021
+
1022
+ def activate_python_handler(self):
1023
+ """Uses the Python logging handler as the current logging handler."""
1024
+ self._current_handler = self._python_handler
1025
+
1026
+ def use_absl_log_file(self, program_name=None, log_dir=None):
1027
+ self._current_handler.use_absl_log_file(program_name, log_dir)
1028
+
1029
+ def start_logging_to_file(self, program_name=None, log_dir=None):
1030
+ self._current_handler.start_logging_to_file(program_name, log_dir)
1031
+
1032
+
1033
+ class PythonFormatter(logging.Formatter):
1034
+ """Formatter class used by :class:`~absl.logging.PythonHandler`."""
1035
+
1036
+ def format(self, record):
1037
+ """Appends the message from the record to the results of the prefix.
1038
+
1039
+ Args:
1040
+ record: logging.LogRecord, the record to be formatted.
1041
+
1042
+ Returns:
1043
+ The formatted string representing the record.
1044
+ """
1045
+ if (not FLAGS['showprefixforinfo'].value and
1046
+ FLAGS['verbosity'].value == converter.ABSL_INFO and
1047
+ record.levelno == logging.INFO and
1048
+ _absl_handler.python_handler.stream == sys.stderr):
1049
+ prefix = ''
1050
+ else:
1051
+ prefix = get_absl_log_prefix(record)
1052
+ return prefix + super(PythonFormatter, self).format(record)
1053
+
1054
+
1055
+ class ABSLLogger(logging.getLoggerClass()):
1056
+ """A logger that will create LogRecords while skipping some stack frames.
1057
+
1058
+ This class maintains an internal list of filenames and method names
1059
+ for use when determining who called the currently executing stack
1060
+ frame. Any method names from specific source files are skipped when
1061
+ walking backwards through the stack.
1062
+
1063
+ Client code should use the register_frame_to_skip method to let the
1064
+ ABSLLogger know which method from which file should be
1065
+ excluded from the walk backwards through the stack.
1066
+ """
1067
+ _frames_to_skip = set()
1068
+
1069
+ def findCaller(self, stack_info=False, stacklevel=1):
1070
+ """Finds the frame of the calling method on the stack.
1071
+
1072
+ This method skips any frames registered with the
1073
+ ABSLLogger and any methods from this file, and whatever
1074
+ method is currently being used to generate the prefix for the log
1075
+ line. Then it returns the file name, line number, and method name
1076
+ of the calling method. An optional fourth item may be returned;
1077
+ callers who only need things from the first three are advised to
1078
+ always slice or index the result rather than using direct unpacking
1079
+ assignment.
1080
+
1081
+ Args:
1082
+ stack_info: bool, when True, include the stack trace as a fourth item
1083
+ returned. On Python 3 there are always four items returned - the
1084
+ fourth will be None when this is False. On Python 2 the stdlib
1085
+ base class API only returns three items. We do the same when this
1086
+ new parameter is unspecified or False for compatibility.
1087
+
1088
+ Returns:
1089
+ (filename, lineno, methodname[, sinfo]) of the calling method.
1090
+ """
1091
+ f_to_skip = ABSLLogger._frames_to_skip
1092
+ # Use sys._getframe(2) instead of logging.currentframe(), it's slightly
1093
+ # faster because there is one less frame to traverse.
1094
+ frame = sys._getframe(2) # pylint: disable=protected-access
1095
+
1096
+ while frame:
1097
+ code = frame.f_code
1098
+ if (_LOGGING_FILE_PREFIX not in code.co_filename and
1099
+ (code.co_filename, code.co_name,
1100
+ code.co_firstlineno) not in f_to_skip and
1101
+ (code.co_filename, code.co_name) not in f_to_skip):
1102
+ sinfo = None
1103
+ if stack_info:
1104
+ out = io.StringIO()
1105
+ out.write(u'Stack (most recent call last):\n')
1106
+ traceback.print_stack(frame, file=out)
1107
+ sinfo = out.getvalue().rstrip(u'\n')
1108
+ return (code.co_filename, frame.f_lineno, code.co_name, sinfo)
1109
+ frame = frame.f_back
1110
+
1111
+ def critical(self, msg, *args, **kwargs):
1112
+ """Logs ``msg % args`` with severity ``CRITICAL``."""
1113
+ self.log(logging.CRITICAL, msg, *args, **kwargs)
1114
+
1115
+ def fatal(self, msg, *args, **kwargs):
1116
+ """Logs ``msg % args`` with severity ``FATAL``."""
1117
+ self.log(logging.FATAL, msg, *args, **kwargs)
1118
+
1119
+ def error(self, msg, *args, **kwargs):
1120
+ """Logs ``msg % args`` with severity ``ERROR``."""
1121
+ self.log(logging.ERROR, msg, *args, **kwargs)
1122
+
1123
+ def warn(self, msg, *args, **kwargs):
1124
+ """Logs ``msg % args`` with severity ``WARN``."""
1125
+ warnings.warn("The 'warn' method is deprecated, use 'warning' instead",
1126
+ DeprecationWarning, 2)
1127
+ self.log(logging.WARN, msg, *args, **kwargs)
1128
+
1129
+ def warning(self, msg, *args, **kwargs):
1130
+ """Logs ``msg % args`` with severity ``WARNING``."""
1131
+ self.log(logging.WARNING, msg, *args, **kwargs)
1132
+
1133
+ def info(self, msg, *args, **kwargs):
1134
+ """Logs ``msg % args`` with severity ``INFO``."""
1135
+ self.log(logging.INFO, msg, *args, **kwargs)
1136
+
1137
+ def debug(self, msg, *args, **kwargs):
1138
+ """Logs ``msg % args`` with severity ``DEBUG``."""
1139
+ self.log(logging.DEBUG, msg, *args, **kwargs)
1140
+
1141
+ def log(self, level, msg, *args, **kwargs):
1142
+ """Logs a message at a certain level, substituting in the supplied arguments.
1143
+
1144
+ This method behaves differently in python and c++ modes.
1145
+
1146
+ Args:
1147
+ level: int, the standard logging level at which to log the message.
1148
+ msg: str, the text of the message to log.
1149
+ *args: The arguments to substitute in the message.
1150
+ **kwargs: The keyword arguments to substitute in the message.
1151
+ """
1152
+ if level >= logging.FATAL:
1153
+ # Add property to the LogRecord created by this logger.
1154
+ # This will be used by the ABSLHandler to determine whether it should
1155
+ # treat CRITICAL/FATAL logs as really FATAL.
1156
+ extra = kwargs.setdefault('extra', {})
1157
+ extra[_ABSL_LOG_FATAL] = True
1158
+ super(ABSLLogger, self).log(level, msg, *args, **kwargs)
1159
+
1160
+ def handle(self, record):
1161
+ """Calls handlers without checking ``Logger.disabled``.
1162
+
1163
+ Non-root loggers are set to disabled after setup with :func:`logging.config`
1164
+ unless explicitly specified otherwise. Historically, absl logging has not been
1165
+ disabled by that. To maintain this behavior, this function skips
1166
+ checking the ``Logger.disabled`` bit.
1167
+
1168
+ This logger can still be disabled by adding a filter that filters out
1169
+ everything.
1170
+
1171
+ Args:
1172
+ record: logging.LogRecord, the record to handle.
1173
+ """
1174
+ if self.filter(record):
1175
+ self.callHandlers(record)
1176
+
1177
+ @classmethod
1178
+ def register_frame_to_skip(cls, file_name, function_name, line_number=None):
1179
+ """Registers a function name to skip when walking the stack.
1180
+
1181
+ The :class:`~absl.logging.ABSLLogger` sometimes skips method calls on the
1182
+ stack to make the log messages meaningful in their appropriate context.
1183
+ This method registers a function from a particular file as one
1184
+ which should be skipped.
1185
+
1186
+ Args:
1187
+ file_name: str, the name of the file that contains the function.
1188
+ function_name: str, the name of the function to skip.
1189
+ line_number: int, if provided, only the function with this starting line
1190
+ number will be skipped. Otherwise, all functions with the same name
1191
+ in the file will be skipped.
1192
+ """
1193
+ if line_number is not None:
1194
+ cls._frames_to_skip.add((file_name, function_name, line_number))
1195
+ else:
1196
+ cls._frames_to_skip.add((file_name, function_name))
1197
+
1198
+
1199
+ def _get_thread_id():
1200
+ """Gets id of current thread, suitable for logging as an unsigned quantity.
1201
+
1202
+ If pywrapbase is linked, returns GetTID() for the thread ID to be
1203
+ consistent with C++ logging. Otherwise, returns the numeric thread id.
1204
+ The quantities are made unsigned by masking with 2*sys.maxint + 1.
1205
+
1206
+ Returns:
1207
+ Thread ID unique to this process (unsigned)
1208
+ """
1209
+ thread_id = threading.get_ident()
1210
+ return thread_id & _THREAD_ID_MASK
1211
+
1212
+
1213
+ def get_absl_logger():
1214
+ """Returns the absl logger instance."""
1215
+ assert _absl_logger is not None
1216
+ return _absl_logger
1217
+
1218
+
1219
+ def get_absl_handler():
1220
+ """Returns the absl handler instance."""
1221
+ assert _absl_handler is not None
1222
+ return _absl_handler
1223
+
1224
+
1225
+ def use_python_logging(quiet=False):
1226
+ """Uses the python implementation of the logging code.
1227
+
1228
+ Args:
1229
+ quiet: bool, if True, do not log a message about switching logging type.
1230
+ """
1231
+ get_absl_handler().activate_python_handler()
1232
+ if not quiet:
1233
+ info('Restoring pure python logging')
1234
+
1235
+
1236
+ _attempted_to_remove_stderr_stream_handlers = False
1237
+
1238
+
1239
+ def use_absl_handler():
1240
+ """Uses the ABSL logging handler for logging.
1241
+
1242
+ This method is called in :func:`app.run()<absl.app.run>` so the absl handler
1243
+ is used in absl apps.
1244
+ """
1245
+ global _attempted_to_remove_stderr_stream_handlers
1246
+ if not _attempted_to_remove_stderr_stream_handlers:
1247
+ # The absl handler logs to stderr by default. To prevent double logging to
1248
+ # stderr, the following code tries its best to remove other handlers that
1249
+ # emit to stderr. Those handlers are most commonly added when
1250
+ # logging.info/debug is called before calling use_absl_handler().
1251
+ handlers = [
1252
+ h for h in logging.root.handlers
1253
+ if isinstance(h, logging.StreamHandler) and h.stream == sys.stderr]
1254
+ for h in handlers:
1255
+ logging.root.removeHandler(h)
1256
+ _attempted_to_remove_stderr_stream_handlers = True
1257
+
1258
+ absl_handler = get_absl_handler()
1259
+ if absl_handler not in logging.root.handlers:
1260
+ logging.root.addHandler(absl_handler)
1261
+ FLAGS['verbosity']._update_logging_levels() # pylint: disable=protected-access
1262
+ FLAGS['logger_levels']._update_logger_levels() # pylint: disable=protected-access
1263
+
1264
+
1265
+ def _initialize():
1266
+ """Initializes loggers and handlers."""
1267
+ global _absl_logger, _absl_handler
1268
+
1269
+ if _absl_logger:
1270
+ return
1271
+
1272
+ original_logger_class = logging.getLoggerClass()
1273
+ logging.setLoggerClass(ABSLLogger)
1274
+ _absl_logger = logging.getLogger('absl')
1275
+ logging.setLoggerClass(original_logger_class)
1276
+
1277
+ python_logging_formatter = PythonFormatter()
1278
+ _absl_handler = ABSLHandler(python_logging_formatter)
1279
+
1280
+
1281
+ _initialize()
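
Putting the module together, here is a minimal usage sketch (added for this write-up, not part of the committed file); 'demo' and '/tmp' are illustrative arguments:

from absl import app, logging

def main(argv):
  del argv  # Unused.
  logging.set_verbosity(logging.DEBUG)
  # Respects --logtostderr; otherwise writes py_demo.<host>.<user>.log... under /tmp.
  logging.get_absl_handler().use_absl_log_file('demo', '/tmp')
  logging.info('verbosity=%d, log file=%r',
               logging.get_verbosity(), logging.get_log_file_name())

if __name__ == '__main__':
  app.run(main)  # app.run() installs the absl handler via use_absl_handler().
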
llmeval-env/lib/python3.10/site-packages/absl/logging/__init__.pyi ADDED
@@ -0,0 +1,290 @@
1
+ # Copyright 2017 The Abseil Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import logging
16
+ from typing import Any, Callable, Dict, NoReturn, Optional, Tuple, TypeVar, Union
17
+
18
+ from absl import flags
19
+
20
+ # Logging levels.
21
+ FATAL: int
22
+ ERROR: int
23
+ WARNING: int
24
+ WARN: int # Deprecated name.
25
+ INFO: int
26
+ DEBUG: int
27
+
28
+ ABSL_LOGGING_PREFIX_REGEX: str
29
+
30
+ LOGTOSTDERR: flags.FlagHolder[bool]
31
+ ALSOLOGTOSTDERR: flags.FlagHolder[bool]
32
+ LOG_DIR: flags.FlagHolder[str]
33
+ VERBOSITY: flags.FlagHolder[int]
34
+ LOGGER_LEVELS: flags.FlagHolder[Dict[str, str]]
35
+ STDERRTHRESHOLD: flags.FlagHolder[str]
36
+ SHOWPREFIXFORINFO: flags.FlagHolder[bool]
37
+
38
+
39
+ def get_verbosity() -> int:
40
+ ...
41
+
42
+
43
+ def set_verbosity(v: Union[int, str]) -> None:
44
+ ...
45
+
46
+
47
+ def set_stderrthreshold(s: Union[int, str]) -> None:
48
+ ...
49
+
50
+
51
+ # TODO(b/277607978): Provide actual args+kwargs shadowing stdlib's logging functions.
52
+ def fatal(msg: Any, *args: Any, **kwargs: Any) -> NoReturn:
53
+ ...
54
+
55
+
56
+ def error(msg: Any, *args: Any, **kwargs: Any) -> None:
57
+ ...
58
+
59
+
60
+ def warning(msg: Any, *args: Any, **kwargs: Any) -> None:
61
+ ...
62
+
63
+
64
+ def warn(msg: Any, *args: Any, **kwargs: Any) -> None:
65
+ ...
66
+
67
+
68
+ def info(msg: Any, *args: Any, **kwargs: Any) -> None:
69
+ ...
70
+
71
+
72
+ def debug(msg: Any, *args: Any, **kwargs: Any) -> None:
73
+ ...
74
+
75
+
76
+ def exception(msg: Any, *args: Any, **kwargs: Any) -> None:
77
+ ...
78
+
79
+
80
+ def log_every_n(level: int, msg: Any, n: int, *args: Any) -> None:
81
+ ...
82
+
83
+
84
+ def log_every_n_seconds(
85
+ level: int, msg: Any, n_seconds: float, *args: Any
86
+ ) -> None:
87
+ ...
88
+
89
+
90
+ def log_first_n(level: int, msg: Any, n: int, *args: Any) -> None:
91
+ ...
92
+
93
+
94
+ def log_if(level: int, msg: Any, condition: Any, *args: Any) -> None:
95
+ ...
96
+
97
+
98
+ def log(level: int, msg: Any, *args: Any, **kwargs: Any) -> None:
99
+ ...
100
+
101
+
102
+ def vlog(level: int, msg: Any, *args: Any, **kwargs: Any) -> None:
103
+ ...
104
+
105
+
106
+ def vlog_is_on(level: int) -> bool:
107
+ ...
108
+
109
+
110
+ def flush() -> None:
111
+ ...
112
+
113
+
114
+ def level_debug() -> bool:
115
+ ...
116
+
117
+
118
+ def level_info() -> bool:
119
+ ...
120
+
121
+
122
+ def level_warning() -> bool:
123
+ ...
124
+
125
+
126
+ level_warn = level_warning # Deprecated function.
127
+
128
+
129
+ def level_error() -> bool:
130
+ ...
131
+
132
+
133
+ def get_log_file_name(level: int = ...) -> str:
134
+ ...
135
+
136
+
137
+ def find_log_dir_and_names(
138
+ program_name: Optional[str] = ..., log_dir: Optional[str] = ...
139
+ ) -> Tuple[str, str, str]:
140
+ ...
141
+
142
+
143
+ def find_log_dir(log_dir: Optional[str] = ...) -> str:
144
+ ...
145
+
146
+
147
+ def get_absl_log_prefix(record: logging.LogRecord) -> str:
148
+ ...
149
+
150
+
151
+ _SkipLogT = TypeVar('_SkipLogT', str, Callable[..., Any])
152
+
153
+ def skip_log_prefix(func: _SkipLogT) -> _SkipLogT:
154
+ ...
155
+
156
+
157
+ _StreamT = TypeVar("_StreamT")
158
+
159
+
160
+ class PythonHandler(logging.StreamHandler[_StreamT]):
161
+
162
+ def __init__(
163
+ self,
164
+ stream: Optional[_StreamT] = ...,
165
+ formatter: Optional[logging.Formatter] = ...,
166
+ ) -> None:
167
+ ...
168
+
169
+ def start_logging_to_file(
170
+ self, program_name: Optional[str] = ..., log_dir: Optional[str] = ...
171
+ ) -> None:
172
+ ...
173
+
174
+ def use_absl_log_file(
175
+ self, program_name: Optional[str] = ..., log_dir: Optional[str] = ...
176
+ ) -> None:
177
+ ...
178
+
179
+ def flush(self) -> None:
180
+ ...
181
+
182
+ def emit(self, record: logging.LogRecord) -> None:
183
+ ...
184
+
185
+ def close(self) -> None:
186
+ ...
187
+
188
+
189
+ class ABSLHandler(logging.Handler):
190
+
191
+ def __init__(self, python_logging_formatter: PythonFormatter) -> None:
192
+ ...
193
+
194
+ def format(self, record: logging.LogRecord) -> str:
195
+ ...
196
+
197
+ def setFormatter(self, fmt) -> None:
198
+ ...
199
+
200
+ def emit(self, record: logging.LogRecord) -> None:
201
+ ...
202
+
203
+ def flush(self) -> None:
204
+ ...
205
+
206
+ def close(self) -> None:
207
+ ...
208
+
209
+ def handle(self, record: logging.LogRecord) -> bool:
210
+ ...
211
+
212
+ @property
213
+ def python_handler(self) -> PythonHandler:
214
+ ...
215
+
216
+ def activate_python_handler(self) -> None:
217
+ ...
218
+
219
+ def use_absl_log_file(
220
+ self, program_name: Optional[str] = ..., log_dir: Optional[str] = ...
221
+ ) -> None:
222
+ ...
223
+
224
+ def start_logging_to_file(self, program_name=None, log_dir=None) -> None:
225
+ ...
226
+
227
+
228
+ class PythonFormatter(logging.Formatter):
229
+
230
+ def format(self, record: logging.LogRecord) -> str:
231
+ ...
232
+
233
+
234
+ class ABSLLogger(logging.Logger):
235
+
236
+ def findCaller(
237
+ self, stack_info: bool = ..., stacklevel: int = ...
238
+ ) -> Tuple[str, int, str, Optional[str]]:
239
+ ...
240
+
241
+ def critical(self, msg: Any, *args: Any, **kwargs: Any) -> None:
242
+ ...
243
+
244
+ def fatal(self, msg: Any, *args: Any, **kwargs: Any) -> NoReturn:
245
+ ...
246
+
247
+ def error(self, msg: Any, *args: Any, **kwargs: Any) -> None:
248
+ ...
249
+
250
+ def warn(self, msg: Any, *args: Any, **kwargs: Any) -> None:
251
+ ...
252
+
253
+ def warning(self, msg: Any, *args: Any, **kwargs: Any) -> None:
254
+ ...
255
+
256
+ def info(self, msg: Any, *args: Any, **kwargs: Any) -> None:
257
+ ...
258
+
259
+ def debug(self, msg: Any, *args: Any, **kwargs: Any) -> None:
260
+ ...
261
+
262
+ def log(self, level: int, msg: Any, *args: Any, **kwargs: Any) -> None:
263
+ ...
264
+
265
+ def handle(self, record: logging.LogRecord) -> None:
266
+ ...
267
+
268
+ @classmethod
269
+ def register_frame_to_skip(
270
+ cls, file_name: str, function_name: str, line_number: Optional[int] = ...
271
+ ) -> None:
272
+ ...
273
+
274
+
275
+ # NOTE: Returns None before _initialize() is called, but that shouldn't occur after import.
276
+ def get_absl_logger() -> ABSLLogger:
277
+ ...
278
+
279
+
280
+ # NOTE: Returns None before _initialize() is called, but that shouldn't occur after import.
281
+ def get_absl_handler() -> ABSLHandler:
282
+ ...
283
+
284
+
285
+ def use_python_logging(quiet: bool = ...) -> None:
286
+ ...
287
+
288
+
289
+ def use_absl_handler() -> None:
290
+ ...
llmeval-env/lib/python3.10/site-packages/absl/logging/converter.py ADDED
@@ -0,0 +1,214 @@
1
+ # Copyright 2017 The Abseil Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ """Module to convert log levels between Abseil Python, C++, and Python standard.
16
+
17
+ This converter has to convert (best effort) between three different
18
+ logging level schemes:
19
+
20
+ * **cpp**: The C++ logging level scheme used in Abseil C++.
21
+ * **absl**: The absl.logging level scheme used in Abseil Python.
22
+ * **standard**: The python standard library logging level scheme.
23
+
24
+ Here is a handy ascii chart for easy mental mapping::
25
+
26
+ LEVEL | cpp | absl | standard |
27
+ ---------+-----+--------+----------+
28
+ DEBUG | 0 | 1 | 10 |
29
+ INFO | 0 | 0 | 20 |
30
+ WARNING | 1 | -1 | 30 |
31
+ ERROR | 2 | -2 | 40 |
32
+ CRITICAL | 3 | -3 | 50 |
33
+ FATAL | 3 | -3 | 50 |
34
+
35
+ Note: standard logging ``CRITICAL`` is mapped to absl/cpp ``FATAL``.
36
+ However, only ``CRITICAL`` logs from the absl logger (or absl.logging.fatal)
37
+ will terminate the program. ``CRITICAL`` logs from non-absl loggers are treated
38
+ as error logs with a message prefix ``"CRITICAL - "``.
39
+
40
+ Converting from standard to absl or cpp is a lossy conversion.
41
+ Converting back to standard will lose granularity. For this reason,
42
+ users should always try to convert to standard, the richest
43
+ representation, before manipulating the levels, and then only to cpp
44
+ or absl if those level schemes are absolutely necessary.
45
+ """
46
+
47
+ import logging
48
+
49
+ STANDARD_CRITICAL = logging.CRITICAL
50
+ STANDARD_ERROR = logging.ERROR
51
+ STANDARD_WARNING = logging.WARNING
52
+ STANDARD_INFO = logging.INFO
53
+ STANDARD_DEBUG = logging.DEBUG
54
+
55
+ # These levels are also used to define the constants
56
+ # FATAL, ERROR, WARNING, INFO, and DEBUG in the
57
+ # absl.logging module.
58
+ ABSL_FATAL = -3
59
+ ABSL_ERROR = -2
60
+ ABSL_WARNING = -1
61
+ ABSL_WARN = -1 # Deprecated name.
62
+ ABSL_INFO = 0
63
+ ABSL_DEBUG = 1
64
+
65
+ ABSL_LEVELS = {ABSL_FATAL: 'FATAL',
66
+ ABSL_ERROR: 'ERROR',
67
+ ABSL_WARNING: 'WARNING',
68
+ ABSL_INFO: 'INFO',
69
+ ABSL_DEBUG: 'DEBUG'}
70
+
71
+ # Inverts the ABSL_LEVELS dictionary
72
+ ABSL_NAMES = {'FATAL': ABSL_FATAL,
73
+ 'ERROR': ABSL_ERROR,
74
+ 'WARNING': ABSL_WARNING,
75
+ 'WARN': ABSL_WARNING, # Deprecated name.
76
+ 'INFO': ABSL_INFO,
77
+ 'DEBUG': ABSL_DEBUG}
78
+
79
+ ABSL_TO_STANDARD = {ABSL_FATAL: STANDARD_CRITICAL,
80
+ ABSL_ERROR: STANDARD_ERROR,
81
+ ABSL_WARNING: STANDARD_WARNING,
82
+ ABSL_INFO: STANDARD_INFO,
83
+ ABSL_DEBUG: STANDARD_DEBUG}
84
+
85
+ # Inverts the ABSL_TO_STANDARD
86
+ STANDARD_TO_ABSL = dict((v, k) for (k, v) in ABSL_TO_STANDARD.items())
87
+
88
+
89
+ def get_initial_for_level(level):
90
+ """Gets the initial that should start the log line for the given level.
91
+
92
+ It returns:
93
+
94
+ * ``'I'`` when: ``level < STANDARD_WARNING``.
95
+ * ``'W'`` when: ``STANDARD_WARNING <= level < STANDARD_ERROR``.
96
+ * ``'E'`` when: ``STANDARD_ERROR <= level < STANDARD_CRITICAL``.
97
+ * ``'F'`` when: ``level >= STANDARD_CRITICAL``.
98
+
99
+ Args:
100
+ level: int, a Python standard logging level.
101
+
102
+ Returns:
103
+ The first initial as it would be logged by the C++ logging module.
104
+ """
105
+ if level < STANDARD_WARNING:
106
+ return 'I'
107
+ elif level < STANDARD_ERROR:
108
+ return 'W'
109
+ elif level < STANDARD_CRITICAL:
110
+ return 'E'
111
+ else:
112
+ return 'F'
113
+
114
+
115
+ def absl_to_cpp(level):
116
+ """Converts an absl log level to a cpp log level.
117
+
118
+ Args:
119
+ level: int, an absl.logging level.
120
+
121
+ Raises:
122
+ TypeError: Raised when level is not an integer.
123
+
124
+ Returns:
125
+ The corresponding integer level for use in Abseil C++.
126
+ """
127
+ if not isinstance(level, int):
128
+ raise TypeError('Expect an int level, found {}'.format(type(level)))
129
+ if level >= 0:
130
+ # C++ log levels must be >= 0
131
+ return 0
132
+ else:
133
+ return -level
134
+
135
+
136
+ def absl_to_standard(level):
137
+ """Converts an integer level from the absl value to the standard value.
138
+
139
+ Args:
140
+ level: int, an absl.logging level.
141
+
142
+ Raises:
143
+ TypeError: Raised when level is not an integer.
144
+
145
+ Returns:
146
+ The corresponding integer level for use in standard logging.
147
+ """
148
+ if not isinstance(level, int):
149
+ raise TypeError('Expect an int level, found {}'.format(type(level)))
150
+ if level < ABSL_FATAL:
151
+ level = ABSL_FATAL
152
+ if level <= ABSL_DEBUG:
153
+ return ABSL_TO_STANDARD[level]
154
+ # Maps to vlog levels.
155
+ return STANDARD_DEBUG - level + 1
156
+
157
+
158
+ def string_to_standard(level):
159
+ """Converts a string level to standard logging level value.
160
+
161
+ Args:
162
+ level: str, case-insensitive ``'debug'``, ``'info'``, ``'warning'``,
163
+ ``'error'``, ``'fatal'``.
164
+
165
+ Returns:
166
+ The corresponding integer level for use in standard logging.
167
+ """
168
+ return absl_to_standard(ABSL_NAMES.get(level.upper()))
169
+
170
+
171
+ def standard_to_absl(level):
172
+ """Converts an integer level from the standard value to the absl value.
173
+
174
+ Args:
175
+ level: int, a Python standard logging level.
176
+
177
+ Raises:
178
+ TypeError: Raised when level is not an integer.
179
+
180
+ Returns:
181
+ The corresponding integer level for use in absl logging.
182
+ """
183
+ if not isinstance(level, int):
184
+ raise TypeError('Expect an int level, found {}'.format(type(level)))
185
+ if level < 0:
186
+ level = 0
187
+ if level < STANDARD_DEBUG:
188
+ # Maps to vlog levels.
189
+ return STANDARD_DEBUG - level + 1
190
+ elif level < STANDARD_INFO:
191
+ return ABSL_DEBUG
192
+ elif level < STANDARD_WARNING:
193
+ return ABSL_INFO
194
+ elif level < STANDARD_ERROR:
195
+ return ABSL_WARNING
196
+ elif level < STANDARD_CRITICAL:
197
+ return ABSL_ERROR
198
+ else:
199
+ return ABSL_FATAL
200
+
201
+
202
+ def standard_to_cpp(level):
203
+ """Converts an integer level from the standard value to the cpp value.
204
+
205
+ Args:
206
+ level: int, a Python standard logging level.
207
+
208
+ Raises:
209
+ TypeError: Raised when level is not an integer.
210
+
211
+ Returns:
212
+ The corresponding integer level for use in cpp logging.
213
+ """
214
+ return absl_to_cpp(standard_to_absl(level))
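
To make the chart in the module docstring concrete, here is a short sketch (not part of the commit) exercising the conversion helpers; all expected values follow directly from the table and code above:

import logging as std_logging
from absl.logging import converter

assert converter.absl_to_standard(converter.ABSL_WARNING) == std_logging.WARNING
assert converter.standard_to_absl(std_logging.DEBUG) == converter.ABSL_DEBUG
assert converter.string_to_standard('error') == std_logging.ERROR
assert converter.absl_to_cpp(converter.ABSL_ERROR) == 2
assert converter.get_initial_for_level(std_logging.INFO) == 'I'
# Standard levels below DEBUG map onto vlog levels, e.g. 9 -> absl vlog level 2.
assert converter.standard_to_absl(9) == 2
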
llmeval-env/lib/python3.10/site-packages/transformers/integrations/__init__.py ADDED
@@ -0,0 +1,158 @@
1
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ from typing import TYPE_CHECKING
15
+
16
+ from ..utils import _LazyModule
17
+
18
+
19
+ _import_structure = {
20
+ "aqlm": ["replace_with_aqlm_linear"],
21
+ "awq": [
22
+ "fuse_awq_modules",
23
+ "post_init_awq_exllama_modules",
24
+ "replace_with_awq_linear",
25
+ ],
26
+ "bitsandbytes": [
27
+ "get_keys_to_not_convert",
28
+ "replace_8bit_linear",
29
+ "replace_with_bnb_linear",
30
+ "set_module_8bit_tensor_to_device",
31
+ "set_module_quantized_tensor_to_device",
32
+ ],
33
+ "deepspeed": [
34
+ "HfDeepSpeedConfig",
35
+ "HfTrainerDeepSpeedConfig",
36
+ "deepspeed_config",
37
+ "deepspeed_init",
38
+ "deepspeed_load_checkpoint",
39
+ "deepspeed_optim_sched",
40
+ "is_deepspeed_available",
41
+ "is_deepspeed_zero3_enabled",
42
+ "set_hf_deepspeed_config",
43
+ "unset_hf_deepspeed_config",
44
+ ],
45
+ "integration_utils": [
46
+ "INTEGRATION_TO_CALLBACK",
47
+ "AzureMLCallback",
48
+ "ClearMLCallback",
49
+ "CodeCarbonCallback",
50
+ "CometCallback",
51
+ "DagsHubCallback",
52
+ "DVCLiveCallback",
53
+ "FlyteCallback",
54
+ "MLflowCallback",
55
+ "NeptuneCallback",
56
+ "NeptuneMissingConfiguration",
57
+ "TensorBoardCallback",
58
+ "WandbCallback",
59
+ "get_available_reporting_integrations",
60
+ "get_reporting_integration_callbacks",
61
+ "hp_params",
62
+ "is_azureml_available",
63
+ "is_clearml_available",
64
+ "is_codecarbon_available",
65
+ "is_comet_available",
66
+ "is_dagshub_available",
67
+ "is_dvclive_available",
68
+ "is_flyte_deck_standard_available",
69
+ "is_flytekit_available",
70
+ "is_mlflow_available",
71
+ "is_neptune_available",
72
+ "is_optuna_available",
73
+ "is_ray_available",
74
+ "is_ray_tune_available",
75
+ "is_sigopt_available",
76
+ "is_tensorboard_available",
77
+ "is_wandb_available",
78
+ "rewrite_logs",
79
+ "run_hp_search_optuna",
80
+ "run_hp_search_ray",
81
+ "run_hp_search_sigopt",
82
+ "run_hp_search_wandb",
83
+ ],
84
+ "peft": ["PeftAdapterMixin"],
85
+ "quanto": ["replace_with_quanto_layers"],
86
+ }
87
+
88
+ if TYPE_CHECKING:
89
+ from .aqlm import replace_with_aqlm_linear
90
+ from .awq import (
91
+ fuse_awq_modules,
92
+ post_init_awq_exllama_modules,
93
+ replace_with_awq_linear,
94
+ )
95
+ from .bitsandbytes import (
96
+ get_keys_to_not_convert,
97
+ replace_8bit_linear,
98
+ replace_with_bnb_linear,
99
+ set_module_8bit_tensor_to_device,
100
+ set_module_quantized_tensor_to_device,
101
+ )
102
+ from .deepspeed import (
103
+ HfDeepSpeedConfig,
104
+ HfTrainerDeepSpeedConfig,
105
+ deepspeed_config,
106
+ deepspeed_init,
107
+ deepspeed_load_checkpoint,
108
+ deepspeed_optim_sched,
109
+ is_deepspeed_available,
110
+ is_deepspeed_zero3_enabled,
111
+ set_hf_deepspeed_config,
112
+ unset_hf_deepspeed_config,
113
+ )
114
+ from .integration_utils import (
115
+ INTEGRATION_TO_CALLBACK,
116
+ AzureMLCallback,
117
+ ClearMLCallback,
118
+ CodeCarbonCallback,
119
+ CometCallback,
120
+ DagsHubCallback,
121
+ DVCLiveCallback,
122
+ FlyteCallback,
123
+ MLflowCallback,
124
+ NeptuneCallback,
125
+ NeptuneMissingConfiguration,
126
+ TensorBoardCallback,
127
+ WandbCallback,
128
+ get_available_reporting_integrations,
129
+ get_reporting_integration_callbacks,
130
+ hp_params,
131
+ is_azureml_available,
132
+ is_clearml_available,
133
+ is_codecarbon_available,
134
+ is_comet_available,
135
+ is_dagshub_available,
136
+ is_dvclive_available,
137
+ is_flyte_deck_standard_available,
138
+ is_flytekit_available,
139
+ is_mlflow_available,
140
+ is_neptune_available,
141
+ is_optuna_available,
142
+ is_ray_available,
143
+ is_ray_tune_available,
144
+ is_sigopt_available,
145
+ is_tensorboard_available,
146
+ is_wandb_available,
147
+ rewrite_logs,
148
+ run_hp_search_optuna,
149
+ run_hp_search_ray,
150
+ run_hp_search_sigopt,
151
+ run_hp_search_wandb,
152
+ )
153
+ from .peft import PeftAdapterMixin
154
+ from .quanto import replace_with_quanto_layers
155
+ else:
156
+ import sys
157
+
158
+ sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
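
For reference, a small sketch (not from the commit) of what the _LazyModule registration above means in practice: nothing heavy is imported until an attribute is first accessed (requires transformers to be installed):

from transformers import integrations

# Submodules such as integration_utils or bitsandbytes are only imported when
# one of their names is first looked up through the lazy module.
print(integrations.is_wandb_available())
print(integrations.replace_with_bnb_linear)
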
llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (2.53 kB).

llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/aqlm.cpython-310.pyc ADDED
Binary file (2.77 kB).

llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/awq.cpython-310.pyc ADDED
Binary file (11.6 kB).

llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/bitsandbytes.cpython-310.pyc ADDED
Binary file (10 kB).

llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/deepspeed.cpython-310.pyc ADDED
Binary file (12.1 kB).

llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/integration_utils.cpython-310.pyc ADDED
Binary file (63.4 kB).

llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/peft.cpython-310.pyc ADDED
Binary file (17.2 kB).

llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/quanto.cpython-310.pyc ADDED
Binary file (2.84 kB).

llmeval-env/lib/python3.10/site-packages/transformers/integrations/__pycache__/tpu.cpython-310.pyc ADDED
Binary file (873 Bytes).
llmeval-env/lib/python3.10/site-packages/transformers/integrations/aqlm.py ADDED
@@ -0,0 +1,99 @@
1
+ # Copyright 2024 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ "AQLM (Additive Quantization of Language Model) integration file"
15
+
16
+
17
+ from ..utils import is_accelerate_available, is_aqlm_available, is_torch_available
18
+
19
+
20
+ if is_torch_available():
21
+ import torch.nn as nn
22
+
23
+
24
+ def replace_with_aqlm_linear(
25
+ model,
26
+ quantization_config=None,
27
+ linear_weights_not_to_quantize=None,
28
+ current_key_name=None,
29
+ has_been_replaced=False,
30
+ ):
31
+ """
32
+ Public method that recursively replaces the Linear layers of the given model with AQLM quantized layers.
33
+ `accelerate` is needed to use this method. Returns the converted model and a boolean that indicates if the
34
+ conversion has been successful or not.
35
+
36
+ Args:
37
+ model (`torch.nn.Module`):
38
+ The model to convert, can be any `torch.nn.Module` instance.
39
+ quantization_config (`AqlmConfig`):
40
+ The quantization config object that contains the quantization parameters.
41
+ linear_weights_not_to_quantize (`list[str]`, *optional*):
42
+ A list of nn.Linear weights to not convert. If a parameter path is in the list (e.g. `lm_head.weight`), the corresponding module will not be
43
+ converted.
44
+ current_key_name (`list`, *optional*):
45
+ A list that contains the current key name. This is used for recursion and should not be passed by the user.
46
+ has_been_replaced (`bool`, *optional*):
47
+ A boolean that indicates if the conversion has been successful or not. This is used for recursion and
48
+ should not be passed by the user.
49
+ """
50
+ if not is_aqlm_available():
51
+ raise ValueError("AQLM is not available. Please install it with `pip install aqlm[cpu,gpu]`")
52
+
53
+ if not is_accelerate_available():
54
+ raise ValueError("AQLM requires Accelerate to be installed: `pip install accelerate`")
55
+
56
+ if linear_weights_not_to_quantize is None:
57
+ linear_weights_not_to_quantize = []
58
+
59
+ from accelerate import init_empty_weights
60
+ from aqlm import QuantizedLinear
61
+
62
+ for name, module in model.named_children():
63
+ if current_key_name is None:
64
+ current_key_name = []
65
+ current_key_name.append(name)
66
+
67
+ if isinstance(module, nn.Linear):
68
+ # Check if the current key is not in the `linear_weights_not_to_quantize`
69
+ if ".".join(current_key_name) + ".weight" not in linear_weights_not_to_quantize:
70
+ with init_empty_weights():
71
+ in_features = module.in_features
72
+ out_features = module.out_features
73
+
74
+ model._modules[name] = QuantizedLinear(
75
+ in_features,
76
+ out_features,
77
+ bias=module.bias is not None,
78
+ in_group_size=quantization_config.in_group_size,
79
+ out_group_size=quantization_config.out_group_size,
80
+ num_codebooks=quantization_config.num_codebooks,
81
+ nbits_per_codebook=quantization_config.nbits_per_codebook,
82
+ )
83
+ has_been_replaced = True
84
+
85
+ # Store the module class in case we need to transpose the weight later
86
+ model._modules[name].source_cls = type(module)
87
+ # Force requires grad to False to avoid unexpected errors
88
+ model._modules[name].requires_grad_(False)
89
+ if len(list(module.children())) > 0:
90
+ _, has_been_replaced = replace_with_aqlm_linear(
91
+ module,
92
+ quantization_config=quantization_config,
93
+ linear_weights_not_to_quantize=linear_weights_not_to_quantize,
94
+ current_key_name=current_key_name,
95
+ has_been_replaced=has_been_replaced,
96
+ )
97
+ # Remove the last key for recursion
98
+ current_key_name.pop(-1)
99
+ return model, has_been_replaced
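
A hypothetical usage sketch (not part of the committed file; the model name and config values are illustrative, aqlm/accelerate must be installed, and it assumes a transformers version that exports AqlmConfig):

from transformers import AutoModelForCausalLM, AqlmConfig
from transformers.integrations.aqlm import replace_with_aqlm_linear

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = AqlmConfig(in_group_size=8, out_group_size=1,
                    num_codebooks=1, nbits_per_codebook=16)
model, replaced = replace_with_aqlm_linear(
    model,
    quantization_config=config,
    linear_weights_not_to_quantize=["lm_head.weight"],
)
print("converted:", replaced)
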
llmeval-env/lib/python3.10/site-packages/transformers/integrations/awq.py ADDED
@@ -0,0 +1,444 @@
1
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ "AWQ (Activation aware Weight Quantization) integration file"
15
+ from ..activations import ACT2FN
16
+ from ..modeling_utils import PreTrainedModel
17
+ from ..utils import is_auto_awq_available, is_torch_available
18
+ from ..utils.quantization_config import (
19
+ AwqBackendPackingMethod,
20
+ AwqConfig,
21
+ AWQLinearVersion,
22
+ ExllamaVersion,
23
+ )
24
+
25
+
26
+ if is_torch_available():
27
+ import torch
28
+ import torch.nn as nn
29
+
30
+
31
+ AWQ_FUSED_MAPPINGS = {
32
+ "mistral": {
33
+ "attention": ["q_proj", "k_proj", "v_proj", "o_proj"],
34
+ "mlp": ["gate_proj", "up_proj", "down_proj"],
35
+ "layernorm": ["input_layernorm", "post_attention_layernorm", "norm"],
36
+ "use_alibi": False,
37
+ },
38
+ "mixtral": {
39
+ "attention": ["q_proj", "k_proj", "v_proj", "o_proj"],
40
+ "mlp": ["w1", "w3", "w2"],
41
+ "layernorm": ["input_layernorm", "post_attention_layernorm", "norm"],
42
+ "use_alibi": False,
43
+ "rope_theta": 1000000.0,
44
+ },
45
+ "llama": {
46
+ "attention": ["q_proj", "k_proj", "v_proj", "o_proj"],
47
+ "mlp": ["gate_proj", "up_proj", "down_proj"],
48
+ "layernorm": ["input_layernorm", "post_attention_layernorm", "norm"],
49
+ "use_alibi": False,
50
+ },
51
+ "llava": {
52
+ "attention": ["q_proj", "k_proj", "v_proj", "o_proj"],
53
+ "mlp": ["gate_proj", "up_proj", "down_proj"],
54
+ "layernorm": ["input_layernorm", "post_attention_layernorm", "norm"],
55
+ "use_alibi": False,
56
+ },
57
+ }
58
+
59
+
60
+ def replace_with_awq_linear(
61
+ model,
62
+ modules_to_not_convert=None,
63
+ quantization_config=None,
64
+ current_key_name=None,
65
+ has_been_replaced=False,
66
+ ) -> bool:
67
+ """
68
+ Public method that recursively replaces the Linear layers of the given model with AWQ quantized layers.
69
+ `accelerate` is needed to use this method. Returns the converted model and a boolean that indicates if the
70
+ conversion has been successfull or not.
71
+
72
+ During the module replacement, we also infer the backend to use through the `quantization_config` object.
73
+
74
+ Args:
75
+ model (`torch.nn.Module`):
76
+ The model to convert, can be any `torch.nn.Module` instance.
77
+ quantization_config (`AwqConfig`):
78
+ The quantization config object that contains the quantization parameters.
79
+ modules_to_not_convert (`list`, *optional*):
80
+ A list of modules to not convert. If a module name is in the list (e.g. `lm_head`), it will not be
81
+ converted.
82
+ current_key_name (`list`, *optional*):
83
+ A list that contains the current key name. This is used for recursion and should not be passed by the user.
84
+ has_been_replaced (`bool`, *optional*):
85
+ A boolean that indicates if the conversion has been successful or not. This is used for recursion and
86
+ should not be passed by the user.
87
+ """
88
+ if modules_to_not_convert is None:
89
+ modules_to_not_convert = []
90
+
91
+ backend = quantization_config.backend
92
+
93
+ if not is_auto_awq_available():
94
+ raise ValueError(
95
+ "AWQ (either `autoawq` or `llmawq`) is not available. Please install it with `pip install autoawq` or check out the installation guide in https://github.com/mit-han-lab/llm-awq"
96
+ )
97
+
98
+ if backend == AwqBackendPackingMethod.AUTOAWQ:
99
+ if quantization_config.version == AWQLinearVersion.GEMM:
100
+ from awq.modules.linear.gemm import WQLinear_GEMM
101
+
102
+ target_cls = WQLinear_GEMM
103
+ elif quantization_config.version == AWQLinearVersion.GEMV:
104
+ from awq.modules.linear.gemv import WQLinear_GEMV
105
+
106
+ target_cls = WQLinear_GEMV
107
+ elif quantization_config.version == AWQLinearVersion.EXLLAMA:
108
+ if quantization_config.exllama_config["version"] == ExllamaVersion.ONE:
109
+ from awq.modules.linear.exllama import WQLinear_Exllama
110
+
111
+ target_cls = WQLinear_Exllama
112
+ elif quantization_config.exllama_config["version"] == ExllamaVersion.TWO:
113
+ from awq.modules.linear.exllamav2 import WQLinear_ExllamaV2
114
+
115
+ target_cls = WQLinear_ExllamaV2
116
+ else:
117
+ raise ValueError(f"Unrecognized Exllama version: {quantization_config.exllama_config['version']}")
118
+ else:
119
+ raise ValueError(f"Unrecognized AWQ version: {quantization_config.version}")
120
+ else:
121
+ from awq.quantize.qmodule import WQLinear
122
+
123
+ target_cls = WQLinear
124
+
125
+ for name, module in model.named_children():
126
+ if current_key_name is None:
127
+ current_key_name = []
128
+ current_key_name.append(name)
129
+
130
+ if isinstance(module, nn.Linear) and name not in modules_to_not_convert:
131
+ # Check if the current key is not in the `modules_to_not_convert`
132
+ if not any(key in ".".join(current_key_name) for key in modules_to_not_convert):
133
+ in_features = module.in_features
134
+ out_features = module.out_features
135
+
136
+ model._modules[name] = target_cls(
137
+ w_bit=quantization_config.bits,
138
+ group_size=quantization_config.group_size,
139
+ in_features=in_features,
140
+ out_features=out_features,
141
+ bias=module.bias is not None,
142
+ dev=module.weight.device,
143
+ )
144
+ has_been_replaced = True
145
+
146
+ # Force requires grad to False to avoid unexpected errors
147
+ model._modules[name].requires_grad_(False)
148
+ if len(list(module.children())) > 0:
149
+ _, has_been_replaced = replace_with_awq_linear(
150
+ module,
151
+ modules_to_not_convert=modules_to_not_convert,
152
+ current_key_name=current_key_name,
153
+ quantization_config=quantization_config,
154
+ has_been_replaced=has_been_replaced,
155
+ )
156
+ # Remove the last key for recursion
157
+ current_key_name.pop(-1)
158
+ return model, has_been_replaced
159
+
160
+
161
+ def get_modules_to_fuse(model, quantization_config):
162
+ """
163
+ Returns the fusing mapping given the quantization config and the model
164
+
165
+ Args:
166
+ model (`~PreTrainedModel`):
167
+ The model to fuse - note this model should have been converted into AWQ format beforehand.
168
+ quantization_config (`~transformers.quantization_config.AWQConfig`):
169
+ The quantization configuration to use.
170
+ """
171
+ if not isinstance(model, PreTrainedModel):
172
+ raise ValueError(f"The model should be an instance of `PreTrainedModel`, got {model.__class__.__name__}")
173
+
174
+ # Always default to `quantization_config.modules_to_fuse`
175
+ if quantization_config.modules_to_fuse is not None:
176
+ current_fused_mapping = quantization_config.modules_to_fuse
177
+ current_fused_mapping["max_seq_len"] = quantization_config.fuse_max_seq_len
178
+ elif model.config.model_type in AWQ_FUSED_MAPPINGS:
179
+ current_fused_mapping = AWQ_FUSED_MAPPINGS[model.config.model_type]
180
+
181
+ # Properly deal with the case where we have a multi-modal model as well (e.g. Llava)
182
+ if not hasattr(model.config, "text_config"):
183
+ config = model.config
184
+ else:
185
+ config = model.config.text_config
186
+
187
+ # Handle hidden_size, num_attention_heads, num_key_value_heads on our own.
188
+ hidden_size = config.hidden_size
189
+ num_attention_heads = config.num_attention_heads
190
+ num_key_value_heads = getattr(config, "num_key_value_heads", num_attention_heads)
191
+
192
+ # Fill `current_fused_mapping` with the expected values
193
+ current_fused_mapping["hidden_size"] = hidden_size
194
+ current_fused_mapping["num_attention_heads"] = num_attention_heads
195
+ current_fused_mapping["num_key_value_heads"] = num_key_value_heads
196
+ current_fused_mapping["max_seq_len"] = quantization_config.fuse_max_seq_len
197
+ else:
198
+ raise ValueError(
199
+ "Fusing mapping not found either on the quantization config or the supported `AWQ_FUSED_MAPPINGS`. Please pass a `fused_mapping` argument"
200
+ " in the `quantization_config` or raise an issue on transformers https://github.com/huggingface/transformers to add its support."
201
+ )
202
+ return current_fused_mapping
203
+
204
+
205
+ def fuse_awq_modules(model, quantization_config):
206
+ """
207
+ Optionally fuse some modules in the model to speedup inference.
208
+
209
+ Args:
210
+ model (`~PreTrainedModel`):
211
+ The model to fuse - note this model should have been converted into AWQ format beforehand.
212
+ quantization_config (`Union[AwqConfig, dict]`):
213
+ The quantization configuration to use.
214
+ """
215
+ # We need to convert it from dict in order to get an AwqConfig object
216
+ # otherwise the fields `backend` etc. will not be available
217
+ # https://github.com/huggingface/transformers/pull/27411#discussion_r1414044495
218
+ if isinstance(quantization_config, dict):
219
+ quantization_config = AwqConfig.from_dict(quantization_config)
220
+ backend = quantization_config.backend
221
+
222
+ modules_to_fuse = get_modules_to_fuse(model, quantization_config)
223
+ modules_to_not_convert = getattr(quantization_config, "modules_to_not_convert", None)
224
+
225
+ if backend == AwqBackendPackingMethod.AUTOAWQ:
226
+ from awq.modules.fused.attn import QuantAttentionFused
227
+ from awq.modules.fused.mlp import QuantFusedMLP
228
+ from awq.modules.fused.norm import FasterTransformerRMSNorm
229
+ else:
230
+ raise ValueError("Fusing is only supported for the AutoAWQ backend")
231
+
232
+ fused_attention_modules = []
233
+
234
+ for name, module in model.named_modules():
235
+ if modules_to_not_convert is not None:
236
+ if any(module_name_to_not_convert in name for module_name_to_not_convert in modules_to_not_convert):
237
+ continue
238
+
239
+ # Replace layer norms
240
+ _fuse_awq_layernorm(modules_to_fuse["layernorm"], module, FasterTransformerRMSNorm)
241
+
242
+ # Replace MLP layers
243
+ _fuse_awq_mlp(model, name, modules_to_fuse["mlp"], module, QuantFusedMLP)
244
+
245
+ # Replace attention layers
246
+ attention_has_been_fused = _fuse_awq_attention_layers(
247
+ model, module, modules_to_fuse, name, QuantAttentionFused
248
+ )
249
+
250
+ if attention_has_been_fused:
251
+ fused_attention_modules.append(name.split(".")[0])
252
+
253
+ # For AWQ fused + Llama we need to set `config._attn_implementation` = "custom" to avoid unexpected behavior and pass
254
+ # `None` attention mask to the fused attention modules, as the attention mask is now dropped by our models and handled
255
+ # by the `AttentionMaskConverter` module.
256
+ if len(fused_attention_modules) > 0:
257
+ for module_name, module in model.named_modules():
258
+ if any(
259
+ fused_attention_parent_module in module_name for fused_attention_parent_module in fused_attention_modules
260
+ ):
261
+ if hasattr(module, "config") and hasattr(module.config, "_attn_implementation"):
262
+ module.config._attn_implementation = "custom"
263
+ return model
264
+
265
+
266
+ def _fuse_awq_layernorm(fuse_module_names, module, target_cls):
267
+ """
268
+ Fuse the LayerNorm layers into a target class using autoawq
269
+
270
+ Args:
271
+ fuse_module_names (`List[str]`):
272
+ The list of module names to fuse
273
+ module (`nn.Module`):
274
+ The pytorch parent module that has layernorm modules to fuse
275
+ target_cls (`~autoawq.FasterTransformerRMSNorm`):
276
+ The `FasterTransformerRMSNorm` class as it only supports that class
277
+ for now.
278
+ """
279
+ for module_name in fuse_module_names:
280
+ if hasattr(module, module_name):
281
+ old_module = getattr(module, module_name)
282
+ module._modules[module_name] = target_cls(
283
+ old_module.weight,
284
+ old_module.variance_epsilon,
285
+ ).to(old_module.weight.device)
286
+ del old_module
287
+
288
+
289
+ def _fuse_awq_mlp(model, current_module_name, fuse_module_names, module, target_cls):
290
+ """
291
+ Fuse the MLP layers into a target class using autoawq
292
+
293
+ Args:
294
+ model (`~PreTrainedModel`):
295
+ The input pretrained model
296
+ current_module_name (`str`):
297
+ The current submodule name
298
+ fuse_module_names (`List[str]`):
299
+ The list of module names to fuse. For the MLP layers it has to be an array
300
+ of length 3 that consists of the 3 MLP layers in the order (gate (dense layer post-attention) / up / down layers)
301
+ module (`nn.Module`):
302
+ The pytorch parent module that has the MLP modules to fuse
303
+ target_cls (`~autoawq.QuantFusedMLP`):
304
+ The `QuantFusedMLP` class as it only supports that class
305
+ for now.
306
+ """
307
+ if len(fuse_module_names) == 0:
308
+ return
309
+
310
+ if hasattr(module, fuse_module_names[0]):
311
+ gate_proj = getattr(module, fuse_module_names[0])
312
+ up_proj = getattr(module, fuse_module_names[1])
313
+ down_proj = getattr(module, fuse_module_names[2])
314
+
315
+ previous_device = gate_proj.qweight.device
316
+
317
+ # Also handle the case where the model has a `text_config` attribute
318
+ hidden_act = (
319
+ model.config.hidden_act
320
+ if not hasattr(model.config, "text_config")
321
+ else model.config.text_config.hidden_act
322
+ )
323
+ activation_fn = ACT2FN[hidden_act]
324
+ new_module = target_cls(gate_proj, down_proj, up_proj, activation_fn)
325
+
326
+ parent_name, child_name = current_module_name.rsplit(".", 1)
327
+ parent = model.get_submodule(parent_name)
328
+ setattr(parent, child_name, new_module.to(previous_device))
329
+
330
+ del gate_proj, up_proj, down_proj
331
+
332
+
333
+ def _fuse_awq_attention_layers(model, module, modules_to_fuse, current_module_name, target_cls):
334
+ """
335
+ Fuse the Attention layers into a target class using autoawq
336
+
337
+ Args:
338
+ model (`~PreTrainedModel`):
339
+ The input pretrained model
340
+ module (`nn.Module`):
341
+ The pytorch parent module that has the attention modules to fuse
342
+ modules_to_fuse (`Dict[str, Any]`):
343
+ The module fusing mapping. The dictionary has to contain a field `attention` with attention module names
344
+ in the correct order: q, k, v, o layer
345
+ current_module_name (`str`):
346
+ The current submodule name
347
+ target_cls (`~autoawq.QuantAttentionFused`):
348
+ The `QuantAttentionFused` class as it only supports that class
349
+ for now.
350
+ """
351
+ from awq.modules.linear import WQLinear_GEMM, WQLinear_GEMV
352
+
353
+ module_has_been_fused = False
354
+
355
+ if len(modules_to_fuse["attention"]) == 0:
356
+ return module_has_been_fused
357
+
358
+ if hasattr(module, modules_to_fuse["attention"][0]):
359
+ # First, we pack the QKV layers together
360
+ q_proj = getattr(module, modules_to_fuse["attention"][0])
361
+
362
+ if isinstance(q_proj, WQLinear_GEMV):
363
+ linear_target_cls = WQLinear_GEMV
364
+ cat_dim = 0
365
+ elif isinstance(q_proj, WQLinear_GEMM):
366
+ linear_target_cls = WQLinear_GEMM
367
+ cat_dim = 1
368
+ else:
369
+ raise ValueError(f"Unsupported q_proj type: {type(q_proj)}")
370
+
371
+ previous_device = q_proj.qweight.device
372
+
373
+ k_proj = getattr(module, modules_to_fuse["attention"][1])
374
+ v_proj = getattr(module, modules_to_fuse["attention"][2])
375
+ o_proj = getattr(module, modules_to_fuse["attention"][3])
376
+
377
+ bias = torch.cat([q_proj.bias, k_proj.bias, v_proj.bias], dim=0) if q_proj.bias is not None else None
378
+
379
+ qkv_layer = linear_target_cls(
380
+ q_proj.w_bit,
381
+ q_proj.group_size,
382
+ q_proj.in_features,
383
+ q_proj.out_features + k_proj.out_features + v_proj.out_features,
384
+ q_proj.bias is not None,
385
+ next(iter(module.state_dict().values())).device,
386
+ )
387
+
388
+ qkv_layer.qweight = torch.cat([q_proj.qweight, k_proj.qweight, v_proj.qweight], dim=cat_dim)
389
+ qkv_layer.qzeros = torch.cat([q_proj.qzeros, k_proj.qzeros, v_proj.qzeros], dim=cat_dim)
390
+ qkv_layer.scales = torch.cat([q_proj.scales, k_proj.scales, v_proj.scales], dim=cat_dim)
391
+
392
+ if isinstance(qkv_layer, WQLinear_GEMV):
393
+ qkv_layer.split_k_iters = q_proj.split_k_iters
394
+
395
+ qkv_layer.bias = bias
396
+
397
+ fused_attention_layer = target_cls(
398
+ modules_to_fuse["hidden_size"],
399
+ modules_to_fuse["num_attention_heads"],
400
+ modules_to_fuse["num_key_value_heads"],
401
+ qkv_layer,
402
+ o_proj,
403
+ previous_device,
404
+ modules_to_fuse["max_seq_len"],
405
+ use_alibi=modules_to_fuse["use_alibi"],
406
+ # The default value in autoawq is set to 10000.0
407
+ rope_theta=modules_to_fuse.get("rope_theta", 10000.0),
408
+ )
409
+
410
+ fused_attention_layer.is_hf_transformers = True
411
+
412
+ parent_name, child_name = current_module_name.rsplit(".", 1)
413
+ parent = model.get_submodule(parent_name)
414
+ setattr(parent, child_name, fused_attention_layer.to(previous_device))
415
+
416
+ del q_proj, k_proj, v_proj, o_proj
417
+ module_has_been_fused = True
418
+
419
+ return module_has_been_fused
420
+
421
+
422
+ def post_init_awq_exllama_modules(model, exllama_config):
423
+ """
424
+ Runs post init for Exllama layers which performs:
425
+ - Weights unpacking, reordering and repacking
426
+ - Devices scratch space allocation
427
+ """
428
+
429
+ if exllama_config["version"] == ExllamaVersion.ONE:
430
+ from awq.modules.linear.exllama import exllama_post_init
431
+
432
+ model = exllama_post_init(model)
433
+ elif exllama_config["version"] == ExllamaVersion.TWO:
434
+ from awq.modules.linear.exllamav2 import exllamav2_post_init
435
+
436
+ model = exllamav2_post_init(
437
+ model,
438
+ max_input_len=exllama_config["max_input_len"],
439
+ max_batch_size=exllama_config["max_batch_size"],
440
+ )
441
+ else:
442
+ raise ValueError(f"Unrecognized Exllama version: {exllama_config['version']}")
443
+
444
+ return model
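For orientation, here is a minimal sketch of how the fusing path above is usually reached from user code, assuming `autoawq` is installed and the checkpoint was already quantized with AWQ; the checkpoint name and the parameter values below are illustrative only, not prescriptive:

from transformers import AutoModelForCausalLM, AwqConfig

# do_fuse asks the loader to call fuse_awq_modules() after loading;
# fuse_max_seq_len ends up as modules_to_fuse["max_seq_len"] in get_modules_to_fuse().
quantization_config = AwqConfig(
    bits=4,
    do_fuse=True,
    fuse_max_seq_len=512,
)

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-OpenOrca-AWQ",  # example AWQ checkpoint (illustrative)
    quantization_config=quantization_config,
    device_map="auto",
)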
llmeval-env/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py ADDED
@@ -0,0 +1,324 @@
1
+ import importlib.metadata
2
+ import warnings
3
+ from copy import deepcopy
4
+ from inspect import signature
5
+
6
+ from packaging import version
7
+
8
+ from ..utils import is_accelerate_available, is_bitsandbytes_available, logging
9
+
10
+
11
+ if is_bitsandbytes_available():
12
+ import bitsandbytes as bnb
13
+ import torch
14
+ import torch.nn as nn
15
+
16
+ from ..pytorch_utils import Conv1D
17
+
18
+ if is_accelerate_available():
19
+ from accelerate import init_empty_weights
20
+ from accelerate.utils import find_tied_parameters
21
+
22
+ logger = logging.get_logger(__name__)
23
+
24
+
25
+ def set_module_quantized_tensor_to_device(module, tensor_name, device, value=None, quantized_stats=None):
26
+ """
27
+ A helper function to set a given tensor (parameter or buffer) of a module on a specific device (note that doing
28
+ `param.to(device)` creates a new tensor not linked to the parameter, which is why we need this function). The
29
+ function is adapted from `set_module_tensor_to_device` function from accelerate that is adapted to support the
30
+ class `Int8Params` from `bitsandbytes`.
31
+
32
+ Args:
33
+ module (`torch.nn.Module`):
34
+ The module in which the tensor we want to move lives.
35
+ tensor_name (`str`):
36
+ The full name of the parameter/buffer.
37
+ device (`int`, `str` or `torch.device`):
38
+ The device on which to set the tensor.
39
+ value (`torch.Tensor`, *optional*):
40
+ The value of the tensor (useful when going from the meta device to any other device).
41
+ quantized_stats (`dict[str, Any]`, *optional*):
42
+ Dict with items for either 4-bit or 8-bit serialization
43
+ """
44
+ # Recurse if needed
45
+ if "." in tensor_name:
46
+ splits = tensor_name.split(".")
47
+ for split in splits[:-1]:
48
+ new_module = getattr(module, split)
49
+ if new_module is None:
50
+ raise ValueError(f"{module} has no attribute {split}.")
51
+ module = new_module
52
+ tensor_name = splits[-1]
53
+
54
+ if tensor_name not in module._parameters and tensor_name not in module._buffers:
55
+ raise ValueError(f"{module} does not have a parameter or a buffer named {tensor_name}.")
56
+ is_buffer = tensor_name in module._buffers
57
+ old_value = getattr(module, tensor_name)
58
+
59
+ if old_value.device == torch.device("meta") and device not in ["meta", torch.device("meta")] and value is None:
60
+ raise ValueError(f"{tensor_name} is on the meta device, we need a `value` to put it on {device}.")
61
+
62
+ prequantized_loading = quantized_stats is not None
63
+ if is_buffer or not is_bitsandbytes_available():
64
+ is_8bit = False
65
+ is_4bit = False
66
+ else:
67
+ is_4bit = hasattr(bnb.nn, "Params4bit") and isinstance(module._parameters[tensor_name], bnb.nn.Params4bit)
68
+ is_8bit = isinstance(module._parameters[tensor_name], bnb.nn.Int8Params)
69
+
70
+ if is_8bit or is_4bit:
71
+ param = module._parameters[tensor_name]
72
+ if param.device.type != "cuda":
73
+ if value is None:
74
+ new_value = old_value.to(device)
75
+ elif isinstance(value, torch.Tensor):
76
+ new_value = value.to("cpu")
77
+ else:
78
+ new_value = torch.tensor(value, device="cpu")
79
+
80
+ # Support models using `Conv1D` in place of `nn.Linear` (e.g. openai-community/gpt2) by transposing the weight matrix prior to quantization.
81
+ # Since weights are saved in the correct "orientation", we skip transposing when loading.
82
+ if issubclass(module.source_cls, Conv1D) and not prequantized_loading:
83
+ new_value = new_value.T
84
+
85
+ kwargs = old_value.__dict__
86
+
87
+ if prequantized_loading != (new_value.dtype in (torch.int8, torch.uint8)):
88
+ raise ValueError(
89
+ f"Value dtype `{new_value.dtype}` is not compatible with parameter quantization status."
90
+ )
91
+
92
+ if is_8bit:
93
+ is_8bit_serializable = version.parse(importlib.metadata.version("bitsandbytes")) > version.parse(
94
+ "0.37.2"
95
+ )
96
+ if new_value.dtype in (torch.int8, torch.uint8) and not is_8bit_serializable:
97
+ raise ValueError(
98
+ "Detected int8 weights but the version of bitsandbytes is not compatible with int8 serialization. "
99
+ "Make sure to download the latest `bitsandbytes` version. `pip install --upgrade bitsandbytes`."
100
+ )
101
+ new_value = bnb.nn.Int8Params(new_value, requires_grad=False, **kwargs).to(device)
102
+ if prequantized_loading:
103
+ setattr(new_value, "SCB", quantized_stats["SCB"].to(device))
104
+ elif is_4bit:
105
+ if prequantized_loading:
106
+ is_4bit_serializable = version.parse(importlib.metadata.version("bitsandbytes")) >= version.parse(
107
+ "0.41.3"
108
+ )
109
+ if new_value.dtype in (torch.int8, torch.uint8) and not is_4bit_serializable:
110
+ raise ValueError(
111
+ "Detected 4-bit weights but the version of bitsandbytes is not compatible with 4-bit serialization. "
112
+ "Make sure to download the latest `bitsandbytes` version. `pip install --upgrade bitsandbytes`."
113
+ )
114
+ new_value = bnb.nn.Params4bit.from_prequantized(
115
+ data=new_value,
116
+ quantized_stats=quantized_stats,
117
+ requires_grad=False,
118
+ device=device,
119
+ **kwargs,
120
+ )
121
+ else:
122
+ new_value = bnb.nn.Params4bit(new_value, requires_grad=False, **kwargs).to(device)
123
+ module._parameters[tensor_name] = new_value
124
+
125
+ else:
126
+ if value is None:
127
+ new_value = old_value.to(device)
128
+ elif isinstance(value, torch.Tensor):
129
+ new_value = value.to(device)
130
+ else:
131
+ new_value = torch.tensor(value, device=device)
132
+
133
+ if is_buffer:
134
+ module._buffers[tensor_name] = new_value
135
+ else:
136
+ new_value = nn.Parameter(new_value, requires_grad=old_value.requires_grad)
137
+ module._parameters[tensor_name] = new_value
138
+
139
+
140
+ def _replace_with_bnb_linear(
141
+ model,
142
+ modules_to_not_convert=None,
143
+ current_key_name=None,
144
+ quantization_config=None,
145
+ has_been_replaced=False,
146
+ ):
147
+ """
148
+ Private method that wraps the recursion for module replacement.
149
+
150
+ Returns the converted model and a boolean indicating whether the conversion was successful.
151
+ """
152
+ for name, module in model.named_children():
153
+ if current_key_name is None:
154
+ current_key_name = []
155
+ current_key_name.append(name)
156
+
157
+ if (isinstance(module, nn.Linear) or isinstance(module, Conv1D)) and name not in modules_to_not_convert:
158
+ # Check if the current key is not in the `modules_to_not_convert`
159
+ current_key_name_str = ".".join(current_key_name)
160
+ if not any(
161
+ (key + "." in current_key_name_str) or (key == current_key_name_str) for key in modules_to_not_convert
162
+ ):
163
+ with init_empty_weights():
164
+ if isinstance(module, Conv1D):
165
+ in_features, out_features = module.weight.shape
166
+ else:
167
+ in_features = module.in_features
168
+ out_features = module.out_features
169
+
170
+ if quantization_config.quantization_method() == "llm_int8":
171
+ model._modules[name] = bnb.nn.Linear8bitLt(
172
+ in_features,
173
+ out_features,
174
+ module.bias is not None,
175
+ has_fp16_weights=quantization_config.llm_int8_has_fp16_weight,
176
+ threshold=quantization_config.llm_int8_threshold,
177
+ )
178
+ has_been_replaced = True
179
+ else:
180
+ if (
181
+ quantization_config.llm_int8_skip_modules is not None
182
+ and name in quantization_config.llm_int8_skip_modules
183
+ ):
184
+ pass
185
+ else:
186
+ extra_kwargs = (
187
+ {"quant_storage": quantization_config.bnb_4bit_quant_storage}
188
+ if "quant_storage" in list(signature(bnb.nn.Linear4bit).parameters)
189
+ else {}
190
+ )
191
+ model._modules[name] = bnb.nn.Linear4bit(
192
+ in_features,
193
+ out_features,
194
+ module.bias is not None,
195
+ quantization_config.bnb_4bit_compute_dtype,
196
+ compress_statistics=quantization_config.bnb_4bit_use_double_quant,
197
+ quant_type=quantization_config.bnb_4bit_quant_type,
198
+ **extra_kwargs,
199
+ )
200
+ has_been_replaced = True
201
+ # Store the module class in case we need to transpose the weight later
202
+ model._modules[name].source_cls = type(module)
203
+ # Force requires grad to False to avoid unexpected errors
204
+ model._modules[name].requires_grad_(False)
205
+ if len(list(module.children())) > 0:
206
+ _, has_been_replaced = _replace_with_bnb_linear(
207
+ module,
208
+ modules_to_not_convert,
209
+ current_key_name,
210
+ quantization_config,
211
+ has_been_replaced=has_been_replaced,
212
+ )
213
+ # Remove the last key for recursion
214
+ current_key_name.pop(-1)
215
+ return model, has_been_replaced
216
+
217
+
218
+ def replace_with_bnb_linear(model, modules_to_not_convert=None, current_key_name=None, quantization_config=None):
219
+ """
220
+ A helper function to replace all `torch.nn.Linear` modules by `bnb.nn.Linear8bit` modules from the `bitsandbytes`
221
+ library. This will enable running your models using mixed int8 precision as described by the paper `LLM.int8():
222
+ 8-bit Matrix Multiplication for Transformers at Scale`. Make sure `bitsandbytes` compiled with the correct CUDA
223
+ version of your hardware is installed before running this function. `pip install -i https://test.pypi.org/simple/
224
+ bitsandbytes`
225
+
226
+ The function will be run recursively and replace all `torch.nn.Linear` modules except for the `lm_head` that should
227
+ be kept as a `torch.nn.Linear` module. The replacement is done under `init_empty_weights` context manager so no
228
+ CPU/GPU memory is required to run this function. Int8 mixed-precision matrix decomposition works by separating a
229
+ matrix multiplication into two streams: (1) a systematic feature outlier stream matrix multiplied in fp16
230
+ (0.01%), (2) a regular stream of int8 matrix multiplication (99.9%). With this method, int8 inference with no
231
+ predictive degradation is possible for very large models (>=176B parameters).
232
+
233
+ Parameters:
234
+ model (`torch.nn.Module`):
235
+ Input model or `torch.nn.Module` as the function is run recursively.
236
+ modules_to_not_convert (`List[`str`]`, *optional*, defaults to `["lm_head"]`):
237
+ Names of the modules to not convert in `Linear8bitLt`. In practice we keep the `lm_head` in full precision
238
+ for numerical stability reasons.
239
+ current_key_name (`List[`str`]`, *optional*):
240
+ An array to track the current key of the recursion. This is used to check whether the current key (part of
241
+ it) is not in the list of modules to not convert (for instance, modules that are offloaded to `cpu` or
242
+ `disk`).
243
+ """
244
+ modules_to_not_convert = ["lm_head"] if modules_to_not_convert is None else modules_to_not_convert
245
+ model, has_been_replaced = _replace_with_bnb_linear(
246
+ model, modules_to_not_convert, current_key_name, quantization_config
247
+ )
248
+
249
+ if not has_been_replaced:
250
+ logger.warning(
251
+ "You are loading your model in 8bit or 4bit but no linear modules were found in your model."
252
+ " Please double check your model architecture, or submit an issue on github if you think this is"
253
+ " a bug."
254
+ )
255
+
256
+ return model
257
+
258
+
259
+ # For backward compatibility
260
+ def replace_8bit_linear(*args, **kwargs):
261
+ warnings.warn(
262
+ "`replace_8bit_linear` will be deprecated in a future version, please use `replace_with_bnb_linear` instead",
263
+ FutureWarning,
264
+ )
265
+ return replace_with_bnb_linear(*args, **kwargs)
266
+
267
+
268
+ # For backward compatibility
269
+ def set_module_8bit_tensor_to_device(*args, **kwargs):
270
+ warnings.warn(
271
+ "`set_module_8bit_tensor_to_device` will be deprecated in a future version, please use `set_module_quantized_tensor_to_device` instead",
272
+ FutureWarning,
273
+ )
274
+ return set_module_quantized_tensor_to_device(*args, **kwargs)
275
+
276
+
277
+ def get_keys_to_not_convert(model):
278
+ r"""
279
+ A utility function to get the keys of the modules to keep in full precision, if any. For example, for CausalLM modules
280
+ we may want to keep the lm_head in full precision for numerical stability reasons. For other architectures, we want
281
+ to keep the tied weights of the model. The function will return a list of the keys of the modules to not convert in
282
+ int8.
283
+
284
+ Parameters:
285
+ model (`torch.nn.Module`):
286
+ Input model
287
+ """
288
+ # Create a copy of the model and tie the weights, then
289
+ # check if it contains tied weights
290
+ tied_model = deepcopy(model)  # this has 0 cost since it is done inside the `init_empty_weights` context manager
291
+ tied_model.tie_weights()
292
+
293
+ tied_params = find_tied_parameters(tied_model)
294
+ # For compatibility with Accelerate < 0.18
295
+ if isinstance(tied_params, dict):
296
+ tied_keys = sum(list(tied_params.values()), []) + list(tied_params.keys())
297
+ else:
298
+ tied_keys = sum(tied_params, [])
299
+ has_tied_params = len(tied_keys) > 0
300
+
301
+ # If there are no tied weights, we want to keep the lm_head (output embedding) in full precision
302
+ if not has_tied_params:
303
+ output_emb = model.get_output_embeddings()
304
+ if output_emb is not None:
305
+ list_last_module = [name for name, module in model.named_modules() if id(module) == id(output_emb)]
306
+ return list_last_module
307
+
308
+ # otherwise, keep the last module (together with any tied weights) in full precision
309
+ list_modules = list(model.named_parameters())
310
+ list_last_module = [list_modules[-1][0]]
311
+ # add last module together with tied weights
312
+ intersection = set(list_last_module) - set(tied_keys)
313
+ list_untouched = list(set(tied_keys)) + list(intersection)
314
+
315
+ # remove ".weight" from the keys
316
+ names_to_remove = [".weight", ".bias"]
317
+ filtered_module_names = []
318
+ for name in list_untouched:
319
+ for name_to_remove in names_to_remove:
320
+ if name_to_remove in name:
321
+ name = name.replace(name_to_remove, "")
322
+ filtered_module_names.append(name)
323
+
324
+ return filtered_module_names
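As a rough sketch of the calling side (it is the bitsandbytes quantizer, not user code, that normally invokes `replace_with_bnb_linear` and `get_keys_to_not_convert`), loading a checkpoint with a `BitsAndBytesConfig` is what triggers these helpers; the model id below is just an example:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization; by default the lm_head
# (or tied/output embeddings) is kept in full precision via get_keys_to_not_convert().
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # illustrative checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)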
llmeval-env/lib/python3.10/site-packages/transformers/integrations/deepspeed.py ADDED
@@ -0,0 +1,441 @@
1
+ # Copyright 2020 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ """
15
+ Integration with Deepspeed
16
+ """
17
+ import copy
18
+ import importlib.metadata as importlib_metadata
19
+ import importlib.util
20
+ import weakref
21
+ from functools import partialmethod
22
+
23
+ from ..dependency_versions_check import dep_version_check
24
+ from ..utils import is_accelerate_available, is_torch_available, is_torch_mlu_available, logging
25
+
26
+
27
+ if is_torch_available():
28
+ import torch
29
+
30
+
31
+ logger = logging.get_logger(__name__)
32
+
33
+
34
+ def is_deepspeed_available():
35
+ package_exists = importlib.util.find_spec("deepspeed") is not None
36
+
37
+ # Check we're not importing a "deepspeed" directory somewhere but the actual library by trying to grab the version
38
+ # from the installed package metadata.
39
+ if package_exists:
40
+ try:
41
+ if is_torch_mlu_available():
42
+ _ = importlib_metadata.metadata("deepspeed-mlu")
43
+ return True
44
+ _ = importlib_metadata.metadata("deepspeed")
45
+ return True
46
+ except importlib_metadata.PackageNotFoundError:
47
+ return False
48
+
49
+
50
+ if is_accelerate_available() and is_deepspeed_available():
51
+ from accelerate.utils.deepspeed import HfDeepSpeedConfig as DeepSpeedConfig
52
+ else:
53
+ # Inherits from a dummy `object` if accelerate is not available, so that python succeeds to import this file.
54
+ # Deepspeed glue code will never inherit this dummy object as it checks if accelerate is available.
55
+ from builtins import object as DeepSpeedConfig
56
+
57
+
58
+ class HfDeepSpeedConfig(DeepSpeedConfig):
59
+ """
60
+ This object contains a DeepSpeed configuration dictionary and can be quickly queried for things like zero stage.
61
+
62
+ A `weakref` of this object is stored in the module's globals to be able to access the config from areas where
63
+ things like the Trainer object is not available (e.g. `from_pretrained` and `_get_resized_embeddings`). Therefore
64
+ it's important that this object remains alive while the program is still running.
65
+
66
+ [`Trainer`] uses the `HfTrainerDeepSpeedConfig` subclass instead. That subclass has logic to sync the configuration
67
+ with values of [`TrainingArguments`] by replacing special placeholder values: `"auto"`. Without this special logic
68
+ the DeepSpeed configuration is not modified in any way.
69
+
70
+ Args:
71
+ config_file_or_dict (`Union[str, Dict]`): path to DeepSpeed config file or dict.
72
+
73
+ """
74
+
75
+ def __init__(self, config_file_or_dict):
76
+ # set global weakref object
77
+ set_hf_deepspeed_config(self)
78
+ dep_version_check("accelerate")
79
+ dep_version_check("deepspeed")
80
+ super().__init__(config_file_or_dict)
81
+
82
+
83
+ class HfTrainerDeepSpeedConfig(HfDeepSpeedConfig):
84
+ """
85
+ The `HfTrainerDeepSpeedConfig` object is meant to be created during `TrainingArguments` object creation and has the
86
+ same lifespan as the latter.
87
+ """
88
+
89
+ def __init__(self, config_file_or_dict):
90
+ super().__init__(config_file_or_dict)
91
+ self._dtype = None
92
+ self.mismatches = []
93
+
94
+ def dtype(self):
95
+ if self._dtype is None:
96
+ raise ValueError("trainer_config_process() wasn't called yet to tell dtype")
97
+ return self._dtype
98
+
99
+ def is_auto(self, ds_key_long):
100
+ val = self.get_value(ds_key_long)
101
+ if val is None:
102
+ return False
103
+ else:
104
+ return val == "auto"
105
+
106
+ def fill_match(self, ds_key_long, hf_val, hf_key=None, must_match=True):
107
+ """
108
+ A utility method that massages the config file and can optionally verify that the values match.
109
+
110
+ 1. Replace "auto" values with `TrainingArguments` value.
111
+
112
+ 2. If it wasn't "auto" and `must_match` is true, then check that DS config matches Trainer
113
+ config values; if they mismatch, add the entry to `self.mismatches` - an error is raised during
114
+ `trainer_config_finalize` if one or more mismatches are found.
115
+
116
+ """
117
+ config, ds_key = self.find_config_node(ds_key_long)
118
+ if config is None:
119
+ return
120
+
121
+ if config.get(ds_key) == "auto":
122
+ config[ds_key] = hf_val
123
+ return
124
+
125
+ if not must_match:
126
+ return
127
+
128
+ ds_val = config.get(ds_key)
129
+ if ds_val is not None and ds_val != hf_val:
130
+ self.mismatches.append(f"- ds {ds_key_long}={ds_val} vs hf {hf_key}={hf_val}")
131
+
132
+ fill_only = partialmethod(fill_match, must_match=False)
133
+
134
+ def trainer_config_process(self, args, auto_find_batch_size=False):
135
+ """
136
+ Adjust the config with `TrainingArguments` values. This stage is run during `TrainingArguments` object
137
+ creation.
138
+ """
139
+ # DeepSpeed does:
140
+ # train_batch_size = world_size * train_micro_batch_size_per_gpu * gradient_accumulation_steps
141
+ train_batch_size = args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps
142
+ self.fill_match(
143
+ "train_micro_batch_size_per_gpu",
144
+ args.per_device_train_batch_size,
145
+ "per_device_train_batch_size",
146
+ not auto_find_batch_size,
147
+ )
148
+ self.fill_match(
149
+ "gradient_accumulation_steps",
150
+ args.gradient_accumulation_steps,
151
+ "gradient_accumulation_steps",
152
+ )
153
+ self.fill_match(
154
+ "train_batch_size",
155
+ train_batch_size,
156
+ "train_batch_size (calculated)",
157
+ not auto_find_batch_size,
158
+ )
159
+ self.fill_match("gradient_clipping", args.max_grad_norm, "max_grad_norm")
160
+
161
+ self.fill_match("optimizer.params.lr", args.learning_rate, "learning_rate")
162
+ self.fill_match(
163
+ "optimizer.params.betas",
164
+ [args.adam_beta1, args.adam_beta2],
165
+ "adam_beta1+adam_beta2",
166
+ )
167
+ self.fill_match("optimizer.params.eps", args.adam_epsilon, "adam_epsilon")
168
+ self.fill_match("optimizer.params.weight_decay", args.weight_decay, "weight_decay")
169
+
170
+ self.fill_only("scheduler.params.warmup_min_lr", 0) # not a trainer arg
171
+ self.fill_match("scheduler.params.warmup_max_lr", args.learning_rate, "learning_rate")
172
+ # total_num_steps - will get set in trainer_config_finalize
173
+
174
+ # fp16
175
+ if args.fp16 or args.fp16_full_eval:
176
+ fp16_backend = "apex" if args.fp16_backend == "apex" else "amp"
177
+ else:
178
+ fp16_backend = None
179
+
180
+ if args.save_on_each_node:
181
+ # deepspeed uses shared storage by default. Let's override this setting if save_on_each_node == True
182
+ self.config["checkpoint"] = self.config.get("checkpoint", {})
183
+ self.config["checkpoint"]["use_node_local_storage"] = args.save_on_each_node
184
+
185
+ # amp: similar to the pytorch native amp - it has a bunch of optional params but we won't set
186
+ # any here unless the user did the work
187
+ self.fill_match(
188
+ "fp16.enabled",
189
+ ((args.fp16 or args.fp16_full_eval) and fp16_backend == "amp"),
190
+ "fp16|fp16_full_eval+fp16_backend(amp)",
191
+ )
192
+
193
+ # apex: delegates amp work to apex (which needs to be available), but it cannot be used with any
194
+ # ZeRO features
195
+ self.fill_match("amp.enabled", fp16_backend == "apex", "fp16+fp16_backend(apex)")
196
+ self.fill_match("amp.opt_level", args.fp16_opt_level, "fp16_opt_level")
197
+
198
+ self.fill_match("bf16.enabled", (args.bf16 or args.bf16_full_eval), "bf16|bf16_full_eval")
199
+
200
+ # deepspeed's default mode is fp16 unless there is a config that says differently
201
+ if self.is_true("bf16.enabled"):
202
+ self._dtype = torch.bfloat16
203
+ elif self.is_false("fp16.enabled"):
204
+ self._dtype = torch.float32
205
+ else:
206
+ self._dtype = torch.float16
207
+
208
+ def trainer_config_finalize(self, args, model, num_training_steps):
209
+ """
210
+ This stage is run after we have the model and know num_training_steps.
211
+
212
+ Now we can complete the configuration process.
213
+ """
214
+ # zero
215
+
216
+ # deal with config keys that use `auto` value and rely on model's hidden_size
217
+ hidden_size_based_keys = [
218
+ "zero_optimization.reduce_bucket_size",
219
+ "zero_optimization.stage3_prefetch_bucket_size",
220
+ "zero_optimization.stage3_param_persistence_threshold",
221
+ ]
222
+ hidden_size_auto_keys = [x for x in hidden_size_based_keys if self.is_auto(x)]
223
+
224
+ if len(hidden_size_auto_keys) > 0:
225
+ if hasattr(model.config, "hidden_size"):
226
+ hidden_size = model.config.hidden_size
227
+ elif hasattr(model.config, "hidden_sizes"):
228
+ # if there are many hidden sizes pick the largest one
229
+ hidden_size = max(model.config.hidden_sizes)
230
+ else:
231
+ raise ValueError(
232
+ "The model's config file has neither `hidden_size` nor `hidden_sizes` entry, "
233
+ "therefore it's not possible to automatically fill out the following `auto` entries "
234
+ f"in the DeepSpeed config file: {hidden_size_auto_keys}. You can fix that by replacing "
235
+ "`auto` values for these keys with an integer value of your choice."
236
+ )
237
+
238
+ self.fill_only("zero_optimization.reduce_bucket_size", hidden_size * hidden_size)
239
+ if self.is_zero3():
240
+ # automatically assign the optimal config values based on model config
241
+ self.fill_only(
242
+ "zero_optimization.stage3_prefetch_bucket_size",
243
+ 0.9 * hidden_size * hidden_size,
244
+ )
245
+ self.fill_only(
246
+ "zero_optimization.stage3_param_persistence_threshold",
247
+ 10 * hidden_size,
248
+ )
249
+
250
+ # scheduler
251
+ self.fill_match(
252
+ "scheduler.params.total_num_steps",
253
+ num_training_steps,
254
+ "num_training_steps (calculated)",
255
+ )
256
+ self.fill_match(
257
+ "scheduler.params.warmup_num_steps",
258
+ args.get_warmup_steps(num_training_steps),
259
+ "warmup_steps",
260
+ )
261
+
262
+ if len(self.mismatches) > 0:
263
+ mismatches = "\n".join(self.mismatches)
264
+ raise ValueError(
265
+ "Please correct the following DeepSpeed config values that mismatch TrainingArguments"
266
+ f" values:\n{mismatches}\nThe easiest method is to set these DeepSpeed config values to 'auto'."
267
+ )
268
+
269
+
270
+ # keep the config object global to be able to access it anywhere during TrainingArguments life-cycle
271
+ _hf_deepspeed_config_weak_ref = None
272
+
273
+
274
+ def set_hf_deepspeed_config(hf_deepspeed_config_obj):
275
+ # this is a special weakref global object to allow us to get to Deepspeed config from APIs
276
+ # that don't have an easy way to get to the Deepspeed config outside of the Trainer domain.
277
+ global _hf_deepspeed_config_weak_ref
278
+ # will go away automatically when HfDeepSpeedConfig is destroyed (when TrainingArguments is destroyed)
279
+ _hf_deepspeed_config_weak_ref = weakref.ref(hf_deepspeed_config_obj)
280
+
281
+
282
+ def unset_hf_deepspeed_config():
283
+ # useful for unit tests to ensure the global state doesn't leak - call from `tearDown` method
284
+ global _hf_deepspeed_config_weak_ref
285
+ _hf_deepspeed_config_weak_ref = None
286
+
287
+
288
+ def is_deepspeed_zero3_enabled():
289
+ if _hf_deepspeed_config_weak_ref is not None and _hf_deepspeed_config_weak_ref() is not None:
290
+ return _hf_deepspeed_config_weak_ref().is_zero3()
291
+ else:
292
+ return False
293
+
294
+
295
+ def deepspeed_config():
296
+ if _hf_deepspeed_config_weak_ref is not None and _hf_deepspeed_config_weak_ref() is not None:
297
+ return _hf_deepspeed_config_weak_ref().config
298
+ else:
299
+ return None
300
+
301
+
302
+ def deepspeed_optim_sched(trainer, hf_deepspeed_config, args, num_training_steps, model_parameters):
303
+ """
304
+ A convenience wrapper that deals with optimizer and lr scheduler configuration.
305
+ """
306
+ from accelerate.utils import DummyOptim, DummyScheduler
307
+
308
+ config = hf_deepspeed_config.config
309
+
310
+ # Mixing and matching DS schedulers and optimizers is supported unless Offload is enabled in which case it's:
311
+ # 1. DS scheduler + DS optimizer: Yes
312
+ # 2. HF scheduler + HF optimizer: Mostly*
313
+ # 3. DS scheduler + HF optimizer: Mostly*
314
+ # 4. HF scheduler + DS optimizer: Yes
315
+ #
316
+ # Mostly*: All non-native DeepSpeed optimizers that have both CPU and GPU implementation should work (except LAMB)
317
+
318
+ optimizer = None
319
+ if "optimizer" in config:
320
+ if args.adafactor:
321
+ raise ValueError(
322
+ "--adafactor was passed, but also found `optimizer` configured in the DeepSpeed config. "
323
+ "Only one optimizer can be configured."
324
+ )
325
+ optimizer = DummyOptim(params=model_parameters)
326
+ else:
327
+ if hf_deepspeed_config.is_offload():
328
+ logger.info(
329
+ "Detected ZeRO Offload and non-DeepSpeed optimizers: This combination should work as long as the"
330
+ " custom optimizer has both CPU and GPU implementation (except LAMB)"
331
+ )
332
+
333
+ # ds supports Adam, OneBitAdam, and Lamb optimizers and can import other optimizers from torch.
334
+ # But trainer uses AdamW by default.
335
+ optimizer = trainer.create_optimizer()
336
+ # To use other optimizers requires voiding warranty with: `zero_allow_untested_optimizer`
337
+ config["zero_allow_untested_optimizer"] = True
338
+
339
+ lr_scheduler = None
340
+ if "scheduler" in config:
341
+ lr_scheduler = DummyScheduler(optimizer)
342
+ else:
343
+ if isinstance(optimizer, DummyOptim):
344
+
345
+ def _lr_scheduler_callable(optimizer):
346
+ # create a shallow copy first, so later modifications do not affect original trainer
347
+ trainer_copy = copy.copy(trainer)
348
+ # at the time _lr_scheduler_callable is called, trainer.lr_scheduler has been set
349
+ # update it to None so that we can re-create a new scheduler
350
+ trainer_copy.lr_scheduler = None
351
+ lr_scheduler = trainer_copy.create_scheduler(
352
+ num_training_steps=num_training_steps, optimizer=optimizer
353
+ )
354
+ return lr_scheduler
355
+
356
+ lr_scheduler = DummyScheduler(optimizer, lr_scheduler_callable=_lr_scheduler_callable)
357
+ else:
358
+ lr_scheduler = trainer.create_scheduler(num_training_steps=num_training_steps, optimizer=optimizer)
359
+
360
+ return optimizer, lr_scheduler
361
+
362
+
363
+ def deepspeed_init(trainer, num_training_steps, inference=False):
364
+ """
365
+ Init DeepSpeed, after updating the DeepSpeed configuration with any relevant Trainer's args.
366
+
367
+ If `resume_from_checkpoint` was passed then an attempt to resume from a previously saved checkpoint will be made.
368
+
369
+ Args:
370
+ trainer: Trainer object
371
+ num_training_steps: per single gpu
372
+ resume_from_checkpoint: path to a checkpoint if to resume from after normal DeepSpeedEngine load
373
+ inference: launch in inference mode (no optimizer and no lr scheduler)
374
+ auto_find_batch_size: whether to ignore the `train_micro_batch_size_per_gpu` argument as it's being
375
+ set automatically by the auto batch size finder
376
+
377
+ Returns: optimizer, lr_scheduler
378
+
379
+ We may use `deepspeed_init` more than once during the life of Trainer, when we do - it's a temp hack based on:
380
+ https://github.com/microsoft/DeepSpeed/issues/1394#issuecomment-937405374 until Deepspeed fixes a bug where it
381
+ can't resume from a checkpoint after it did some stepping https://github.com/microsoft/DeepSpeed/issues/1612
382
+
383
+ """
384
+ from deepspeed.utils import logger as ds_logger
385
+
386
+ model = trainer.model
387
+ args = trainer.args
388
+
389
+ hf_deepspeed_config = trainer.accelerator.state.deepspeed_plugin.hf_ds_config
390
+
391
+ # resume config update - some bits like `model` and `num_training_steps` only become available during train
392
+ hf_deepspeed_config.trainer_config_finalize(args, model, num_training_steps)
393
+
394
+ # set the Deepspeed log level consistent with the Trainer
395
+ ds_logger.setLevel(args.get_process_log_level())
396
+
397
+ if inference:
398
+ # only Z3 makes sense for the inference
399
+ if not hf_deepspeed_config.is_zero3():
400
+ raise ValueError("ZeRO inference only makes sense with ZeRO Stage 3 - please adjust your config")
401
+
402
+ # in case the training config is re-used for inference
403
+ hf_deepspeed_config.del_config_sub_tree("optimizer")
404
+ hf_deepspeed_config.del_config_sub_tree("lr_scheduler")
405
+ optimizer, lr_scheduler = None, None
406
+ model_parameters = None
407
+ else:
408
+ trainer.optimizer = None # important for when deepspeed_init is used as re-init
409
+ model_parameters = list(filter(lambda p: p.requires_grad, model.parameters()))
410
+ optimizer, lr_scheduler = deepspeed_optim_sched(
411
+ trainer, hf_deepspeed_config, args, num_training_steps, model_parameters
412
+ )
413
+
414
+ # keep for quick debug:
415
+ # from pprint import pprint; pprint(config)
416
+
417
+ return optimizer, lr_scheduler
418
+
419
+
420
+ def deepspeed_load_checkpoint(deepspeed_engine, checkpoint_path, load_module_strict=True):
421
+ # it's possible that the user is trying to resume from model_path, which doesn't necessarily
422
+ # contain a deepspeed checkpoint. e.g. examples just check if the dir exists and assume it's
423
+ # a resume from a checkpoint and not just a local pretrained weight. So we check here if the
424
+ # path contains what looks like a deepspeed checkpoint
425
+ import glob
426
+
427
+ deepspeed_checkpoint_dirs = sorted(glob.glob(f"{checkpoint_path}/global_step*"))
428
+
429
+ if len(deepspeed_checkpoint_dirs) > 0:
430
+ logger.info(f"Attempting to resume from {checkpoint_path}")
431
+ # this magically updates self.optimizer and self.lr_scheduler
432
+ load_path, _ = deepspeed_engine.load_checkpoint(
433
+ checkpoint_path,
434
+ load_module_strict=load_module_strict,
435
+ load_optimizer_states=True,
436
+ load_lr_scheduler_states=True,
437
+ )
438
+ if load_path is None:
439
+ raise ValueError(f"[deepspeed] failed to resume from checkpoint {checkpoint_path}")
440
+ else:
441
+ raise ValueError(f"Can't find a valid checkpoint at {checkpoint_path}")
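For context, a minimal sketch of the non-Trainer usage this module enables: creating an `HfDeepSpeedConfig` before `from_pretrained` stores the weakref that `is_deepspeed_zero3_enabled()` consults, so the weights are loaded directly in ZeRO-3 sharded form. The config dict below is deliberately minimal and only illustrative:

import deepspeed
from transformers import AutoModel
from transformers.integrations import HfDeepSpeedConfig

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {"stage": 3},
    "bf16": {"enabled": True},
}

# Must be instantiated (and kept alive) *before* from_pretrained so that
# is_deepspeed_zero3_enabled() returns True while the model is being loaded.
dschf = HfDeepSpeedConfig(ds_config)  # noqa: F841

model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval()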
llmeval-env/lib/python3.10/site-packages/transformers/integrations/integration_utils.py ADDED
@@ -0,0 +1,1914 @@
1
+ # Copyright 2020 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ """
15
+ Integrations with other Python libraries.
16
+ """
17
+ import functools
18
+ import importlib.metadata
19
+ import importlib.util
20
+ import json
21
+ import numbers
22
+ import os
23
+ import pickle
24
+ import shutil
25
+ import sys
26
+ import tempfile
27
+ from dataclasses import asdict, fields
28
+ from pathlib import Path
29
+ from typing import TYPE_CHECKING, Any, Dict, Literal, Optional, Union
30
+
31
+ import numpy as np
32
+ import packaging.version
33
+
34
+ from .. import __version__ as version
35
+ from ..utils import flatten_dict, is_datasets_available, is_pandas_available, is_torch_available, logging
36
+
37
+
38
+ logger = logging.get_logger(__name__)
39
+
40
+ if is_torch_available():
41
+ import torch
42
+
43
+ # comet_ml requires to be imported before any ML frameworks
44
+ _has_comet = importlib.util.find_spec("comet_ml") is not None and os.getenv("COMET_MODE", "").upper() != "DISABLED"
45
+ if _has_comet:
46
+ try:
47
+ import comet_ml # noqa: F401
48
+
49
+ if hasattr(comet_ml, "config") and comet_ml.config.get_config("comet.api_key"):
50
+ _has_comet = True
51
+ else:
52
+ if os.getenv("COMET_MODE", "").upper() != "DISABLED":
53
+ logger.warning("comet_ml is installed but `COMET_API_KEY` is not set.")
54
+ _has_comet = False
55
+ except (ImportError, ValueError):
56
+ _has_comet = False
57
+
58
+ _has_neptune = (
59
+ importlib.util.find_spec("neptune") is not None or importlib.util.find_spec("neptune-client") is not None
60
+ )
61
+ if TYPE_CHECKING and _has_neptune:
62
+ try:
63
+ _neptune_version = importlib.metadata.version("neptune")
64
+ logger.info(f"Neptune version {_neptune_version} available.")
65
+ except importlib.metadata.PackageNotFoundError:
66
+ try:
67
+ _neptune_version = importlib.metadata.version("neptune-client")
68
+ logger.info(f"Neptune-client version {_neptune_version} available.")
69
+ except importlib.metadata.PackageNotFoundError:
70
+ _has_neptune = False
71
+
72
+ from ..trainer_callback import ProgressCallback, TrainerCallback # noqa: E402
73
+ from ..trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun, IntervalStrategy # noqa: E402
74
+ from ..training_args import ParallelMode # noqa: E402
75
+ from ..utils import ENV_VARS_TRUE_VALUES, is_torch_xla_available # noqa: E402
76
+
77
+
78
+ # Integration functions:
79
+ def is_wandb_available():
80
+ # any value of WANDB_DISABLED disables wandb
81
+ if os.getenv("WANDB_DISABLED", "").upper() in ENV_VARS_TRUE_VALUES:
82
+ logger.warning(
83
+ "Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the "
84
+ "--report_to flag to control the integrations used for logging result (for instance --report_to none)."
85
+ )
86
+ return False
87
+ return importlib.util.find_spec("wandb") is not None
88
+
89
+
90
+ def is_clearml_available():
91
+ return importlib.util.find_spec("clearml") is not None
92
+
93
+
94
+ def is_comet_available():
95
+ return _has_comet
96
+
97
+
98
+ def is_tensorboard_available():
99
+ return importlib.util.find_spec("tensorboard") is not None or importlib.util.find_spec("tensorboardX") is not None
100
+
101
+
102
+ def is_optuna_available():
103
+ return importlib.util.find_spec("optuna") is not None
104
+
105
+
106
+ def is_ray_available():
107
+ return importlib.util.find_spec("ray") is not None
108
+
109
+
110
+ def is_ray_tune_available():
111
+ if not is_ray_available():
112
+ return False
113
+ return importlib.util.find_spec("ray.tune") is not None
114
+
115
+
116
+ def is_sigopt_available():
117
+ return importlib.util.find_spec("sigopt") is not None
118
+
119
+
120
+ def is_azureml_available():
121
+ if importlib.util.find_spec("azureml") is None:
122
+ return False
123
+ if importlib.util.find_spec("azureml.core") is None:
124
+ return False
125
+ return importlib.util.find_spec("azureml.core.run") is not None
126
+
127
+
128
+ def is_mlflow_available():
129
+ if os.getenv("DISABLE_MLFLOW_INTEGRATION", "FALSE").upper() == "TRUE":
130
+ return False
131
+ return importlib.util.find_spec("mlflow") is not None
132
+
133
+
134
+ def is_dagshub_available():
135
+ return None not in [importlib.util.find_spec("dagshub"), importlib.util.find_spec("mlflow")]
136
+
137
+
138
+ def is_neptune_available():
139
+ return _has_neptune
140
+
141
+
142
+ def is_codecarbon_available():
143
+ return importlib.util.find_spec("codecarbon") is not None
144
+
145
+
146
+ def is_flytekit_available():
147
+ return importlib.util.find_spec("flytekit") is not None
148
+
149
+
150
+ def is_flyte_deck_standard_available():
151
+ if not is_flytekit_available():
152
+ return False
153
+ return importlib.util.find_spec("flytekitplugins.deck") is not None
154
+
155
+
156
+ def is_dvclive_available():
157
+ return importlib.util.find_spec("dvclive") is not None
158
+
159
+
160
+ def hp_params(trial):
161
+ if is_optuna_available():
162
+ import optuna
163
+
164
+ if isinstance(trial, optuna.Trial):
165
+ return trial.params
166
+ if is_ray_tune_available():
167
+ if isinstance(trial, dict):
168
+ return trial
169
+
170
+ if is_sigopt_available():
171
+ if isinstance(trial, dict):
172
+ return trial
173
+
174
+ if is_wandb_available():
175
+ if isinstance(trial, dict):
176
+ return trial
177
+
178
+ raise RuntimeError(f"Unknown type for trial {trial.__class__}")
179
+
180
+
181
+ def run_hp_search_optuna(trainer, n_trials: int, direction: str, **kwargs) -> BestRun:
182
+ import optuna
183
+
184
+ if trainer.args.process_index == 0:
185
+
186
+ def _objective(trial, checkpoint_dir=None):
187
+ checkpoint = None
188
+ if checkpoint_dir:
189
+ for subdir in os.listdir(checkpoint_dir):
190
+ if subdir.startswith(PREFIX_CHECKPOINT_DIR):
191
+ checkpoint = os.path.join(checkpoint_dir, subdir)
192
+ trainer.objective = None
193
+ if trainer.args.world_size > 1:
194
+ if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED:
195
+ raise RuntimeError("Currently, only DDP (ParallelMode.DISTRIBUTED) is supported for Optuna HPO.")
196
+ trainer._hp_search_setup(trial)
197
+ torch.distributed.broadcast_object_list(pickle.dumps(trainer.args), src=0)
198
+ trainer.train(resume_from_checkpoint=checkpoint)
199
+ else:
200
+ trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
201
+ # If there hasn't been any evaluation during the training loop.
202
+ if getattr(trainer, "objective", None) is None:
203
+ metrics = trainer.evaluate()
204
+ trainer.objective = trainer.compute_objective(metrics)
205
+ return trainer.objective
206
+
207
+ timeout = kwargs.pop("timeout", None)
208
+ n_jobs = kwargs.pop("n_jobs", 1)
209
+ directions = direction if isinstance(direction, list) else None
210
+ direction = None if directions is not None else direction
211
+ study = optuna.create_study(direction=direction, directions=directions, **kwargs)
212
+ study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs)
213
+ if not study._is_multi_objective():
214
+ best_trial = study.best_trial
215
+ return BestRun(str(best_trial.number), best_trial.value, best_trial.params)
216
+ else:
217
+ best_trials = study.best_trials
218
+ return [BestRun(str(best.number), best.values, best.params) for best in best_trials]
219
+ else:
220
+ for i in range(n_trials):
221
+ trainer.objective = None
222
+ args_main_rank = list(pickle.dumps(trainer.args))
223
+ if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED:
224
+ raise RuntimeError("only support DDP optuna HPO for ParallelMode.DISTRIBUTED currently.")
225
+ torch.distributed.broadcast_object_list(args_main_rank, src=0)
226
+ args = pickle.loads(bytes(args_main_rank))
227
+ for key, value in asdict(args).items():
228
+ if key != "local_rank":
229
+ setattr(trainer.args, key, value)
230
+ trainer.train(resume_from_checkpoint=None)
231
+ # If there hasn't been any evaluation during the training loop.
232
+ if getattr(trainer, "objective", None) is None:
233
+ metrics = trainer.evaluate()
234
+ trainer.objective = trainer.compute_objective(metrics)
235
+ return None
236
+
237
+
238
+ def run_hp_search_ray(trainer, n_trials: int, direction: str, **kwargs) -> BestRun:
239
+ import ray
240
+ import ray.train
241
+
242
+ def _objective(trial: dict, local_trainer):
243
+ try:
244
+ from transformers.utils.notebook import NotebookProgressCallback
245
+
246
+ if local_trainer.pop_callback(NotebookProgressCallback):
247
+ local_trainer.add_callback(ProgressCallback)
248
+ except ModuleNotFoundError:
249
+ pass
250
+
251
+ local_trainer.objective = None
252
+
253
+ checkpoint = ray.train.get_checkpoint()
254
+ if checkpoint:
255
+ # Upon trial resume, the local_trainer's objective gets reset to None.
256
+ # If `local_trainer.train` is a noop (training has already reached
257
+ # the target number of epochs/steps), then this would
258
+ # trigger an unnecessary extra checkpoint at the end of training.
259
+ # -> Set the objective to a dummy value upon resume as a workaround.
260
+ local_trainer.objective = "objective"
261
+
262
+ with checkpoint.as_directory() as checkpoint_dir:
263
+ checkpoint_path = next(Path(checkpoint_dir).glob(f"{PREFIX_CHECKPOINT_DIR}*")).as_posix()
264
+ local_trainer.train(resume_from_checkpoint=checkpoint_path, trial=trial)
265
+ else:
266
+ local_trainer.train(trial=trial)
267
+
268
+ # If there hasn't been any evaluation during the training loop.
269
+ if getattr(local_trainer, "objective", None) is None:
270
+ metrics = local_trainer.evaluate()
271
+ local_trainer.objective = local_trainer.compute_objective(metrics)
272
+
273
+ metrics.update({"objective": local_trainer.objective, "done": True})
274
+
275
+ with tempfile.TemporaryDirectory() as temp_checkpoint_dir:
276
+ local_trainer._tune_save_checkpoint(checkpoint_dir=temp_checkpoint_dir)
277
+ checkpoint = ray.train.Checkpoint.from_directory(temp_checkpoint_dir)
278
+ ray.train.report(metrics, checkpoint=checkpoint)
279
+
280
+ if not trainer._memory_tracker.skip_memory_metrics:
281
+ from ..trainer_utils import TrainerMemoryTracker
282
+
283
+ logger.warning(
284
+ "Memory tracking for your Trainer is currently "
285
+ "enabled. Automatically disabling the memory tracker "
286
+ "since the memory tracker is not serializable."
287
+ )
288
+ trainer._memory_tracker = TrainerMemoryTracker(skip_memory_metrics=True)
289
+
290
+     # The model and the TensorBoard writer cannot be pickled, so we have to remove them (if they exist)
291
+ # while doing the ray hp search.
292
+ _tb_writer = trainer.pop_callback(TensorBoardCallback)
293
+ trainer.model = None
294
+
295
+ # Setup default `resources_per_trial`.
296
+ if "resources_per_trial" not in kwargs:
297
+ # Default to 1 CPU and 1 GPU (if applicable) per trial.
298
+ kwargs["resources_per_trial"] = {"cpu": 1}
299
+ if trainer.args.n_gpu > 0:
300
+ kwargs["resources_per_trial"]["gpu"] = 1
301
+ resource_msg = "1 CPU" + (" and 1 GPU" if trainer.args.n_gpu > 0 else "")
302
+ logger.info(
303
+ "No `resources_per_trial` arg was passed into "
304
+ "`hyperparameter_search`. Setting it to a default value "
305
+ f"of {resource_msg} for each trial."
306
+ )
307
+ # Make sure each trainer only uses GPUs that were allocated per trial.
308
+ gpus_per_trial = kwargs["resources_per_trial"].get("gpu", 0)
309
+ trainer.args._n_gpu = gpus_per_trial
310
+
311
+ # Setup default `progress_reporter`.
312
+ if "progress_reporter" not in kwargs:
313
+ from ray.tune import CLIReporter
314
+
315
+ kwargs["progress_reporter"] = CLIReporter(metric_columns=["objective"])
316
+
317
+ if "scheduler" in kwargs:
318
+ from ray.tune.schedulers import ASHAScheduler, HyperBandForBOHB, MedianStoppingRule, PopulationBasedTraining
319
+
320
+         # Check for `do_eval` and `evaluation_strategy` for schedulers that require intermediate reporting.
321
+ if isinstance(
322
+ kwargs["scheduler"], (ASHAScheduler, MedianStoppingRule, HyperBandForBOHB, PopulationBasedTraining)
323
+ ) and (not trainer.args.do_eval or trainer.args.evaluation_strategy == IntervalStrategy.NO):
324
+ raise RuntimeError(
325
+ "You are using {cls} as a scheduler but you haven't enabled evaluation during training. "
326
+ "This means your trials will not report intermediate results to Ray Tune, and "
327
+ "can thus not be stopped early or used to exploit other trials parameters. "
328
+ "If this is what you want, do not use {cls}. If you would like to use {cls}, "
329
+ "make sure you pass `do_eval=True` and `evaluation_strategy='steps'` in the "
330
+ "Trainer `args`.".format(cls=type(kwargs["scheduler"]).__name__)
331
+ )
332
+
333
+ trainable = ray.tune.with_parameters(_objective, local_trainer=trainer)
334
+
335
+ @functools.wraps(trainable)
336
+ def dynamic_modules_import_trainable(*args, **kwargs):
337
+ """
338
+ Wrapper around `tune.with_parameters` to ensure datasets_modules are loaded on each Actor.
339
+
340
+ Without this, an ImportError will be thrown. See https://github.com/huggingface/transformers/issues/11565.
341
+
342
+ Assumes that `_objective`, defined above, is a function.
343
+ """
344
+ if is_datasets_available():
345
+ import datasets.load
346
+
347
+ dynamic_modules_path = os.path.join(datasets.load.init_dynamic_modules(), "__init__.py")
348
+ # load dynamic_modules from path
349
+ spec = importlib.util.spec_from_file_location("datasets_modules", dynamic_modules_path)
350
+ datasets_modules = importlib.util.module_from_spec(spec)
351
+ sys.modules[spec.name] = datasets_modules
352
+ spec.loader.exec_module(datasets_modules)
353
+ return trainable(*args, **kwargs)
354
+
355
+ # special attr set by tune.with_parameters
356
+ if hasattr(trainable, "__mixins__"):
357
+ dynamic_modules_import_trainable.__mixins__ = trainable.__mixins__
358
+
359
+ analysis = ray.tune.run(
360
+ dynamic_modules_import_trainable,
361
+ config=trainer.hp_space(None),
362
+ num_samples=n_trials,
363
+ **kwargs,
364
+ )
365
+ best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3], scope=trainer.args.ray_scope)
366
+ best_run = BestRun(best_trial.trial_id, best_trial.last_result["objective"], best_trial.config, analysis)
367
+ if _tb_writer is not None:
368
+ trainer.add_callback(_tb_writer)
369
+ return best_run
370
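A similar hedged sketch for the Ray Tune branch: any extra keyword arguments are forwarded to `ray.tune.run`, so `resources_per_trial` below overrides the 1 CPU / 1 GPU default that the function would otherwise set. Values are placeholders.

```python
# Hypothetical driver code -- not part of this module.
best_run = trainer.hyperparameter_search(
    backend="ray",
    direction="maximize",
    n_trials=8,
    resources_per_trial={"cpu": 2, "gpu": 1},  # forwarded to ray.tune.run via **kwargs
)
```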
+
371
+
372
+ def run_hp_search_sigopt(trainer, n_trials: int, direction: str, **kwargs) -> BestRun:
373
+ import sigopt
374
+
375
+ if trainer.args.process_index == 0:
376
+         if packaging.version.parse(importlib.metadata.version("sigopt")) >= packaging.version.parse("8.0.0"):
377
+ sigopt.set_project("huggingface")
378
+
379
+ experiment = sigopt.create_experiment(
380
+ name="huggingface-tune",
381
+ type="offline",
382
+ parameters=trainer.hp_space(None),
383
+ metrics=[{"name": "objective", "objective": direction, "strategy": "optimize"}],
384
+ parallel_bandwidth=1,
385
+ budget=n_trials,
386
+ )
387
+
388
+ logger.info(f"created experiment: https://app.sigopt.com/experiment/{experiment.id}")
389
+
390
+ for run in experiment.loop():
391
+ with run:
392
+ trainer.objective = None
393
+ if trainer.args.world_size > 1:
394
+ if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED:
395
+                         raise RuntimeError("Currently only DDP Sigopt HPO is supported for ParallelMode.DISTRIBUTED.")
396
+ trainer._hp_search_setup(run.run)
397
+ torch.distributed.broadcast_object_list(pickle.dumps(trainer.args), src=0)
398
+ trainer.train(resume_from_checkpoint=None)
399
+ else:
400
+ trainer.train(resume_from_checkpoint=None, trial=run.run)
401
+ # If there hasn't been any evaluation during the training loop.
402
+ if getattr(trainer, "objective", None) is None:
403
+ metrics = trainer.evaluate()
404
+ trainer.objective = trainer.compute_objective(metrics)
405
+ run.log_metric("objective", trainer.objective)
406
+
407
+ best = list(experiment.get_best_runs())[0]
408
+ best_run = BestRun(best.id, best.values["objective"].value, best.assignments)
409
+ else:
410
+ from sigopt import Connection
411
+
412
+ conn = Connection()
413
+ proxies = kwargs.pop("proxies", None)
414
+ if proxies is not None:
415
+ conn.set_proxies(proxies)
416
+
417
+ experiment = conn.experiments().create(
418
+ name="huggingface-tune",
419
+ parameters=trainer.hp_space(None),
420
+ metrics=[{"name": "objective", "objective": direction, "strategy": "optimize"}],
421
+ parallel_bandwidth=1,
422
+ observation_budget=n_trials,
423
+ project="huggingface",
424
+ )
425
+ logger.info(f"created experiment: https://app.sigopt.com/experiment/{experiment.id}")
426
+
427
+ while experiment.progress.observation_count < experiment.observation_budget:
428
+ suggestion = conn.experiments(experiment.id).suggestions().create()
429
+ trainer.objective = None
430
+ if trainer.args.world_size > 1:
431
+ if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED:
432
+                     raise RuntimeError("Currently only DDP Sigopt HPO is supported for ParallelMode.DISTRIBUTED.")
433
+ trainer._hp_search_setup(suggestion)
434
+ torch.distributed.broadcast_object_list(pickle.dumps(trainer.args), src=0)
435
+ trainer.train(resume_from_checkpoint=None)
436
+ else:
437
+ trainer.train(resume_from_checkpoint=None, trial=suggestion)
438
+ # If there hasn't been any evaluation during the training loop.
439
+ if getattr(trainer, "objective", None) is None:
440
+ metrics = trainer.evaluate()
441
+ trainer.objective = trainer.compute_objective(metrics)
442
+
443
+ values = [{"name": "objective", "value": trainer.objective}]
444
+ obs = conn.experiments(experiment.id).observations().create(suggestion=suggestion.id, values=values)
445
+ logger.info(f"[suggestion_id, observation_id]: [{suggestion.id}, {obs.id}]")
446
+ experiment = conn.experiments(experiment.id).fetch()
447
+
448
+ best = list(conn.experiments(experiment.id).best_assignments().fetch().iterate_pages())[0]
449
+ best_run = BestRun(best.id, best.value, best.assignments)
450
+ return best_run
451
+ else:
452
+ for i in range(n_trials):
453
+ trainer.objective = None
454
+ args_main_rank = list(pickle.dumps(trainer.args))
455
+ if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED:
456
+                 raise RuntimeError("Currently only DDP Sigopt HPO is supported for ParallelMode.DISTRIBUTED.")
457
+ torch.distributed.broadcast_object_list(args_main_rank, src=0)
458
+ args = pickle.loads(bytes(args_main_rank))
459
+ for key, value in asdict(args).items():
460
+ if key != "local_rank":
461
+ setattr(trainer.args, key, value)
462
+ trainer.train(resume_from_checkpoint=None)
463
+ # If there hasn't been any evaluation during the training loop.
464
+ if getattr(trainer, "objective", None) is None:
465
+ metrics = trainer.evaluate()
466
+ trainer.objective = trainer.compute_objective(metrics)
467
+ return None
468
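A hedged sketch for the SigOpt branch. Here `trainer.hp_space(None)` is passed straight to SigOpt as the experiment's parameter definitions, so the search space follows SigOpt's name/type/bounds schema rather than Optuna's; treat the exact field names below as an assumption.

```python
# Hypothetical driver code -- not part of this module.
def sigopt_hp_space(trial):
    # SigOpt-style parameter definitions (schema assumed from SigOpt's experiment API).
    return [
        {"name": "learning_rate", "type": "double", "bounds": {"min": 1e-6, "max": 1e-3}},
        {"name": "per_device_train_batch_size", "type": "int", "bounds": {"min": 8, "max": 32}},
    ]

best_run = trainer.hyperparameter_search(
    backend="sigopt",
    hp_space=sigopt_hp_space,
    direction="maximize",
    n_trials=10,
)
```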
+
469
+
470
+ def run_hp_search_wandb(trainer, n_trials: int, direction: str, **kwargs) -> BestRun:
471
+ from ..integrations import is_wandb_available
472
+
473
+ if not is_wandb_available():
474
+ raise ImportError("This function needs wandb installed: `pip install wandb`")
475
+ import wandb
476
+
477
+ # add WandbCallback if not already added in trainer callbacks
478
+ reporting_to_wandb = False
479
+ for callback in trainer.callback_handler.callbacks:
480
+ if isinstance(callback, WandbCallback):
481
+ reporting_to_wandb = True
482
+ break
483
+ if not reporting_to_wandb:
484
+ trainer.add_callback(WandbCallback())
485
+ trainer.args.report_to = ["wandb"]
486
+ best_trial = {"run_id": None, "objective": None, "hyperparameters": None}
487
+ sweep_id = kwargs.pop("sweep_id", None)
488
+ project = kwargs.pop("project", None)
489
+ name = kwargs.pop("name", None)
490
+ entity = kwargs.pop("entity", None)
491
+ metric = kwargs.pop("metric", "eval/loss")
492
+
493
+ sweep_config = trainer.hp_space(None)
494
+ sweep_config["metric"]["goal"] = direction
495
+ sweep_config["metric"]["name"] = metric
496
+ if name:
497
+ sweep_config["name"] = name
498
+
499
+ def _objective():
500
+ run = wandb.run if wandb.run else wandb.init()
501
+ trainer.state.trial_name = run.name
502
+ run.config.update({"assignments": {}, "metric": metric})
503
+ config = wandb.config
504
+
505
+ trainer.objective = None
506
+
507
+ trainer.train(resume_from_checkpoint=None, trial=vars(config)["_items"])
508
+ # If there hasn't been any evaluation during the training loop.
509
+ if getattr(trainer, "objective", None) is None:
510
+ metrics = trainer.evaluate()
511
+ trainer.objective = trainer.compute_objective(metrics)
512
+ format_metrics = rewrite_logs(metrics)
513
+ if metric not in format_metrics:
514
+ logger.warning(
515
+                 f"Provided metric {metric} not found. This might result in unexpected sweep charts. The available"
516
+ f" metrics are {format_metrics.keys()}"
517
+ )
518
+ best_score = False
519
+ if best_trial["run_id"] is not None:
520
+ if direction == "minimize":
521
+ best_score = trainer.objective < best_trial["objective"]
522
+ elif direction == "maximize":
523
+ best_score = trainer.objective > best_trial["objective"]
524
+
525
+ if best_score or best_trial["run_id"] is None:
526
+ best_trial["run_id"] = run.id
527
+ best_trial["objective"] = trainer.objective
528
+ best_trial["hyperparameters"] = dict(config)
529
+
530
+ return trainer.objective
531
+
532
+ sweep_id = wandb.sweep(sweep_config, project=project, entity=entity) if not sweep_id else sweep_id
533
+ logger.info(f"wandb sweep id - {sweep_id}")
534
+ wandb.agent(sweep_id, function=_objective, count=n_trials)
535
+
536
+ return BestRun(best_trial["run_id"], best_trial["objective"], best_trial["hyperparameters"])
537
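For the W&B branch, `trainer.hp_space(None)` must return a sweep configuration dictionary: the function overwrites its `metric` entry from the `metric`/`direction` arguments and forwards `project`, `entity`, `name`, and `sweep_id` to `wandb`. A hedged sketch with placeholder values:

```python
# Hypothetical driver code -- not part of this module.
def wandb_hp_space(trial):
    return {
        "method": "random",
        "metric": {"name": "eval/loss", "goal": "minimize"},  # overwritten from `metric`/`direction` above
        "parameters": {
            "learning_rate": {"distribution": "log_uniform_values", "min": 1e-6, "max": 1e-3},
            "per_device_train_batch_size": {"values": [8, 16, 32]},
        },
    }

best_run = trainer.hyperparameter_search(
    backend="wandb",
    hp_space=wandb_hp_space,
    direction="minimize",
    n_trials=20,
    metric="eval/loss",
    project="my-sweeps",  # placeholder; forwarded to wandb.sweep above
)
```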
+
538
+
539
+ def get_available_reporting_integrations():
540
+ integrations = []
541
+ if is_azureml_available() and not is_mlflow_available():
542
+ integrations.append("azure_ml")
543
+ if is_comet_available():
544
+ integrations.append("comet_ml")
545
+ if is_dagshub_available():
546
+ integrations.append("dagshub")
547
+ if is_dvclive_available():
548
+ integrations.append("dvclive")
549
+ if is_mlflow_available():
550
+ integrations.append("mlflow")
551
+ if is_neptune_available():
552
+ integrations.append("neptune")
553
+ if is_tensorboard_available():
554
+ integrations.append("tensorboard")
555
+ if is_wandb_available():
556
+ integrations.append("wandb")
557
+ if is_codecarbon_available():
558
+ integrations.append("codecarbon")
559
+ if is_clearml_available():
560
+ integrations.append("clearml")
561
+ return integrations
562
+
563
+
564
+ def rewrite_logs(d):
565
+ new_d = {}
566
+ eval_prefix = "eval_"
567
+ eval_prefix_len = len(eval_prefix)
568
+ test_prefix = "test_"
569
+ test_prefix_len = len(test_prefix)
570
+ for k, v in d.items():
571
+ if k.startswith(eval_prefix):
572
+ new_d["eval/" + k[eval_prefix_len:]] = v
573
+ elif k.startswith(test_prefix):
574
+ new_d["test/" + k[test_prefix_len:]] = v
575
+ else:
576
+ new_d["train/" + k] = v
577
+ return new_d
578
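The prefixes handled by `rewrite_logs` decide which panel (`train/`, `eval/`, `test/`) a metric lands in for the logging callbacks below; a quick illustration of its behavior:

```python
# Behavior of rewrite_logs as defined above.
rewrite_logs({"eval_loss": 0.31, "test_accuracy": 0.92, "loss": 0.54})
# -> {"eval/loss": 0.31, "test/accuracy": 0.92, "train/loss": 0.54}
```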
+
579
+
580
+ class TensorBoardCallback(TrainerCallback):
581
+ """
582
+ A [`TrainerCallback`] that sends the logs to [TensorBoard](https://www.tensorflow.org/tensorboard).
583
+
584
+ Args:
585
+ tb_writer (`SummaryWriter`, *optional*):
586
+ The writer to use. Will instantiate one if not set.
587
+ """
588
+
589
+ def __init__(self, tb_writer=None):
590
+ has_tensorboard = is_tensorboard_available()
591
+ if not has_tensorboard:
592
+ raise RuntimeError(
593
+ "TensorBoardCallback requires tensorboard to be installed. Either update your PyTorch version or"
594
+ " install tensorboardX."
595
+ )
596
+ if has_tensorboard:
597
+ try:
598
+ from torch.utils.tensorboard import SummaryWriter # noqa: F401
599
+
600
+ self._SummaryWriter = SummaryWriter
601
+ except ImportError:
602
+ try:
603
+ from tensorboardX import SummaryWriter
604
+
605
+ self._SummaryWriter = SummaryWriter
606
+ except ImportError:
607
+ self._SummaryWriter = None
608
+ else:
609
+ self._SummaryWriter = None
610
+ self.tb_writer = tb_writer
611
+
612
+ def _init_summary_writer(self, args, log_dir=None):
613
+ log_dir = log_dir or args.logging_dir
614
+ if self._SummaryWriter is not None:
615
+ self.tb_writer = self._SummaryWriter(log_dir=log_dir)
616
+
617
+ def on_train_begin(self, args, state, control, **kwargs):
618
+ if not state.is_world_process_zero:
619
+ return
620
+
621
+ log_dir = None
622
+
623
+ if state.is_hyper_param_search:
624
+ trial_name = state.trial_name
625
+ if trial_name is not None:
626
+ log_dir = os.path.join(args.logging_dir, trial_name)
627
+
628
+ if self.tb_writer is None:
629
+ self._init_summary_writer(args, log_dir)
630
+
631
+ if self.tb_writer is not None:
632
+ self.tb_writer.add_text("args", args.to_json_string())
633
+ if "model" in kwargs:
634
+ model = kwargs["model"]
635
+ if hasattr(model, "config") and model.config is not None:
636
+ model_config_json = model.config.to_json_string()
637
+ self.tb_writer.add_text("model_config", model_config_json)
638
+
639
+ def on_log(self, args, state, control, logs=None, **kwargs):
640
+ if not state.is_world_process_zero:
641
+ return
642
+
643
+ if self.tb_writer is None:
644
+ self._init_summary_writer(args)
645
+
646
+ if self.tb_writer is not None:
647
+ logs = rewrite_logs(logs)
648
+ for k, v in logs.items():
649
+ if isinstance(v, (int, float)):
650
+ self.tb_writer.add_scalar(k, v, state.global_step)
651
+ else:
652
+ logger.warning(
653
+ "Trainer is attempting to log a value of "
654
+ f'"{v}" of type {type(v)} for key "{k}" as a scalar. '
655
+ "This invocation of Tensorboard's writer.add_scalar() "
656
+ "is incorrect so we dropped this attribute."
657
+ )
658
+ self.tb_writer.flush()
659
+
660
+ def on_train_end(self, args, state, control, **kwargs):
661
+ if self.tb_writer:
662
+ self.tb_writer.close()
663
+ self.tb_writer = None
664
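A minimal sketch of enabling `TensorBoardCallback`, either implicitly via `report_to` or explicitly with a pre-built writer; the paths and step counts are placeholders.

```python
# Hypothetical driver code -- not part of this module.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    report_to=["tensorboard"],
    logging_dir="runs/my-experiment",  # becomes the SummaryWriter log_dir in _init_summary_writer above
    logging_steps=50,
)
# Or attach the callback explicitly with your own writer:
# trainer.add_callback(TensorBoardCallback(tb_writer=my_summary_writer))
```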
+
665
+
666
+ class WandbCallback(TrainerCallback):
667
+ """
668
+     A [`TrainerCallback`] that logs metrics, media, and model checkpoints to [Weights & Biases](https://www.wandb.com/).
669
+ """
670
+
671
+ def __init__(self):
672
+ has_wandb = is_wandb_available()
673
+ if not has_wandb:
674
+ raise RuntimeError("WandbCallback requires wandb to be installed. Run `pip install wandb`.")
675
+ if has_wandb:
676
+ import wandb
677
+
678
+ self._wandb = wandb
679
+ self._initialized = False
680
+ # log model
681
+ if os.getenv("WANDB_LOG_MODEL", "FALSE").upper() in ENV_VARS_TRUE_VALUES.union({"TRUE"}):
682
+             logger.warning(
683
+ f"Setting `WANDB_LOG_MODEL` as {os.getenv('WANDB_LOG_MODEL')} is deprecated and will be removed in "
684
+ "version 5 of transformers. Use one of `'end'` or `'checkpoint'` instead."
685
+ )
686
+ logger.info(f"Setting `WANDB_LOG_MODEL` from {os.getenv('WANDB_LOG_MODEL')} to `end` instead")
687
+ self._log_model = "end"
688
+ else:
689
+ self._log_model = os.getenv("WANDB_LOG_MODEL", "false").lower()
690
+
691
+ def setup(self, args, state, model, **kwargs):
692
+ """
693
+ Setup the optional Weights & Biases (*wandb*) integration.
694
+
695
+ One can subclass and override this method to customize the setup if needed. Find more information
696
+ [here](https://docs.wandb.ai/guides/integrations/huggingface). You can also override the following environment
697
+ variables:
698
+
699
+ Environment:
700
+ - **WANDB_LOG_MODEL** (`str`, *optional*, defaults to `"false"`):
701
+ Whether to log model and checkpoints during training. Can be `"end"`, `"checkpoint"` or `"false"`. If set
702
+ to `"end"`, the model will be uploaded at the end of training. If set to `"checkpoint"`, the checkpoint
703
+             will be uploaded every `args.save_steps`. If set to `"false"`, the model will not be uploaded. Use along
704
+             with [`~transformers.TrainingArguments.load_best_model_at_end`] to upload the best model.
705
+
706
+ <Deprecated version="5.0">
707
+
708
+ Setting `WANDB_LOG_MODEL` as `bool` will be deprecated in version 5 of 🤗 Transformers.
709
+
710
+ </Deprecated>
711
+         - **WANDB_WATCH** (`str`, *optional*, defaults to `"false"`):
712
+ Can be `"gradients"`, `"all"`, `"parameters"`, or `"false"`. Set to `"all"` to log gradients and
713
+ parameters.
714
+ - **WANDB_PROJECT** (`str`, *optional*, defaults to `"huggingface"`):
715
+ Set this to a custom string to store results in a different project.
716
+ - **WANDB_DISABLED** (`bool`, *optional*, defaults to `False`):
717
+ Whether to disable wandb entirely. Set `WANDB_DISABLED=true` to disable.
718
+ """
719
+ if self._wandb is None:
720
+ return
721
+ self._initialized = True
722
+ if state.is_world_process_zero:
723
+ logger.info(
724
+ 'Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"'
725
+ )
726
+ combined_dict = {**args.to_dict()}
727
+
728
+ if hasattr(model, "config") and model.config is not None:
729
+ model_config = model.config.to_dict()
730
+ combined_dict = {**model_config, **combined_dict}
731
+ trial_name = state.trial_name
732
+ init_args = {}
733
+ if trial_name is not None:
734
+ init_args["name"] = trial_name
735
+ init_args["group"] = args.run_name
736
+ else:
737
+ if not (args.run_name is None or args.run_name == args.output_dir):
738
+ init_args["name"] = args.run_name
739
+
740
+ if self._wandb.run is None:
741
+ self._wandb.init(
742
+ project=os.getenv("WANDB_PROJECT", "huggingface"),
743
+ **init_args,
744
+ )
745
+ # add config parameters (run may have been created manually)
746
+ self._wandb.config.update(combined_dict, allow_val_change=True)
747
+
748
+ # define default x-axis (for latest wandb versions)
749
+ if getattr(self._wandb, "define_metric", None):
750
+ self._wandb.define_metric("train/global_step")
751
+ self._wandb.define_metric("*", step_metric="train/global_step", step_sync=True)
752
+
753
+ # keep track of model topology and gradients, unsupported on TPU
754
+ _watch_model = os.getenv("WANDB_WATCH", "false")
755
+ if not is_torch_xla_available() and _watch_model in ("all", "parameters", "gradients"):
756
+ self._wandb.watch(model, log=_watch_model, log_freq=max(100, state.logging_steps))
757
+ self._wandb.run._label(code="transformers_trainer")
758
+
759
+ def on_train_begin(self, args, state, control, model=None, **kwargs):
760
+ if self._wandb is None:
761
+ return
762
+ hp_search = state.is_hyper_param_search
763
+ if hp_search:
764
+ self._wandb.finish()
765
+ self._initialized = False
766
+ args.run_name = None
767
+ if not self._initialized:
768
+ self.setup(args, state, model, **kwargs)
769
+
770
+ def on_train_end(self, args, state, control, model=None, tokenizer=None, **kwargs):
771
+ if self._wandb is None:
772
+ return
773
+ if self._log_model in ("end", "checkpoint") and self._initialized and state.is_world_process_zero:
774
+ from ..trainer import Trainer
775
+
776
+ fake_trainer = Trainer(args=args, model=model, tokenizer=tokenizer)
777
+ with tempfile.TemporaryDirectory() as temp_dir:
778
+ fake_trainer.save_model(temp_dir)
779
+ metadata = (
780
+ {
781
+ k: v
782
+ for k, v in dict(self._wandb.summary).items()
783
+ if isinstance(v, numbers.Number) and not k.startswith("_")
784
+ }
785
+ if not args.load_best_model_at_end
786
+ else {
787
+ f"eval/{args.metric_for_best_model}": state.best_metric,
788
+                     "train/total_flos": state.total_flos,
789
+ }
790
+ )
791
+ logger.info("Logging model artifacts. ...")
792
+ model_name = (
793
+ f"model-{self._wandb.run.id}"
794
+ if (args.run_name is None or args.run_name == args.output_dir)
795
+ else f"model-{self._wandb.run.name}"
796
+ )
797
+ artifact = self._wandb.Artifact(name=model_name, type="model", metadata=metadata)
798
+ for f in Path(temp_dir).glob("*"):
799
+ if f.is_file():
800
+ with artifact.new_file(f.name, mode="wb") as fa:
801
+ fa.write(f.read_bytes())
802
+ self._wandb.run.log_artifact(artifact)
803
+
804
+ def on_log(self, args, state, control, model=None, logs=None, **kwargs):
805
+ single_value_scalars = [
806
+ "train_runtime",
807
+ "train_samples_per_second",
808
+ "train_steps_per_second",
809
+ "train_loss",
810
+ "total_flos",
811
+ ]
812
+
813
+ if self._wandb is None:
814
+ return
815
+ if not self._initialized:
816
+ self.setup(args, state, model)
817
+ if state.is_world_process_zero:
818
+ for k, v in logs.items():
819
+ if k in single_value_scalars:
820
+ self._wandb.run.summary[k] = v
821
+ non_scalar_logs = {k: v for k, v in logs.items() if k not in single_value_scalars}
822
+ non_scalar_logs = rewrite_logs(non_scalar_logs)
823
+ self._wandb.log({**non_scalar_logs, "train/global_step": state.global_step})
824
+
825
+ def on_save(self, args, state, control, **kwargs):
826
+ if self._log_model == "checkpoint" and self._initialized and state.is_world_process_zero:
827
+ checkpoint_metadata = {
828
+ k: v
829
+ for k, v in dict(self._wandb.summary).items()
830
+ if isinstance(v, numbers.Number) and not k.startswith("_")
831
+ }
832
+
833
+ ckpt_dir = f"checkpoint-{state.global_step}"
834
+ artifact_path = os.path.join(args.output_dir, ckpt_dir)
835
+ logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. ...")
836
+ checkpoint_name = (
837
+ f"checkpoint-{self._wandb.run.id}"
838
+ if (args.run_name is None or args.run_name == args.output_dir)
839
+ else f"checkpoint-{self._wandb.run.name}"
840
+ )
841
+ artifact = self._wandb.Artifact(name=checkpoint_name, type="model", metadata=checkpoint_metadata)
842
+ artifact.add_dir(artifact_path)
843
+ self._wandb.log_artifact(artifact, aliases=[f"checkpoint-{state.global_step}"])
844
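The W&B behaviour above is driven almost entirely by the environment variables documented in `setup`; a hedged sketch of a typical configuration (project and run names are placeholders):

```python
# Hypothetical driver code -- not part of this module.
import os

from transformers import TrainingArguments

os.environ["WANDB_PROJECT"] = "my-project"    # placeholder project
os.environ["WANDB_LOG_MODEL"] = "checkpoint"  # upload an artifact on every save (see on_save above)
os.environ["WANDB_WATCH"] = "gradients"       # enables wandb.watch(model, ...) in setup above

args = TrainingArguments(output_dir="out", report_to=["wandb"], run_name="my-run")
```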
+
845
+
846
+ class CometCallback(TrainerCallback):
847
+ """
848
+ A [`TrainerCallback`] that sends the logs to [Comet ML](https://www.comet.ml/site/).
849
+ """
850
+
851
+ def __init__(self):
852
+ if not _has_comet:
853
+ raise RuntimeError("CometCallback requires comet-ml to be installed. Run `pip install comet-ml`.")
854
+ self._initialized = False
855
+ self._log_assets = False
856
+
857
+ def setup(self, args, state, model):
858
+ """
859
+ Setup the optional Comet.ml integration.
860
+
861
+ Environment:
862
+ - **COMET_MODE** (`str`, *optional*, defaults to `ONLINE`):
863
+ Whether to create an online, offline experiment or disable Comet logging. Can be `OFFLINE`, `ONLINE`, or
864
+ `DISABLED`.
865
+ - **COMET_PROJECT_NAME** (`str`, *optional*):
866
+ Comet project name for experiments.
867
+ - **COMET_OFFLINE_DIRECTORY** (`str`, *optional*):
868
+ Folder to use for saving offline experiments when `COMET_MODE` is `OFFLINE`.
869
+ - **COMET_LOG_ASSETS** (`str`, *optional*, defaults to `TRUE`):
870
+ Whether or not to log training assets (tf event logs, checkpoints, etc), to Comet. Can be `TRUE`, or
871
+ `FALSE`.
872
+
873
+ For a number of configurable items in the environment, see
874
+ [here](https://www.comet.ml/docs/python-sdk/advanced/#comet-configuration-variables).
875
+ """
876
+ self._initialized = True
877
+ log_assets = os.getenv("COMET_LOG_ASSETS", "FALSE").upper()
878
+ if log_assets in {"TRUE", "1"}:
879
+ self._log_assets = True
880
+ if state.is_world_process_zero:
881
+ comet_mode = os.getenv("COMET_MODE", "ONLINE").upper()
882
+ experiment = None
883
+ experiment_kwargs = {"project_name": os.getenv("COMET_PROJECT_NAME", "huggingface")}
884
+ if comet_mode == "ONLINE":
885
+ experiment = comet_ml.Experiment(**experiment_kwargs)
886
+ experiment.log_other("Created from", "transformers")
887
+ logger.info("Automatic Comet.ml online logging enabled")
888
+ elif comet_mode == "OFFLINE":
889
+ experiment_kwargs["offline_directory"] = os.getenv("COMET_OFFLINE_DIRECTORY", "./")
890
+ experiment = comet_ml.OfflineExperiment(**experiment_kwargs)
891
+ experiment.log_other("Created from", "transformers")
892
+ logger.info("Automatic Comet.ml offline logging enabled; use `comet upload` when finished")
893
+ if experiment is not None:
894
+ experiment._set_model_graph(model, framework="transformers")
895
+ experiment._log_parameters(args, prefix="args/", framework="transformers")
896
+ if hasattr(model, "config"):
897
+ experiment._log_parameters(model.config, prefix="config/", framework="transformers")
898
+
899
+ def on_train_begin(self, args, state, control, model=None, **kwargs):
900
+ if not self._initialized:
901
+ self.setup(args, state, model)
902
+
903
+ def on_log(self, args, state, control, model=None, logs=None, **kwargs):
904
+ if not self._initialized:
905
+ self.setup(args, state, model)
906
+ if state.is_world_process_zero:
907
+ experiment = comet_ml.config.get_global_experiment()
908
+ if experiment is not None:
909
+ experiment._log_metrics(logs, step=state.global_step, epoch=state.epoch, framework="transformers")
910
+
911
+ def on_train_end(self, args, state, control, **kwargs):
912
+ if self._initialized and state.is_world_process_zero:
913
+ experiment = comet_ml.config.get_global_experiment()
914
+ if experiment is not None:
915
+ if self._log_assets is True:
916
+ logger.info("Logging checkpoints. This may take time.")
917
+ experiment.log_asset_folder(
918
+ args.output_dir, recursive=True, log_file_name=True, step=state.global_step
919
+ )
920
+ experiment.end()
921
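Likewise, `CometCallback` is configured through the environment variables listed in its `setup` docstring; a hedged sketch with placeholder values:

```python
# Hypothetical driver code -- not part of this module.
import os

os.environ["COMET_PROJECT_NAME"] = "my-project"         # placeholder
os.environ["COMET_MODE"] = "OFFLINE"                    # "ONLINE", "OFFLINE", or "DISABLED"
os.environ["COMET_OFFLINE_DIRECTORY"] = "./comet-logs"  # only read when COMET_MODE is OFFLINE
os.environ["COMET_LOG_ASSETS"] = "TRUE"                 # upload output_dir contents at train end
```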
+
922
+
923
+ class AzureMLCallback(TrainerCallback):
924
+ """
925
+ A [`TrainerCallback`] that sends the logs to [AzureML](https://pypi.org/project/azureml-sdk/).
926
+ """
927
+
928
+ def __init__(self, azureml_run=None):
929
+ if not is_azureml_available():
930
+ raise RuntimeError("AzureMLCallback requires azureml to be installed. Run `pip install azureml-sdk`.")
931
+ self.azureml_run = azureml_run
932
+
933
+ def on_init_end(self, args, state, control, **kwargs):
934
+ from azureml.core.run import Run
935
+
936
+ if self.azureml_run is None and state.is_world_process_zero:
937
+ self.azureml_run = Run.get_context()
938
+
939
+ def on_log(self, args, state, control, logs=None, **kwargs):
940
+ if self.azureml_run and state.is_world_process_zero:
941
+ for k, v in logs.items():
942
+ if isinstance(v, (int, float)):
943
+ self.azureml_run.log(k, v, description=k)
944
+
945
+
946
+ class MLflowCallback(TrainerCallback):
947
+ """
948
+ A [`TrainerCallback`] that sends the logs to [MLflow](https://www.mlflow.org/). Can be disabled by setting
949
+ environment variable `DISABLE_MLFLOW_INTEGRATION = TRUE`.
950
+ """
951
+
952
+ def __init__(self):
953
+ if not is_mlflow_available():
954
+ raise RuntimeError("MLflowCallback requires mlflow to be installed. Run `pip install mlflow`.")
955
+ import mlflow
956
+
957
+ self._MAX_PARAM_VAL_LENGTH = mlflow.utils.validation.MAX_PARAM_VAL_LENGTH
958
+ self._MAX_PARAMS_TAGS_PER_BATCH = mlflow.utils.validation.MAX_PARAMS_TAGS_PER_BATCH
959
+
960
+ self._initialized = False
961
+ self._auto_end_run = False
962
+ self._log_artifacts = False
963
+ self._ml_flow = mlflow
964
+
965
+ def setup(self, args, state, model):
966
+ """
967
+ Setup the optional MLflow integration.
968
+
969
+ Environment:
970
+ - **HF_MLFLOW_LOG_ARTIFACTS** (`str`, *optional*):
971
+ Whether to use MLflow `.log_artifact()` facility to log artifacts. This only makes sense if logging to a
972
+ remote server, e.g. s3 or GCS. If set to `True` or *1*, will copy each saved checkpoint on each save in
973
+ [`TrainingArguments`]'s `output_dir` to the local or remote artifact storage. Using it without a remote
974
+ storage will just copy the files to your artifact location.
975
+ - **MLFLOW_TRACKING_URI** (`str`, *optional*):
976
+ Whether to store runs at a specific path or remote server. Unset by default, which skips setting the
977
+ tracking URI entirely.
978
+ - **MLFLOW_EXPERIMENT_NAME** (`str`, *optional*, defaults to `None`):
979
+                 The name of the MLflow experiment under which to launch the run. Defaults to `None`, which will point
980
+ to the `Default` experiment in MLflow. Otherwise, it is a case sensitive name of the experiment to be
981
+ activated. If an experiment with this name does not exist, a new experiment with this name is created.
982
+ - **MLFLOW_TAGS** (`str`, *optional*):
983
+ A string dump of a dictionary of key/value pair to be added to the MLflow run as tags. Example:
984
+ `os.environ['MLFLOW_TAGS']='{"release.candidate": "RC1", "release.version": "2.2.0"}'`.
985
+ - **MLFLOW_NESTED_RUN** (`str`, *optional*):
986
+ Whether to use MLflow nested runs. If set to `True` or *1*, will create a nested run inside the current
987
+ run.
988
+ - **MLFLOW_RUN_ID** (`str`, *optional*):
989
+                 Allows reattaching to an existing run, which can be useful when resuming training from a checkpoint. When
990
+ `MLFLOW_RUN_ID` environment variable is set, `start_run` attempts to resume a run with the specified run ID
991
+ and other parameters are ignored.
992
+ - **MLFLOW_FLATTEN_PARAMS** (`str`, *optional*, defaults to `False`):
993
+ Whether to flatten the parameters dictionary before logging.
994
+ """
995
+ self._log_artifacts = os.getenv("HF_MLFLOW_LOG_ARTIFACTS", "FALSE").upper() in ENV_VARS_TRUE_VALUES
996
+ self._nested_run = os.getenv("MLFLOW_NESTED_RUN", "FALSE").upper() in ENV_VARS_TRUE_VALUES
997
+ self._tracking_uri = os.getenv("MLFLOW_TRACKING_URI", None)
998
+ self._experiment_name = os.getenv("MLFLOW_EXPERIMENT_NAME", None)
999
+ self._flatten_params = os.getenv("MLFLOW_FLATTEN_PARAMS", "FALSE").upper() in ENV_VARS_TRUE_VALUES
1000
+ self._run_id = os.getenv("MLFLOW_RUN_ID", None)
1001
+
1002
+ # "synchronous" flag is only available with mlflow version >= 2.8.0
1003
+ # https://github.com/mlflow/mlflow/pull/9705
1004
+ # https://github.com/mlflow/mlflow/releases/tag/v2.8.0
1005
+ self._async_log = packaging.version.parse(self._ml_flow.__version__) >= packaging.version.parse("2.8.0")
1006
+
1007
+ logger.debug(
1008
+ f"MLflow experiment_name={self._experiment_name}, run_name={args.run_name}, nested={self._nested_run},"
1009
+             f" tags={os.getenv('MLFLOW_TAGS', None)}, tracking_uri={self._tracking_uri}"
1010
+ )
1011
+ if state.is_world_process_zero:
1012
+ if not self._ml_flow.is_tracking_uri_set():
1013
+ if self._tracking_uri:
1014
+ self._ml_flow.set_tracking_uri(self._tracking_uri)
1015
+ logger.debug(f"MLflow tracking URI is set to {self._tracking_uri}")
1016
+ else:
1017
+ logger.debug(
1018
+ "Environment variable `MLFLOW_TRACKING_URI` is not provided and therefore will not be"
1019
+ " explicitly set."
1020
+ )
1021
+ else:
1022
+ logger.debug(f"MLflow tracking URI is set to {self._ml_flow.get_tracking_uri()}")
1023
+
1024
+ if self._ml_flow.active_run() is None or self._nested_run or self._run_id:
1025
+ if self._experiment_name:
1026
+                     # Using set_experiment() ensures that the experiment is created if it does not exist
1027
+ self._ml_flow.set_experiment(self._experiment_name)
1028
+ self._ml_flow.start_run(run_name=args.run_name, nested=self._nested_run)
1029
+ logger.debug(f"MLflow run started with run_id={self._ml_flow.active_run().info.run_id}")
1030
+ self._auto_end_run = True
1031
+ combined_dict = args.to_dict()
1032
+ if hasattr(model, "config") and model.config is not None:
1033
+ model_config = model.config.to_dict()
1034
+ combined_dict = {**model_config, **combined_dict}
1035
+ combined_dict = flatten_dict(combined_dict) if self._flatten_params else combined_dict
1036
+ # remove params that are too long for MLflow
1037
+ for name, value in list(combined_dict.items()):
1038
+ # internally, all values are converted to str in MLflow
1039
+ if len(str(value)) > self._MAX_PARAM_VAL_LENGTH:
1040
+ logger.warning(
1041
+ f'Trainer is attempting to log a value of "{value}" for key "{name}" as a parameter. MLflow\'s'
1042
+ " log_param() only accepts values no longer than 250 characters so we dropped this attribute."
1043
+ " You can use `MLFLOW_FLATTEN_PARAMS` environment variable to flatten the parameters and"
1044
+ " avoid this message."
1045
+ )
1046
+ del combined_dict[name]
1047
+ # MLflow cannot log more than 100 values in one go, so we have to split it
1048
+ combined_dict_items = list(combined_dict.items())
1049
+ for i in range(0, len(combined_dict_items), self._MAX_PARAMS_TAGS_PER_BATCH):
1050
+ if self._async_log:
1051
+ self._ml_flow.log_params(
1052
+ dict(combined_dict_items[i : i + self._MAX_PARAMS_TAGS_PER_BATCH]), synchronous=False
1053
+ )
1054
+ else:
1055
+ self._ml_flow.log_params(dict(combined_dict_items[i : i + self._MAX_PARAMS_TAGS_PER_BATCH]))
1056
+ mlflow_tags = os.getenv("MLFLOW_TAGS", None)
1057
+ if mlflow_tags:
1058
+ mlflow_tags = json.loads(mlflow_tags)
1059
+ self._ml_flow.set_tags(mlflow_tags)
1060
+ self._initialized = True
1061
+
1062
+ def on_train_begin(self, args, state, control, model=None, **kwargs):
1063
+ if not self._initialized:
1064
+ self.setup(args, state, model)
1065
+
1066
+ def on_log(self, args, state, control, logs, model=None, **kwargs):
1067
+ if not self._initialized:
1068
+ self.setup(args, state, model)
1069
+ if state.is_world_process_zero:
1070
+ metrics = {}
1071
+ for k, v in logs.items():
1072
+ if isinstance(v, (int, float)):
1073
+ metrics[k] = v
1074
+ elif isinstance(v, torch.Tensor) and v.numel() == 1:
1075
+ metrics[k] = v.item()
1076
+ else:
1077
+ logger.warning(
1078
+ f'Trainer is attempting to log a value of "{v}" of type {type(v)} for key "{k}" as a metric. '
1079
+ "MLflow's log_metric() only accepts float and int types so we dropped this attribute."
1080
+ )
1081
+
1082
+ if self._async_log:
1083
+ self._ml_flow.log_metrics(metrics=metrics, step=state.global_step, synchronous=False)
1084
+ else:
1085
+ self._ml_flow.log_metrics(metrics=metrics, step=state.global_step)
1086
+
1087
+ def on_train_end(self, args, state, control, **kwargs):
1088
+ if self._initialized and state.is_world_process_zero:
1089
+ if self._auto_end_run and self._ml_flow.active_run():
1090
+ self._ml_flow.end_run()
1091
+
1092
+ def on_save(self, args, state, control, **kwargs):
1093
+ if self._initialized and state.is_world_process_zero and self._log_artifacts:
1094
+ ckpt_dir = f"checkpoint-{state.global_step}"
1095
+ artifact_path = os.path.join(args.output_dir, ckpt_dir)
1096
+ logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. This may take time.")
1097
+ self._ml_flow.pyfunc.log_model(
1098
+ ckpt_dir,
1099
+ artifacts={"model_path": artifact_path},
1100
+ python_model=self._ml_flow.pyfunc.PythonModel(),
1101
+ )
1102
+
1103
+ def __del__(self):
1104
+ # if the previous run is not terminated correctly, the fluent API will
1105
+ # not let you start a new run before the previous one is killed
1106
+ if (
1107
+ self._auto_end_run
1108
+ and callable(getattr(self._ml_flow, "active_run", None))
1109
+ and self._ml_flow.active_run() is not None
1110
+ ):
1111
+ self._ml_flow.end_run()
1112
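The MLflow integration is likewise configured through environment variables read in `setup`; a hedged sketch (the tracking URI and experiment name are placeholders, and the `MLFLOW_TAGS` value mirrors the docstring example above):

```python
# Hypothetical driver code -- not part of this module.
import os

os.environ["MLFLOW_TRACKING_URI"] = "http://localhost:5000"  # placeholder tracking server
os.environ["MLFLOW_EXPERIMENT_NAME"] = "my-experiment"        # placeholder experiment
os.environ["MLFLOW_FLATTEN_PARAMS"] = "TRUE"
os.environ["MLFLOW_TAGS"] = '{"release.candidate": "RC1", "release.version": "2.2.0"}'
os.environ["HF_MLFLOW_LOG_ARTIFACTS"] = "TRUE"                # log each checkpoint via on_save above
```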
+
1113
+
1114
+ class DagsHubCallback(MLflowCallback):
1115
+ """
1116
+     A [`TrainerCallback`] that logs to [DagsHub](https://dagshub.com/). Extends [`MLflowCallback`].
1117
+ """
1118
+
1119
+ def __init__(self):
1120
+ super().__init__()
1121
+ if not is_dagshub_available():
1122
+ raise ImportError("DagsHubCallback requires dagshub to be installed. Run `pip install dagshub`.")
1123
+
1124
+ from dagshub.upload import Repo
1125
+
1126
+ self.Repo = Repo
1127
+
1128
+ def setup(self, *args, **kwargs):
1129
+ """
1130
+ Setup the DagsHub's Logging integration.
1131
+
1132
+ Environment:
1133
+ - **HF_DAGSHUB_LOG_ARTIFACTS** (`str`, *optional*):
1134
+                 Whether to save the data and model artifacts for the experiment. Defaults to `False`.
1135
+ """
1136
+
1137
+ self.log_artifacts = os.getenv("HF_DAGSHUB_LOG_ARTIFACTS", "FALSE").upper() in ENV_VARS_TRUE_VALUES
1138
+ self.name = os.getenv("HF_DAGSHUB_MODEL_NAME") or "main"
1139
+         self.remote = os.getenv("MLFLOW_TRACKING_URI")
1140
+         if self.remote is None:
1141
+             raise RuntimeError(
1142
+                 "DagsHubCallback requires the `MLFLOW_TRACKING_URI` environment variable to be set. Did you run"
1143
+                 " `dagshub.init()`?"
1144
+             )
1145
+
1146
+         self.repo = self.Repo(
1147
+             owner=self.remote.split(os.sep)[-2],
1148
+             name=self.remote.split(os.sep)[-1].split(".")[0],
1149
+             branch=os.getenv("BRANCH") or "main",
1150
+         )
1151
+         self.path = Path("artifacts")
1152
+
1153
+ super().setup(*args, **kwargs)
1154
+
1155
+ def on_train_end(self, args, state, control, **kwargs):
1156
+ if self.log_artifacts:
1157
+ if getattr(self, "train_dataloader", None):
1158
+ torch.save(self.train_dataloader.dataset, os.path.join(args.output_dir, "dataset.pt"))
1159
+
1160
+ self.repo.directory(str(self.path)).add_dir(args.output_dir)
1161
+
1162
+
1163
+ class NeptuneMissingConfiguration(Exception):
1164
+ def __init__(self):
1165
+ super().__init__(
1166
+ """
1167
+ ------ Unsupported ---- We were not able to create new runs. You provided a custom Neptune run to
1168
+ `NeptuneCallback` with the `run` argument. For the integration to work fully, provide your `api_token` and
1169
+ `project` by saving them as environment variables or passing them to the callback.
1170
+ """
1171
+ )
1172
+
1173
+
1174
+ class NeptuneCallback(TrainerCallback):
1175
+ """TrainerCallback that sends the logs to [Neptune](https://app.neptune.ai).
1176
+
1177
+ Args:
1178
+ api_token (`str`, *optional*): Neptune API token obtained upon registration.
1179
+ You can leave this argument out if you have saved your token to the `NEPTUNE_API_TOKEN` environment
1180
+ variable (strongly recommended). See full setup instructions in the
1181
+ [docs](https://docs.neptune.ai/setup/installation).
1182
+ project (`str`, *optional*): Name of an existing Neptune project, in the form "workspace-name/project-name".
1183
+ You can find and copy the name in Neptune from the project settings -> Properties. If None (default), the
1184
+ value of the `NEPTUNE_PROJECT` environment variable is used.
1185
+ name (`str`, *optional*): Custom name for the run.
1186
+ base_namespace (`str`, optional, defaults to "finetuning"): In the Neptune run, the root namespace
1187
+ that will contain all of the metadata logged by the callback.
1188
+ log_parameters (`bool`, *optional*, defaults to `True`):
1189
+ If True, logs all Trainer arguments and model parameters provided by the Trainer.
1190
+ log_checkpoints (`str`, *optional*): If "same", uploads checkpoints whenever they are saved by the Trainer.
1191
+ If "last", uploads only the most recently saved checkpoint. If "best", uploads the best checkpoint (among
1192
+ the ones saved by the Trainer). If `None`, does not upload checkpoints.
1193
+ run (`Run`, *optional*): Pass a Neptune run object if you want to continue logging to an existing run.
1194
+ Read more about resuming runs in the [docs](https://docs.neptune.ai/logging/to_existing_object).
1195
+ **neptune_run_kwargs (*optional*):
1196
+ Additional keyword arguments to be passed directly to the
1197
+ [`neptune.init_run()`](https://docs.neptune.ai/api/neptune#init_run) function when a new run is created.
1198
+
1199
+ For instructions and examples, see the [Transformers integration
1200
+ guide](https://docs.neptune.ai/integrations/transformers) in the Neptune documentation.
1201
+ """
1202
+
1203
+ integration_version_key = "source_code/integrations/transformers"
1204
+ model_parameters_key = "model_parameters"
1205
+ trial_name_key = "trial"
1206
+ trial_params_key = "trial_params"
1207
+ trainer_parameters_key = "trainer_parameters"
1208
+ flat_metrics = {"train/epoch"}
1209
+
1210
+ def __init__(
1211
+ self,
1212
+ *,
1213
+ api_token: Optional[str] = None,
1214
+ project: Optional[str] = None,
1215
+ name: Optional[str] = None,
1216
+ base_namespace: str = "finetuning",
1217
+ run=None,
1218
+ log_parameters: bool = True,
1219
+ log_checkpoints: Optional[str] = None,
1220
+ **neptune_run_kwargs,
1221
+ ):
1222
+ if not is_neptune_available():
1223
+ raise ValueError(
1224
+ "NeptuneCallback requires the Neptune client library to be installed. "
1225
+ "To install the library, run `pip install neptune`."
1226
+ )
1227
+
1228
+ try:
1229
+ from neptune import Run
1230
+ from neptune.internal.utils import verify_type
1231
+ except ImportError:
1232
+ from neptune.new.internal.utils import verify_type
1233
+ from neptune.new.metadata_containers.run import Run
1234
+
1235
+ verify_type("api_token", api_token, (str, type(None)))
1236
+ verify_type("project", project, (str, type(None)))
1237
+ verify_type("name", name, (str, type(None)))
1238
+ verify_type("base_namespace", base_namespace, str)
1239
+ verify_type("run", run, (Run, type(None)))
1240
+ verify_type("log_parameters", log_parameters, bool)
1241
+ verify_type("log_checkpoints", log_checkpoints, (str, type(None)))
1242
+
1243
+ self._base_namespace_path = base_namespace
1244
+ self._log_parameters = log_parameters
1245
+ self._log_checkpoints = log_checkpoints
1246
+ self._initial_run: Optional[Run] = run
1247
+
1248
+ self._run = None
1249
+ self._is_monitoring_run = False
1250
+ self._run_id = None
1251
+ self._force_reset_monitoring_run = False
1252
+ self._init_run_kwargs = {"api_token": api_token, "project": project, "name": name, **neptune_run_kwargs}
1253
+
1254
+ self._volatile_checkpoints_dir = None
1255
+ self._should_upload_checkpoint = self._log_checkpoints is not None
1256
+ self._recent_checkpoint_path = None
1257
+
1258
+ if self._log_checkpoints in {"last", "best"}:
1259
+ self._target_checkpoints_namespace = f"checkpoints/{self._log_checkpoints}"
1260
+ self._should_clean_recently_uploaded_checkpoint = True
1261
+ else:
1262
+ self._target_checkpoints_namespace = "checkpoints"
1263
+ self._should_clean_recently_uploaded_checkpoint = False
1264
+
1265
+ def _stop_run_if_exists(self):
1266
+ if self._run:
1267
+ self._run.stop()
1268
+ del self._run
1269
+ self._run = None
1270
+
1271
+ def _initialize_run(self, **additional_neptune_kwargs):
1272
+ try:
1273
+ from neptune import init_run
1274
+ from neptune.exceptions import NeptuneMissingApiTokenException, NeptuneMissingProjectNameException
1275
+ except ImportError:
1276
+ from neptune.new import init_run
1277
+ from neptune.new.exceptions import NeptuneMissingApiTokenException, NeptuneMissingProjectNameException
1278
+
1279
+ self._stop_run_if_exists()
1280
+
1281
+ try:
1282
+ run_params = additional_neptune_kwargs.copy()
1283
+ run_params.update(self._init_run_kwargs)
1284
+ self._run = init_run(**run_params)
1285
+ self._run_id = self._run["sys/id"].fetch()
1286
+ except (NeptuneMissingProjectNameException, NeptuneMissingApiTokenException) as e:
1287
+ raise NeptuneMissingConfiguration() from e
1288
+
1289
+ def _use_initial_run(self):
1290
+ self._run = self._initial_run
1291
+ self._is_monitoring_run = True
1292
+ self._run_id = self._run["sys/id"].fetch()
1293
+ self._initial_run = None
1294
+
1295
+ def _ensure_run_with_monitoring(self):
1296
+ if self._initial_run is not None:
1297
+ self._use_initial_run()
1298
+ else:
1299
+ if not self._force_reset_monitoring_run and self._is_monitoring_run:
1300
+ return
1301
+
1302
+ if self._run and not self._is_monitoring_run and not self._force_reset_monitoring_run:
1303
+ self._initialize_run(with_id=self._run_id)
1304
+ self._is_monitoring_run = True
1305
+ else:
1306
+ self._initialize_run()
1307
+ self._force_reset_monitoring_run = False
1308
+
1309
+ def _ensure_at_least_run_without_monitoring(self):
1310
+ if self._initial_run is not None:
1311
+ self._use_initial_run()
1312
+ else:
1313
+ if not self._run:
1314
+ self._initialize_run(
1315
+ with_id=self._run_id,
1316
+ capture_stdout=False,
1317
+ capture_stderr=False,
1318
+ capture_hardware_metrics=False,
1319
+ capture_traceback=False,
1320
+ )
1321
+ self._is_monitoring_run = False
1322
+
1323
+ @property
1324
+ def run(self):
1325
+ if self._run is None:
1326
+ self._ensure_at_least_run_without_monitoring()
1327
+ return self._run
1328
+
1329
+ @property
1330
+ def _metadata_namespace(self):
1331
+ return self.run[self._base_namespace_path]
1332
+
1333
+ def _log_integration_version(self):
1334
+ self.run[NeptuneCallback.integration_version_key] = version
1335
+
1336
+ def _log_trainer_parameters(self, args):
1337
+ self._metadata_namespace[NeptuneCallback.trainer_parameters_key] = args.to_sanitized_dict()
1338
+
1339
+ def _log_model_parameters(self, model):
1340
+ from neptune.utils import stringify_unsupported
1341
+
1342
+ if model and hasattr(model, "config") and model.config is not None:
1343
+ self._metadata_namespace[NeptuneCallback.model_parameters_key] = stringify_unsupported(
1344
+ model.config.to_dict()
1345
+ )
1346
+
1347
+ def _log_hyper_param_search_parameters(self, state):
1348
+ if state and hasattr(state, "trial_name"):
1349
+ self._metadata_namespace[NeptuneCallback.trial_name_key] = state.trial_name
1350
+
1351
+ if state and hasattr(state, "trial_params") and state.trial_params is not None:
1352
+ self._metadata_namespace[NeptuneCallback.trial_params_key] = state.trial_params
1353
+
1354
+ def _log_model_checkpoint(self, source_directory: str, checkpoint: str):
1355
+ target_path = relative_path = os.path.join(source_directory, checkpoint)
1356
+
1357
+ if self._volatile_checkpoints_dir is not None:
1358
+ consistent_checkpoint_path = os.path.join(self._volatile_checkpoints_dir, checkpoint)
1359
+ try:
1360
+ # Remove leading ../ from a relative path.
1361
+ cpkt_path = relative_path.replace("..", "").lstrip(os.path.sep)
1362
+ copy_path = os.path.join(consistent_checkpoint_path, cpkt_path)
1363
+ shutil.copytree(relative_path, copy_path)
1364
+ target_path = consistent_checkpoint_path
1365
+ except IOError as e:
1366
+ logger.warning(
1367
+                     "NeptuneCallback was unable to make a copy of the checkpoint due to an I/O exception: '{}'. "
1367
+                     "The checkpoint upload might fail.".format(e)
1369
+ )
1370
+
1371
+ self._metadata_namespace[self._target_checkpoints_namespace].upload_files(target_path)
1372
+
1373
+ if self._should_clean_recently_uploaded_checkpoint and self._recent_checkpoint_path is not None:
1374
+ self._metadata_namespace[self._target_checkpoints_namespace].delete_files(self._recent_checkpoint_path)
1375
+
1376
+ self._recent_checkpoint_path = relative_path
1377
+
1378
+ def on_init_end(self, args, state, control, **kwargs):
1379
+ self._volatile_checkpoints_dir = None
1380
+ if self._log_checkpoints and (args.overwrite_output_dir or args.save_total_limit is not None):
1381
+ self._volatile_checkpoints_dir = tempfile.TemporaryDirectory().name
1382
+
1383
+ if self._log_checkpoints == "best" and not args.load_best_model_at_end:
1384
+ raise ValueError("To save the best model checkpoint, the load_best_model_at_end argument must be enabled.")
1385
+
1386
+ def on_train_begin(self, args, state, control, model=None, **kwargs):
1387
+ if not state.is_world_process_zero:
1388
+ return
1389
+
1390
+ self._ensure_run_with_monitoring()
1391
+ self._force_reset_monitoring_run = True
1392
+
1393
+ self._log_integration_version()
1394
+ if self._log_parameters:
1395
+ self._log_trainer_parameters(args)
1396
+ self._log_model_parameters(model)
1397
+
1398
+ if state.is_hyper_param_search:
1399
+ self._log_hyper_param_search_parameters(state)
1400
+
1401
+ def on_train_end(self, args, state, control, **kwargs):
1402
+ self._stop_run_if_exists()
1403
+
1404
+ def __del__(self):
1405
+ if self._volatile_checkpoints_dir is not None:
1406
+ shutil.rmtree(self._volatile_checkpoints_dir, ignore_errors=True)
1407
+
1408
+ self._stop_run_if_exists()
1409
+
1410
+ def on_save(self, args, state, control, **kwargs):
1411
+ if self._should_upload_checkpoint:
1412
+ self._log_model_checkpoint(args.output_dir, f"checkpoint-{state.global_step}")
1413
+
1414
+ def on_evaluate(self, args, state, control, metrics=None, **kwargs):
1415
+ if self._log_checkpoints == "best":
1416
+ best_metric_name = args.metric_for_best_model
1417
+ if not best_metric_name.startswith("eval_"):
1418
+ best_metric_name = f"eval_{best_metric_name}"
1419
+
1420
+ metric_value = metrics.get(best_metric_name)
1421
+
1422
+ operator = np.greater if args.greater_is_better else np.less
1423
+
1424
+ self._should_upload_checkpoint = state.best_metric is None or operator(metric_value, state.best_metric)
1425
+
1426
+ @classmethod
1427
+ def get_run(cls, trainer):
1428
+ for callback in trainer.callback_handler.callbacks:
1429
+ if isinstance(callback, cls):
1430
+ return callback.run
1431
+
1432
+ raise Exception("The trainer doesn't have a NeptuneCallback configured.")
1433
+
1434
+ def on_log(self, args, state, control, logs: Optional[Dict[str, float]] = None, **kwargs):
1435
+ if not state.is_world_process_zero:
1436
+ return
1437
+
1438
+ if logs is not None:
1439
+ for name, value in rewrite_logs(logs).items():
1440
+ if isinstance(value, (int, float)):
1441
+ if name in NeptuneCallback.flat_metrics:
1442
+ self._metadata_namespace[name] = value
1443
+ else:
1444
+ self._metadata_namespace[name].log(value, step=state.global_step)
1445
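`NeptuneCallback` exposes its options through its constructor rather than only through environment variables; a hedged sketch, assuming `model` and `args` were built elsewhere and that credentials come from `NEPTUNE_API_TOKEN`:

```python
# Hypothetical driver code -- not part of this module.
from transformers import Trainer

neptune_callback = NeptuneCallback(
    project="my-workspace/my-project",  # placeholder; the NEPTUNE_PROJECT env var works too
    base_namespace="finetuning",
    log_checkpoints="best",             # requires load_best_model_at_end=True, per on_init_end above
)
trainer = Trainer(model=model, args=args, callbacks=[neptune_callback])
trainer.train()
run = NeptuneCallback.get_run(trainer)  # access the underlying Neptune run for custom logging
```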
+
1446
+
1447
+ class CodeCarbonCallback(TrainerCallback):
1448
+ """
1449
+ A [`TrainerCallback`] that tracks the CO2 emission of training.
1450
+ """
1451
+
1452
+ def __init__(self):
1453
+ if not is_codecarbon_available():
1454
+ raise RuntimeError(
1455
+ "CodeCarbonCallback requires `codecarbon` to be installed. Run `pip install codecarbon`."
1456
+ )
1457
+ import codecarbon
1458
+
1459
+ self._codecarbon = codecarbon
1460
+ self.tracker = None
1461
+
1462
+ def on_init_end(self, args, state, control, **kwargs):
1463
+ if self.tracker is None and state.is_local_process_zero:
1464
+ # CodeCarbon will automatically handle environment variables for configuration
1465
+ self.tracker = self._codecarbon.EmissionsTracker(output_dir=args.output_dir)
1466
+
1467
+ def on_train_begin(self, args, state, control, model=None, **kwargs):
1468
+ if self.tracker and state.is_local_process_zero:
1469
+ self.tracker.start()
1470
+
1471
+ def on_train_end(self, args, state, control, **kwargs):
1472
+ if self.tracker and state.is_local_process_zero:
1473
+ self.tracker.stop()
1474
+
1475
+
1476
+ class ClearMLCallback(TrainerCallback):
1477
+ """
1478
+ A [`TrainerCallback`] that sends the logs to [ClearML](https://clear.ml/).
1479
+
1480
+ Environment:
1481
+ - **CLEARML_PROJECT** (`str`, *optional*, defaults to `HuggingFace Transformers`):
1482
+ ClearML project name.
1483
+ - **CLEARML_TASK** (`str`, *optional*, defaults to `Trainer`):
1484
+ ClearML task name.
1485
+ - **CLEARML_LOG_MODEL** (`bool`, *optional*, defaults to `False`):
1486
+ Whether to log models as artifacts during training.
1487
+ """
1488
+
1489
+ log_suffix = ""
1490
+
1491
+ _hparams_section = "Transformers"
1492
+ _model_config_section = "Model Configuration"
1493
+ _ignore_hparams_overrides = "_ignore_hparams_ui_overrides_"
1494
+     _ignore_model_config_overrides = "_ignore_model_config_ui_overrides_"
1495
+ _model_config_description = "The configuration of model number {}."
1496
+ _model_config_description_note = (
1497
+ "Note that, when cloning this task and running it remotely,"
1498
+ " the configuration might be applied to another model instead of this one."
1499
+ " To avoid this, initialize the task externally by calling `Task.init`"
1500
+ " before the `ClearMLCallback` is instantiated."
1501
+ )
1502
+ _train_run_counter = 0
1503
+ _model_connect_counter = 0
1504
+ _task_created_in_callback = False
1505
+ _should_close_on_train_end = None
1506
+
1507
+ def __init__(self):
1508
+ if is_clearml_available():
1509
+ import clearml
1510
+
1511
+ self._clearml = clearml
1512
+ else:
1513
+ raise RuntimeError("ClearMLCallback requires 'clearml' to be installed. Run `pip install clearml`.")
1514
+
1515
+ self._initialized = False
1516
+ self._clearml_task = None
1517
+
1518
+ self._log_model = False
1519
+ self._checkpoints_saved = []
1520
+
1521
+ def setup(self, args, state, model, tokenizer, **kwargs):
1522
+ if self._clearml is None:
1523
+ return
1524
+ if self._initialized:
1525
+ return
1526
+ ClearMLCallback._train_run_counter += 1
1527
+ ClearMLCallback._model_connect_counter += 1
1528
+ ClearMLCallback.log_suffix = (
1529
+ "" if ClearMLCallback._train_run_counter == 1 else "_" + str(ClearMLCallback._train_run_counter)
1530
+ )
1531
+ if state.is_world_process_zero:
1532
+ logger.info("Automatic ClearML logging enabled.")
1533
+ if self._clearml_task is None:
1534
+ if ClearMLCallback._should_close_on_train_end is None:
1535
+ if not self._clearml.Task.running_locally() or self._clearml.Task.current_task():
1536
+ ClearMLCallback._should_close_on_train_end = False
1537
+ else:
1538
+ ClearMLCallback._should_close_on_train_end = True
1539
+
1540
+ # This might happen when running inside of a pipeline, where the task is already initialized
1541
+ # from outside of Hugging Face
1542
+ if self._clearml.Task.running_locally() and self._clearml.Task.current_task():
1543
+ self._clearml_task = self._clearml.Task.current_task()
1544
+ self._log_model = os.getenv(
1545
+ "CLEARML_LOG_MODEL",
1546
+ "FALSE" if not ClearMLCallback._task_created_in_callback else "TRUE",
1547
+ ).upper() in ENV_VARS_TRUE_VALUES.union({"TRUE"})
1548
+ logger.info("External ClearML Task has been connected.")
1549
+ else:
1550
+ self._clearml_task = self._clearml.Task.init(
1551
+ project_name=os.getenv("CLEARML_PROJECT", "HuggingFace Transformers"),
1552
+ task_name=os.getenv("CLEARML_TASK", "Trainer"),
1553
+ auto_connect_frameworks={"tensorboard": False, "pytorch": False},
1554
+ output_uri=True,
1555
+ )
1556
+ self._log_model = os.getenv("CLEARML_LOG_MODEL", "TRUE").upper() in ENV_VARS_TRUE_VALUES.union(
1557
+ {"TRUE"}
1558
+ )
1559
+ ClearMLCallback._task_created_in_callback = True
1560
+ logger.info("ClearML Task has been initialized.")
1561
+ self._initialized = True
1562
+
1563
+ suffixed_hparams_section = ClearMLCallback._hparams_section + ClearMLCallback.log_suffix
1564
+ ignore_hparams_config_section = suffixed_hparams_section + "/" + ClearMLCallback._ignore_hparams_overrides
1565
+ if self._clearml.Task.running_locally():
1566
+ self._copy_training_args_as_hparams(args, suffixed_hparams_section)
1567
+ self._clearml_task.set_parameter(
1568
+ name=ignore_hparams_config_section,
1569
+ value=True,
1570
+ value_type=bool,
1571
+ description=(
1572
+ "If True, ignore Transformers hyperparameters overrides done in the UI/backend "
1573
+ + "when running remotely. Otherwise, the overrides will be applied when running remotely"
1574
+ ),
1575
+ )
1576
+ elif not self._clearml_task.get_parameter(ignore_hparams_config_section, default=True, cast=True):
1577
+ self._clearml_task.connect(args, suffixed_hparams_section)
1578
+ else:
1579
+ self._copy_training_args_as_hparams(
1580
+ args, ClearMLCallback._hparams_section + ClearMLCallback.log_suffix
1581
+ )
1582
+
1583
+ if getattr(model, "config", None) is not None:
1584
+ ignore_model_config_section = (
1585
+                     suffixed_hparams_section + "/" + ClearMLCallback._ignore_model_config_overrides
1586
+ )
1587
+ configuration_object_description = ClearMLCallback._model_config_description.format(
1588
+ ClearMLCallback._model_connect_counter
1589
+ )
1590
+ if ClearMLCallback._model_connect_counter != ClearMLCallback._train_run_counter:
1591
+ configuration_object_description += " " + ClearMLCallback._model_config_description_note
1592
+ if self._clearml.Task.running_locally():
1593
+ self._clearml_task.set_parameter(
1594
+ name=ignore_model_config_section,
1595
+ value=True,
1596
+ value_type=bool,
1597
+ description=(
1598
+ "If True, ignore Transformers model configuration overrides done in the UI/backend "
1599
+ + "when running remotely. Otherwise, the overrides will be applied when running remotely"
1600
+ ),
1601
+ )
1602
+ self._clearml_task.set_configuration_object(
1603
+ name=ClearMLCallback._model_config_section + ClearMLCallback.log_suffix,
1604
+ config_dict=model.config.to_dict(),
1605
+ description=configuration_object_description,
1606
+ )
1607
+ elif not self._clearml_task.get_parameter(ignore_model_config_section, default=True, cast=True):
1608
+ model.config = model.config.from_dict(
1609
+ self._clearml_task.get_configuration_object_as_dict(
1610
+ ClearMLCallback._model_config_section + ClearMLCallback.log_suffix
1611
+ )
1612
+ )
1613
+ else:
1614
+ self._clearml_task.set_configuration_object(
1615
+ name=ClearMLCallback._model_config_section + ClearMLCallback.log_suffix,
1616
+ config_dict=model.config.to_dict(),
1617
+ description=configuration_object_description,
1618
+ )
1619
+
1620
+ def on_train_begin(self, args, state, control, model=None, tokenizer=None, **kwargs):
1621
+ if self._clearml is None:
1622
+ return
1623
+ self._checkpoints_saved = []
1624
+ if state.is_hyper_param_search:
1625
+ self._initialized = False
1626
+ if not self._initialized:
1627
+ self.setup(args, state, model, tokenizer, **kwargs)
1628
+
1629
+ def on_train_end(self, args, state, control, **kwargs):
1630
+ if ClearMLCallback._should_close_on_train_end:
1631
+ self._clearml_task.close()
1632
+ ClearMLCallback._train_run_counter = 0
1633
+
1634
+ def on_log(self, args, state, control, model=None, tokenizer=None, logs=None, **kwargs):
1635
+ if self._clearml is None:
1636
+ return
1637
+ if not self._initialized:
1638
+ self.setup(args, state, model, tokenizer, **kwargs)
1639
+ if state.is_world_process_zero:
1640
+ eval_prefix = "eval_"
1641
+ eval_prefix_len = len(eval_prefix)
1642
+ test_prefix = "test_"
1643
+ test_prefix_len = len(test_prefix)
1644
+ single_value_scalars = [
1645
+ "train_runtime",
1646
+ "train_samples_per_second",
1647
+ "train_steps_per_second",
1648
+ "train_loss",
1649
+ "total_flos",
1650
+ "epoch",
1651
+ ]
1652
+ for k, v in logs.items():
1653
+ if isinstance(v, (int, float)):
1654
+ if k in single_value_scalars:
1655
+ self._clearml_task.get_logger().report_single_value(
1656
+ name=k + ClearMLCallback.log_suffix, value=v
1657
+ )
1658
+ elif k.startswith(eval_prefix):
1659
+ self._clearml_task.get_logger().report_scalar(
1660
+ title="eval" + ClearMLCallback.log_suffix,
1661
+ series=k[eval_prefix_len:],
1662
+ value=v,
1663
+ iteration=state.global_step,
1664
+ )
1665
+ elif k.startswith(test_prefix):
1666
+ self._clearml_task.get_logger().report_scalar(
1667
+ title="test" + ClearMLCallback.log_suffix,
1668
+ series=k[test_prefix_len:],
1669
+ value=v,
1670
+ iteration=state.global_step,
1671
+ )
1672
+ else:
1673
+ self._clearml_task.get_logger().report_scalar(
1674
+ title="train" + ClearMLCallback.log_suffix,
1675
+ series=k,
1676
+ value=v,
1677
+ iteration=state.global_step,
1678
+ )
1679
+ else:
1680
+ logger.warning(
1681
+ "Trainer is attempting to log a value of "
1682
+ f'"{v}" of type {type(v)} for key "{k}" as a scalar. '
1683
+ "This invocation of ClearML logger's report_scalar() "
1684
+ "is incorrect so we dropped this attribute."
1685
+ )
1686
+
1687
+ def on_save(self, args, state, control, **kwargs):
1688
+ if self._log_model and self._clearml_task and state.is_world_process_zero:
1689
+ ckpt_dir = f"checkpoint-{state.global_step}"
1690
+ artifact_path = os.path.join(args.output_dir, ckpt_dir)
1691
+ name = ckpt_dir + ClearMLCallback.log_suffix
1692
+ logger.info(f"Logging checkpoint artifact `{name}`. This may take some time.")
1693
+ output_model = self._clearml.OutputModel(task=self._clearml_task, name=name)
1694
+ output_model.connect(task=self._clearml_task, name=name)
1695
+ output_model.update_weights_package(
1696
+ weights_path=artifact_path,
1697
+ target_filename=ckpt_dir,
1698
+ iteration=state.global_step,
1699
+ auto_delete_file=False,
1700
+ )
1701
+ self._checkpoints_saved.append(output_model)
1702
+ while args.save_total_limit and args.save_total_limit < len(self._checkpoints_saved):
1703
+ try:
1704
+ self._clearml.model.Model.remove(
1705
+ self._checkpoints_saved[0],
1706
+ delete_weights_file=True,
1707
+ force=True,
1708
+ raise_on_errors=True,
1709
+ )
1710
+ except Exception as e:
1711
+ logger.warning(
1712
+ "Could not remove checkpoint `{}` after going over the `save_total_limit`. Error is: {}".format(
1713
+ self._checkpoints_saved[0].name, e
1714
+ )
1715
+ )
1716
+ break
1717
+ self._checkpoints_saved = self._checkpoints_saved[1:]
1718
+
1719
+ def _copy_training_args_as_hparams(self, training_args, prefix):
1720
+ as_dict = {
1721
+ field.name: getattr(training_args, field.name)
1722
+ for field in fields(training_args)
1723
+ if field.init and not field.name.endswith("_token")
1724
+ }
1725
+ flat_dict = {str(k): v for k, v in self._clearml.utilities.proxy_object.flatten_dictionary(as_dict).items()}
1726
+ self._clearml_task._arguments.copy_from_dict(flat_dict, prefix=prefix)
1727
+
1728
+
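A minimal usage sketch (not part of the file above) of driving `ClearMLCallback.setup` through the environment variables it reads; the project and task names below are placeholders:

import os

# Names used when the callback creates the ClearML Task itself.
os.environ["CLEARML_PROJECT"] = "my-project"      # defaults to "HuggingFace Transformers"
os.environ["CLEARML_TASK"] = "my-experiment"      # defaults to "Trainer"
# Also upload checkpoints saved by the Trainer as ClearML OutputModel packages.
os.environ["CLEARML_LOG_MODEL"] = "TRUE"
# trainer = Trainer(..., args=TrainingArguments(output_dir="out", report_to=["clearml"]))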
1729
+ class FlyteCallback(TrainerCallback):
1730
+ """A [`TrainerCallback`] that sends the logs to [Flyte](https://flyte.org/).
1731
+ NOTE: This callback only works within a Flyte task.
1732
+
1733
+ Args:
1734
+ save_log_history (`bool`, *optional*, defaults to `True`):
1735
+ When set to True, the training logs are saved as a Flyte Deck.
1736
+
1737
+ sync_checkpoints (`bool`, *optional*, defaults to `True`):
1738
+ When set to True, checkpoints are synced with Flyte and can be used to resume training in the case of an
1739
+ interruption.
1740
+
1741
+ Example:
1742
+
1743
+ ```python
1744
+ # Note: This example skips over some setup steps for brevity.
1745
+ from flytekit import current_context, task
1746
+
1747
+
1748
+ @task
1749
+ def train_hf_transformer():
1750
+ cp = current_context().checkpoint
1751
+ trainer = Trainer(..., callbacks=[FlyteCallback()])
1752
+ output = trainer.train(resume_from_checkpoint=cp.restore())
1753
+ ```
1754
+ """
1755
+
1756
+ def __init__(self, save_log_history: bool = True, sync_checkpoints: bool = True):
1757
+ super().__init__()
1758
+ if not is_flytekit_available():
1759
+ raise ImportError("FlyteCallback requires flytekit to be installed. Run `pip install flytekit`.")
1760
+
1761
+ if not is_flyte_deck_standard_available() or not is_pandas_available():
1762
+ logger.warning(
1763
+ "Syncing log history requires both flytekitplugins-deck-standard and pandas to be installed. "
1764
+ "Run `pip install flytekitplugins-deck-standard pandas` to enable this feature."
1765
+ )
1766
+ save_log_history = False
1767
+
1768
+ from flytekit import current_context
1769
+
1770
+ self.cp = current_context().checkpoint
1771
+ self.save_log_history = save_log_history
1772
+ self.sync_checkpoints = sync_checkpoints
1773
+
1774
+ def on_save(self, args, state, control, **kwargs):
1775
+ if self.sync_checkpoints and state.is_world_process_zero:
1776
+ ckpt_dir = f"checkpoint-{state.global_step}"
1777
+ artifact_path = os.path.join(args.output_dir, ckpt_dir)
1778
+
1779
+ logger.info(f"Syncing checkpoint in {ckpt_dir} to Flyte. This may take time.")
1780
+ self.cp.save(artifact_path)
1781
+
1782
+ def on_train_end(self, args, state, control, **kwargs):
1783
+ if self.save_log_history:
1784
+ import pandas as pd
1785
+ from flytekit import Deck
1786
+ from flytekitplugins.deck.renderer import TableRenderer
1787
+
1788
+ log_history_df = pd.DataFrame(state.log_history)
1789
+ Deck("Log History", TableRenderer().to_html(log_history_df))
1790
+
1791
+
1792
+ class DVCLiveCallback(TrainerCallback):
1793
+ """
1794
+ A [`TrainerCallback`] that sends the logs to [DVCLive](https://www.dvc.org/doc/dvclive).
1795
+
1796
+ Use the environment variables below in `setup` to configure the integration. To customize this callback beyond
1797
+ those environment variables, see [here](https://dvc.org/doc/dvclive/ml-frameworks/huggingface).
1798
+
1799
+ Args:
1800
+ live (`dvclive.Live`, *optional*, defaults to `None`):
1801
+ Optional Live instance. If None, a new instance will be created using **kwargs.
1802
+ log_model (Union[Literal["all"], bool], *optional*, defaults to `None`):
1803
+ Whether to use `dvclive.Live.log_artifact()` to log checkpoints created by [`Trainer`]. If set to `True`,
1804
+ the final checkpoint is logged at the end of training. If set to `"all"`, the entire
1805
+ [`TrainingArguments`]'s `output_dir` is logged at each checkpoint.
1806
+ """
1807
+
1808
+ def __init__(
1809
+ self,
1810
+ live: Optional[Any] = None,
1811
+ log_model: Optional[Union[Literal["all"], bool]] = None,
1812
+ **kwargs,
1813
+ ):
1814
+ if not is_dvclive_available():
1815
+ raise RuntimeError("DVCLiveCallback requires dvclive to be installed. Run `pip install dvclive`.")
1816
+ from dvclive import Live
1817
+
1818
+ self._initialized = False
1819
+ self.live = None
1820
+ if isinstance(live, Live):
1821
+ self.live = live
1822
+ elif live is not None:
1823
+ raise RuntimeError(f"Found class {live.__class__} for live, expected dvclive.Live")
1824
+
1825
+ self._log_model = log_model
1826
+ if self._log_model is None:
1827
+ log_model_env = os.getenv("HF_DVCLIVE_LOG_MODEL", "FALSE")
1828
+ if log_model_env.upper() in ENV_VARS_TRUE_VALUES:
1829
+ self._log_model = True
1830
+ elif log_model_env.lower() == "all":
1831
+ self._log_model = "all"
1832
+
1833
+ def setup(self, args, state, model):
1834
+ """
1835
+ Set up the optional DVCLive integration. To customize this callback beyond the environment variables below, see
1836
+ [here](https://dvc.org/doc/dvclive/ml-frameworks/huggingface).
1837
+
1838
+ Environment:
1839
+ - **HF_DVCLIVE_LOG_MODEL** (`str`, *optional*):
1840
+ Whether to use `dvclive.Live.log_artifact()` to log checkpoints created by [`Trainer`]. If set to `True` or
1841
+ *1*, the final checkpoint is logged at the end of training. If set to `all`, the entire
1842
+ [`TrainingArguments`]'s `output_dir` is logged at each checkpoint.
1843
+ """
1844
+ from dvclive import Live
1845
+
1846
+ self._initialized = True
1847
+ if state.is_world_process_zero:
1848
+ if not self.live:
1849
+ self.live = Live()
1850
+ self.live.log_params(args.to_dict())
1851
+
1852
+ def on_train_begin(self, args, state, control, model=None, **kwargs):
1853
+ if not self._initialized:
1854
+ self.setup(args, state, model)
1855
+
1856
+ def on_log(self, args, state, control, model=None, logs=None, **kwargs):
1857
+ if not self._initialized:
1858
+ self.setup(args, state, model)
1859
+ if state.is_world_process_zero:
1860
+ from dvclive.plots import Metric
1861
+ from dvclive.utils import standardize_metric_name
1862
+
1863
+ for key, value in logs.items():
1864
+ if Metric.could_log(value):
1865
+ self.live.log_metric(standardize_metric_name(key, "dvclive.huggingface"), value)
1866
+ else:
1867
+ logger.warning(
1868
+ "Trainer is attempting to log a value of "
1869
+ f'"{value}" of type {type(value)} for key "{key}" as a scalar. '
1870
+ "This invocation of DVCLive's Live.log_metric() "
1871
+ "is incorrect so we dropped this attribute."
1872
+ )
1873
+ self.live.next_step()
1874
+
1875
+ def on_save(self, args, state, control, **kwargs):
1876
+ if self._log_model == "all" and self._initialized and state.is_world_process_zero:
1877
+ self.live.log_artifact(args.output_dir)
1878
+
1879
+ def on_train_end(self, args, state, control, **kwargs):
1880
+ if self._initialized and state.is_world_process_zero:
1881
+ from transformers.trainer import Trainer
1882
+
1883
+ if self._log_model is True:
1884
+ fake_trainer = Trainer(args=args, model=kwargs.get("model"), tokenizer=kwargs.get("tokenizer"))
1885
+ name = "best" if args.load_best_model_at_end else "last"
1886
+ output_dir = os.path.join(args.output_dir, name)
1887
+ fake_trainer.save_model(output_dir)
1888
+ self.live.log_artifact(output_dir, name=name, type="model", copy=True)
1889
+ self.live.end()
1890
+
1891
+
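A brief, illustrative sketch of the `HF_DVCLIVE_LOG_MODEL` behaviour documented in `DVCLiveCallback` above; the commented training arguments are placeholders:

import os

# "TRUE"/"1": log only the final (best or last) checkpoint at the end of training.
# "all": log the entire `output_dir` every time a checkpoint is saved.
os.environ["HF_DVCLIVE_LOG_MODEL"] = "all"
# trainer = Trainer(..., args=TrainingArguments(output_dir="out", report_to=["dvclive"]))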
1892
+ INTEGRATION_TO_CALLBACK = {
1893
+ "azure_ml": AzureMLCallback,
1894
+ "comet_ml": CometCallback,
1895
+ "mlflow": MLflowCallback,
1896
+ "neptune": NeptuneCallback,
1897
+ "tensorboard": TensorBoardCallback,
1898
+ "wandb": WandbCallback,
1899
+ "codecarbon": CodeCarbonCallback,
1900
+ "clearml": ClearMLCallback,
1901
+ "dagshub": DagsHubCallback,
1902
+ "flyte": FlyteCallback,
1903
+ "dvclive": DVCLiveCallback,
1904
+ }
1905
+
1906
+
1907
+ def get_reporting_integration_callbacks(report_to):
1908
+ for integration in report_to:
1909
+ if integration not in INTEGRATION_TO_CALLBACK:
1910
+ raise ValueError(
1911
+ f"{integration} is not supported, only {', '.join(INTEGRATION_TO_CALLBACK.keys())} are supported."
1912
+ )
1913
+
1914
+ return [INTEGRATION_TO_CALLBACK[integration] for integration in report_to]
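As an illustration of the helper above, `report_to` names are resolved to callback classes before the `Trainer` instantiates them; the chosen integrations are arbitrary examples:

from transformers.integrations import get_reporting_integration_callbacks

# Resolve the callback classes attached for a given `report_to` list.
callbacks = get_reporting_integration_callbacks(["tensorboard", "clearml"])
print(callbacks)  # [TensorBoardCallback, ClearMLCallback]
# Passing an unsupported name raises a ValueError listing the supported integrations.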
llmeval-env/lib/python3.10/site-packages/transformers/integrations/peft.py ADDED
@@ -0,0 +1,476 @@
1
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ import inspect
15
+ import warnings
16
+ from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
17
+
18
+ from ..utils import (
19
+ check_peft_version,
20
+ find_adapter_config_file,
21
+ is_accelerate_available,
22
+ is_peft_available,
23
+ is_torch_available,
24
+ logging,
25
+ )
26
+
27
+
28
+ if is_accelerate_available():
29
+ from accelerate import dispatch_model
30
+ from accelerate.utils import get_balanced_memory, infer_auto_device_map
31
+
32
+ # Minimum PEFT version supported for the integration
33
+ MIN_PEFT_VERSION = "0.5.0"
34
+
35
+ if TYPE_CHECKING:
36
+ if is_torch_available():
37
+ import torch
38
+
39
+
40
+ logger = logging.get_logger(__name__)
41
+
42
+
43
+ class PeftAdapterMixin:
44
+ """
45
+ A class containing all functions for loading and using adapter weights that are supported in the PEFT library. For
46
+ more details about adapters and injecting them on a transformer-based model, check out the documentation of PEFT
47
+ library: https://huggingface.co/docs/peft/index
48
+
49
+ Currently supported PEFT methods are all non-prefix tuning methods. Below is the list of supported PEFT methods
50
+ that anyone can load, train and run with this mixin class:
51
+ - Low Rank Adapters (LoRA): https://huggingface.co/docs/peft/conceptual_guides/lora
52
+ - IA3: https://huggingface.co/docs/peft/conceptual_guides/ia3
53
+ - AdaLora: https://arxiv.org/abs/2303.10512
54
+
55
+ Other PEFT methods such as prompt tuning or prompt learning are out of scope, as these adapters are not "injectable"
56
+ into a torch module. For using these methods, please refer to the usage guide of PEFT library.
57
+
58
+ With this mixin, if the correct PEFT version is installed, it is possible to:
59
+
60
+ - Load an adapter stored on a local path or in a remote Hub repository, and inject it in the model
61
+ - Attach new adapters in the model and train them with the Trainer or on your own.
62
+ - Attach multiple adapters and iteratively activate / deactivate them
63
+ - Activate / deactivate all adapters from the model.
64
+ - Get the `state_dict` of the active adapter.
65
+ """
66
+
67
+ _hf_peft_config_loaded = False
68
+
69
+ def load_adapter(
70
+ self,
71
+ peft_model_id: Optional[str] = None,
72
+ adapter_name: Optional[str] = None,
73
+ revision: Optional[str] = None,
74
+ token: Optional[str] = None,
75
+ device_map: Optional[str] = "auto",
76
+ max_memory: Optional[str] = None,
77
+ offload_folder: Optional[str] = None,
78
+ offload_index: Optional[int] = None,
79
+ peft_config: Dict[str, Any] = None,
80
+ adapter_state_dict: Optional[Dict[str, "torch.Tensor"]] = None,
81
+ adapter_kwargs: Optional[Dict[str, Any]] = None,
82
+ ) -> None:
83
+ """
84
+ Load adapter weights from file or remote Hub folder. If you are not familiar with adapters and PEFT methods, we
85
+ invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft
86
+
87
+ Requires peft as a backend to load the adapter weights.
88
+
89
+ Args:
90
+ peft_model_id (`str`, *optional*):
91
+ The identifier of the model to look for on the Hub, or a local path to the saved adapter config file
92
+ and adapter weights.
93
+ adapter_name (`str`, *optional*):
94
+ The adapter name to use. If not set, will use the default adapter.
95
+ revision (`str`, *optional*, defaults to `"main"`):
96
+ The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
97
+ git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
98
+ identifier allowed by git.
99
+
100
+ <Tip>
101
+
102
+ To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>"`.
103
+
104
+ </Tip>
105
+
106
+ token (`str`, `optional`):
107
+ The authentication token used to load the remote folder. Useful for loading private repositories
108
+ that are on HuggingFace Hub. You might need to call `huggingface-cli login` and paste your tokens to
109
+ cache it.
110
+ device_map (`str` or `Dict[str, Union[int, str, torch.device]]` or `int` or `torch.device`, *optional*):
111
+ A map that specifies where each submodule should go. It doesn't need to be refined to each
112
+ parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the
113
+ same device. If we only pass the device (*e.g.*, `"cpu"`, `"cuda:1"`, `"mps"`, or a GPU ordinal rank
114
+ like `1`) on which the model will be allocated, the device map will map the entire model to this
115
+ device. Passing `device_map = 0` means put the whole model on GPU 0.
116
+
117
+ To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For
118
+ more information about each option see [designing a device
119
+ map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
120
+ max_memory (`Dict`, *optional*):
121
+ A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available for each
122
+ GPU and the available CPU RAM if unset.
123
+ offload_folder (`str` or `os.PathLike`, `optional`):
124
+ If the `device_map` contains any value `"disk"`, the folder where we will offload weights.
125
+ offload_index (`int`, `optional`):
126
+ `offload_index` argument to be passed to `accelerate.dispatch_model` method.
127
+ peft_config (`Dict[str, Any]`, *optional*):
128
+ The configuration of the adapter to add, supported adapters are non-prefix tuning and adaption prompts
129
+ methods. This argument is used in case users directly pass PEFT state dicts
130
+ adapter_state_dict (`Dict[str, torch.Tensor]`, *optional*):
131
+ The state dict of the adapter to load. This argument is used in case users directly pass PEFT state
132
+ dicts
133
+ adapter_kwargs (`Dict[str, Any]`, *optional*):
134
+ Additional keyword arguments passed along to the `from_pretrained` method of the adapter config and
135
+ `find_adapter_config_file` method.
136
+ """
137
+ check_peft_version(min_version=MIN_PEFT_VERSION)
138
+
139
+ adapter_name = adapter_name if adapter_name is not None else "default"
140
+ if adapter_kwargs is None:
141
+ adapter_kwargs = {}
142
+
143
+ from peft import PeftConfig, inject_adapter_in_model, load_peft_weights
144
+ from peft.utils import set_peft_model_state_dict
145
+
146
+ if self._hf_peft_config_loaded and adapter_name in self.peft_config:
147
+ raise ValueError(f"Adapter with name {adapter_name} already exists. Please use a different name.")
148
+
149
+ if peft_model_id is None and (adapter_state_dict is None and peft_config is None):
150
+ raise ValueError(
151
+ "You should either pass a `peft_model_id` or a `peft_config` and `adapter_state_dict` to load an adapter."
152
+ )
153
+
154
+ # We keep `revision` in the signature for backward compatibility
155
+ if revision is not None and "revision" not in adapter_kwargs:
156
+ adapter_kwargs["revision"] = revision
157
+ elif revision is not None and "revision" in adapter_kwargs and revision != adapter_kwargs["revision"]:
158
+ logger.error(
159
+ "You passed a `revision` argument both in `adapter_kwargs` and as a standalone argument. "
160
+ "The one in `adapter_kwargs` will be used."
161
+ )
162
+
163
+ # Override token with adapter_kwargs' token
164
+ if "token" in adapter_kwargs:
165
+ token = adapter_kwargs.pop("token")
166
+
167
+ if peft_config is None:
168
+ adapter_config_file = find_adapter_config_file(
169
+ peft_model_id,
170
+ token=token,
171
+ **adapter_kwargs,
172
+ )
173
+
174
+ if adapter_config_file is None:
175
+ raise ValueError(
176
+ f"adapter model file not found in {peft_model_id}. Make sure you are passing the correct path to the "
177
+ "adapter model."
178
+ )
179
+
180
+ peft_config = PeftConfig.from_pretrained(
181
+ peft_model_id,
182
+ token=token,
183
+ **adapter_kwargs,
184
+ )
185
+
186
+ # Create and add fresh new adapters into the model.
187
+ inject_adapter_in_model(peft_config, self, adapter_name)
188
+
189
+ if not self._hf_peft_config_loaded:
190
+ self._hf_peft_config_loaded = True
191
+
192
+ if peft_model_id is not None:
193
+ adapter_state_dict = load_peft_weights(peft_model_id, token=token, **adapter_kwargs)
194
+
195
+ # We need to pre-process the state dict to remove unneeded prefixes - for backward compatibility
196
+ processed_adapter_state_dict = {}
197
+ prefix = "base_model.model."
198
+ for key, value in adapter_state_dict.items():
199
+ if key.startswith(prefix):
200
+ new_key = key[len(prefix) :]
201
+ else:
202
+ new_key = key
203
+ processed_adapter_state_dict[new_key] = value
204
+
205
+ # Load state dict
206
+ incompatible_keys = set_peft_model_state_dict(self, processed_adapter_state_dict, adapter_name)
207
+
208
+ if incompatible_keys is not None:
209
+ # check only for unexpected keys
210
+ if hasattr(incompatible_keys, "unexpected_keys") and len(incompatible_keys.unexpected_keys) > 0:
211
+ logger.warning(
212
+ f"Loading adapter weights from {peft_model_id} led to unexpected keys not found in the model: "
213
+ f" {incompatible_keys.unexpected_keys}. "
214
+ )
215
+
216
+ # Re-dispatch model and hooks in case the model is offloaded to CPU / Disk.
217
+ if (
218
+ (getattr(self, "hf_device_map", None) is not None)
219
+ and (len(set(self.hf_device_map.values()).intersection({"cpu", "disk"})) > 0)
220
+ and len(self.peft_config) == 1
221
+ ):
222
+ self._dispatch_accelerate_model(
223
+ device_map=device_map,
224
+ max_memory=max_memory,
225
+ offload_folder=offload_folder,
226
+ offload_index=offload_index,
227
+ )
228
+
229
+ def add_adapter(self, adapter_config, adapter_name: Optional[str] = None) -> None:
230
+ r"""
231
+ If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
232
+ official documentation: https://huggingface.co/docs/peft
233
+
234
+ Adds a fresh new adapter to the current model for training purposes. If no adapter name is passed, a default
235
+ name is assigned to the adapter to follow the convention of PEFT library (in PEFT we use "default" as the
236
+ default adapter name).
237
+
238
+ Args:
239
+ adapter_config (`~peft.PeftConfig`):
240
+ The configuration of the adapter to add, supported adapters are non-prefix tuning and adaption prompts
241
+ methods
242
+ adapter_name (`str`, *optional*, defaults to `"default"`):
243
+ The name of the adapter to add. If no name is passed, a default name is assigned to the adapter.
244
+ """
245
+ check_peft_version(min_version=MIN_PEFT_VERSION)
246
+
247
+ from peft import PeftConfig, inject_adapter_in_model
248
+
249
+ adapter_name = adapter_name or "default"
250
+
251
+ if not self._hf_peft_config_loaded:
252
+ self._hf_peft_config_loaded = True
253
+ elif adapter_name in self.peft_config:
254
+ raise ValueError(f"Adapter with name {adapter_name} already exists. Please use a different name.")
255
+
256
+ if not isinstance(adapter_config, PeftConfig):
257
+ raise ValueError(
258
+ f"adapter_config should be an instance of PeftConfig. Got {type(adapter_config)} instead."
259
+ )
260
+
261
+ # Retrieve the name or path of the model, one could also use self.config._name_or_path
262
+ # but to be consistent with what we do in PEFT: https://github.com/huggingface/peft/blob/6e783780ca9df3a623992cc4d1d665001232eae0/src/peft/mapping.py#L100
263
+ adapter_config.base_model_name_or_path = self.__dict__.get("name_or_path", None)
264
+ inject_adapter_in_model(adapter_config, self, adapter_name)
265
+
266
+ self.set_adapter(adapter_name)
267
+
268
+ def set_adapter(self, adapter_name: Union[List[str], str]) -> None:
269
+ """
270
+ If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
271
+ official documentation: https://huggingface.co/docs/peft
272
+
273
+ Sets a specific adapter by forcing the model to use that adapter and disabling the other adapters.
274
+
275
+ Args:
276
+ adapter_name (`Union[List[str], str]`):
277
+ The name of the adapter to set. Can be also a list of strings to set multiple adapters.
278
+ """
279
+ check_peft_version(min_version=MIN_PEFT_VERSION)
280
+ if not self._hf_peft_config_loaded:
281
+ raise ValueError("No adapter loaded. Please load an adapter first.")
282
+ elif isinstance(adapter_name, list):
283
+ missing = set(adapter_name) - set(self.peft_config)
284
+ if len(missing) > 0:
285
+ raise ValueError(
286
+ f"Following adapter(s) could not be found: {', '.join(missing)}. Make sure you are passing the correct adapter name(s)."
287
+ f" current loaded adapters are: {list(self.peft_config.keys())}"
288
+ )
289
+ elif adapter_name not in self.peft_config:
290
+ raise ValueError(
291
+ f"Adapter with name {adapter_name} not found. Please pass the correct adapter name among {list(self.peft_config.keys())}"
292
+ )
293
+
294
+ from peft.tuners.tuners_utils import BaseTunerLayer
295
+ from peft.utils import ModulesToSaveWrapper
296
+
297
+ _adapters_has_been_set = False
298
+
299
+ for _, module in self.named_modules():
300
+ if isinstance(module, (BaseTunerLayer, ModulesToSaveWrapper)):
301
+ # For backward compatibility with previous PEFT versions
302
+ if hasattr(module, "set_adapter"):
303
+ module.set_adapter(adapter_name)
304
+ else:
305
+ module.active_adapter = adapter_name
306
+ _adapters_has_been_set = True
307
+
308
+ if not _adapters_has_been_set:
309
+ raise ValueError(
310
+ "Did not succeeded in setting the adapter. Please make sure you are using a model that supports adapters."
311
+ )
312
+
313
+ def disable_adapters(self) -> None:
314
+ r"""
315
+ If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
316
+ official documentation: https://huggingface.co/docs/peft
317
+
318
+ Disable all adapters that are attached to the model. This leads to inferring with the base model only.
319
+ """
320
+ check_peft_version(min_version=MIN_PEFT_VERSION)
321
+
322
+ if not self._hf_peft_config_loaded:
323
+ raise ValueError("No adapter loaded. Please load an adapter first.")
324
+
325
+ from peft.tuners.tuners_utils import BaseTunerLayer
326
+ from peft.utils import ModulesToSaveWrapper
327
+
328
+ for _, module in self.named_modules():
329
+ if isinstance(module, (BaseTunerLayer, ModulesToSaveWrapper)):
330
+ # Recent versions of PEFT need to call `enable_adapters` instead
331
+ if hasattr(module, "enable_adapters"):
332
+ module.enable_adapters(enabled=False)
333
+ else:
334
+ module.disable_adapters = True
335
+
336
+ def enable_adapters(self) -> None:
337
+ """
338
+ If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
339
+ official documentation: https://huggingface.co/docs/peft
340
+
341
+ Enable adapters that are attached to the model. The model will use the adapters returned by `self.active_adapters()`.
342
+ """
343
+ check_peft_version(min_version=MIN_PEFT_VERSION)
344
+
345
+ if not self._hf_peft_config_loaded:
346
+ raise ValueError("No adapter loaded. Please load an adapter first.")
347
+
348
+ from peft.tuners.tuners_utils import BaseTunerLayer
349
+
350
+ for _, module in self.named_modules():
351
+ if isinstance(module, BaseTunerLayer):
352
+ # Recent versions of PEFT need to call `enable_adapters` instead
353
+ if hasattr(module, "enable_adapters"):
354
+ module.enable_adapters(enabled=True)
355
+ else:
356
+ module.disable_adapters = False
357
+
358
+ def active_adapters(self) -> List[str]:
359
+ """
360
+ If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
361
+ official documentation: https://huggingface.co/docs/peft
362
+
363
+ Gets the current active adapters of the model. In case of multi-adapter inference (combining multiple adapters
364
+ for inference) returns the list of all active adapters so that users can deal with them accordingly.
365
+
366
+ For previous PEFT versions (that do not support multi-adapter inference), `module.active_adapter` will return
367
+ a single string.
368
+ """
369
+ check_peft_version(min_version=MIN_PEFT_VERSION)
370
+
371
+ if not is_peft_available():
372
+ raise ImportError("PEFT is not available. Please install PEFT to use this function: `pip install peft`.")
373
+
374
+ if not self._hf_peft_config_loaded:
375
+ raise ValueError("No adapter loaded. Please load an adapter first.")
376
+
377
+ from peft.tuners.tuners_utils import BaseTunerLayer
378
+
379
+ for _, module in self.named_modules():
380
+ if isinstance(module, BaseTunerLayer):
381
+ active_adapters = module.active_adapter
382
+ break
383
+
384
+ # For previous PEFT versions
385
+ if isinstance(active_adapters, str):
386
+ active_adapters = [active_adapters]
387
+
388
+ return active_adapters
389
+
390
+ def active_adapter(self) -> str:
391
+ warnings.warn(
392
+ "The `active_adapter` method is deprecated and will be removed in a future version.", FutureWarning
393
+ )
394
+
395
+ return self.active_adapters()[0]
396
+
397
+ def get_adapter_state_dict(self, adapter_name: Optional[str] = None) -> dict:
398
+ """
399
+ If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
400
+ official documentation: https://huggingface.co/docs/peft
401
+
402
+ Gets the adapter state dict, which should only contain the weight tensors of the specified `adapter_name` adapter.
403
+ If no adapter_name is passed, the active adapter is used.
404
+
405
+ Args:
406
+ adapter_name (`str`, *optional*):
407
+ The name of the adapter to get the state dict from. If no name is passed, the active adapter is used.
408
+ """
409
+ check_peft_version(min_version=MIN_PEFT_VERSION)
410
+
411
+ if not self._hf_peft_config_loaded:
412
+ raise ValueError("No adapter loaded. Please load an adapter first.")
413
+
414
+ from peft import get_peft_model_state_dict
415
+
416
+ if adapter_name is None:
417
+ adapter_name = self.active_adapter()
418
+
419
+ adapter_state_dict = get_peft_model_state_dict(self, adapter_name=adapter_name)
420
+ return adapter_state_dict
421
+
422
+ def _dispatch_accelerate_model(
423
+ self,
424
+ device_map: str,
425
+ max_memory: Optional[int] = None,
426
+ offload_folder: Optional[str] = None,
427
+ offload_index: Optional[int] = None,
428
+ ) -> None:
429
+ """
430
+ Optionally re-dispatches the model and attaches new hooks to it in case the model has been loaded with
431
+ accelerate (i.e. with `device_map=xxx`)
432
+
433
+ Args:
434
+ device_map (`str` or `Dict[str, Union[int, str, torch.device]]` or `int` or `torch.device`, *optional*):
435
+ A map that specifies where each submodule should go. It doesn't need to be refined to each
436
+ parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the
437
+ same device. If we only pass the device (*e.g.*, `"cpu"`, `"cuda:1"`, `"mps"`, or a GPU ordinal rank
438
+ like `1`) on which the model will be allocated, the device map will map the entire model to this
439
+ device. Passing `device_map = 0` means put the whole model on GPU 0.
440
+
441
+ To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For
442
+ more information about each option see [designing a device
443
+ map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
444
+ max_memory (`Dict`, *optional*):
445
+ A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available for each
446
+ GPU and the available CPU RAM if unset.
447
+ offload_folder (`str` or `os.PathLike`, *optional*):
448
+ If the `device_map` contains any value `"disk"`, the folder where we will offload weights.
449
+ offload_index (`int`, *optional*):
450
+ The offload_index argument to be passed to `accelerate.dispatch_model` method.
451
+ """
452
+ dispatch_model_kwargs = {}
453
+ # Safety checker for previous `accelerate` versions
454
+ # `offload_index` was introduced in https://github.com/huggingface/accelerate/pull/873/
455
+ if "offload_index" in inspect.signature(dispatch_model).parameters:
456
+ dispatch_model_kwargs["offload_index"] = offload_index
457
+
458
+ no_split_module_classes = self._no_split_modules
459
+
460
+ if device_map != "sequential":
461
+ max_memory = get_balanced_memory(
462
+ self,
463
+ max_memory=max_memory,
464
+ no_split_module_classes=no_split_module_classes,
465
+ low_zero=(device_map == "balanced_low_0"),
466
+ )
467
+ if isinstance(device_map, str):
468
+ device_map = infer_auto_device_map(
469
+ self, max_memory=max_memory, no_split_module_classes=no_split_module_classes
470
+ )
471
+ dispatch_model(
472
+ self,
473
+ device_map=device_map,
474
+ offload_dir=offload_folder,
475
+ **dispatch_model_kwargs,
476
+ )
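A minimal usage sketch of `PeftAdapterMixin`; the base model and the adapter repository id below are placeholders, not values required by the API:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Inject adapter weights from the Hub (or a local path) and activate them.
model.load_adapter("some-user/my-lora-adapter", adapter_name="my_adapter")
model.set_adapter("my_adapter")
print(model.active_adapters())  # ["my_adapter"]

# Temporarily run the base model only, then re-enable the adapter.
model.disable_adapters()
model.enable_adapters()

# Extract only the adapter weights, e.g. to save them separately.
adapter_state_dict = model.get_adapter_state_dict("my_adapter")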
llmeval-env/lib/python3.10/site-packages/transformers/integrations/quanto.py ADDED
@@ -0,0 +1,94 @@
1
+ # Copyright 2024 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ from ..utils import is_torch_available
16
+
17
+
18
+ if is_torch_available():
19
+ import torch
20
+
21
+
22
+ def replace_with_quanto_layers(
23
+ model,
24
+ quantization_config=None,
25
+ modules_to_not_convert=None,
26
+ current_key_name=None,
27
+ has_been_replaced=False,
28
+ ):
29
+ """
30
+ Public method that recursively replaces the Linear layers of the given model with Quanto quantized layers.
31
+ Returns the converted model and a boolean that indicates if the conversion has been successfull or not.
32
+
33
+ Args:
34
+ model (`torch.nn.Module`):
35
+ The model to convert, can be any `torch.nn.Module` instance.
36
+ quantization_config (`AqlmConfig`, defaults to `None`):
37
+ The quantization config object that contains the quantization parameters.
38
+ modules_to_not_convert (`list`, *optional*, defaults to `None`):
39
+ A list of modules to not convert. If a module name is in the list (e.g. `lm_head`), it will not be
40
+ converted.
41
+ current_key_name (`list`, *optional*, defaults to `None`):
42
+ A list that contains the current key name. This is used for recursion and should not be passed by the user.
43
+ has_been_replaced (`bool`, *optional*, defaults to `None`):
44
+ A boolean that indicates if the conversion has been successful or not. This is used for recursion and
45
+ should not be passed by the user.
46
+ """
47
+ from accelerate import init_empty_weights
48
+ from quanto import QLayerNorm, QLinear, qfloat8, qint2, qint4, qint8
49
+
50
+ w_mapping = {"float8": qfloat8, "int8": qint8, "int4": qint4, "int2": qint2}
51
+ a_mapping = {None: None, "float8": qfloat8, "int8": qint8}
52
+
53
+ if modules_to_not_convert is None:
54
+ modules_to_not_convert = []
55
+
56
+ for name, module in model.named_children():
57
+ if current_key_name is None:
58
+ current_key_name = []
59
+ current_key_name.append(name)
60
+
61
+ if not any(key in ".".join(current_key_name) for key in modules_to_not_convert):
62
+ with init_empty_weights():
63
+ if isinstance(module, torch.nn.Linear):
64
+ model._modules[name] = QLinear(
65
+ in_features=module.in_features,
66
+ out_features=module.out_features,
67
+ bias=module.bias is not None,
68
+ dtype=module.weight.dtype,
69
+ weights=w_mapping[quantization_config.weights],
70
+ activations=a_mapping[quantization_config.activations],
71
+ )
72
+ model._modules[name].requires_grad_(False)
73
+ has_been_replaced = True
74
+ elif isinstance(module, torch.nn.LayerNorm):
75
+ if quantization_config.activations is not None:
76
+ model._modules[name] = QLayerNorm(
77
+ module.normalized_shape,
78
+ module.eps,
79
+ module.elementwise_affine,
80
+ module.bias is not None,
81
+ activations=a_mapping[quantization_config.activations],
82
+ )
83
+ has_been_replaced = True
84
+ if len(list(module.children())) > 0:
85
+ _, has_been_replaced = replace_with_quanto_layers(
86
+ module,
87
+ quantization_config=quantization_config,
88
+ modules_to_not_convert=modules_to_not_convert,
89
+ current_key_name=current_key_name,
90
+ has_been_replaced=has_been_replaced,
91
+ )
92
+ # Remove the last key for recursion
93
+ current_key_name.pop(-1)
94
+ return model, has_been_replaced
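A hedged sketch of exercising `replace_with_quanto_layers` directly; in normal use it is invoked by the quantizer when a `QuantoConfig` is passed to `from_pretrained`, and the model id below is a placeholder:

from transformers import AutoModelForCausalLM, QuantoConfig
from transformers.integrations.quanto import replace_with_quanto_layers

config = QuantoConfig(weights="int8")  # activations left unquantized; requires the `quanto` package
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model, replaced = replace_with_quanto_layers(
    model,
    quantization_config=config,
    modules_to_not_convert=["lm_head"],  # keep the output head in full precision
)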
llmeval-env/lib/python3.10/site-packages/transformers/integrations/tpu.py ADDED
@@ -0,0 +1,36 @@
1
+ # Copyright 2024 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ from torch.utils.data import DataLoader
16
+
17
+ from ..utils import is_torch_xla_available
18
+
19
+
20
+ def tpu_spmd_dataloader(dataloader: DataLoader):
21
+ if is_torch_xla_available():
22
+ import torch_xla.distributed.parallel_loader as pl
23
+
24
+ assert isinstance(
25
+ dataloader, pl.MpDeviceLoader
26
+ ), "The dataloader must be a `torch_xla.distributed.parallel_loader.MpDeviceLoader`."
27
+
28
+ # This is to support PyTorch/XLA FSDP via SPMD.
29
+ # Here we shard the input data's 0th dim across the fsdp axis.
30
+ import torch_xla.distributed.spmd as xs
31
+
32
+ sharding_spec = xs.ShardingSpec(xs.get_global_mesh(), ("fsdp", None))
33
+ dataloader._parallel_loader_kwargs["input_sharding"] = sharding_spec
34
+ return dataloader
35
+ else:
36
+ return dataloader
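An illustrative sketch of the expected call pattern for `tpu_spmd_dataloader`; it only has an effect on an XLA/TPU runtime with SPMD enabled, and the dataset below is a stand-in:

import torch
from torch.utils.data import DataLoader, TensorDataset
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
from transformers.integrations.tpu import tpu_spmd_dataloader

loader = DataLoader(TensorDataset(torch.randn(64, 16)), batch_size=8)
# The helper asserts that the loader is already an MpDeviceLoader.
device_loader = pl.MpDeviceLoader(loader, xm.xla_device())
device_loader = tpu_spmd_dataloader(device_loader)  # shards dim 0 across the "fsdp" mesh axis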
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__init__.py ADDED
@@ -0,0 +1,1108 @@
1
+ # coding=utf-8
2
+ # Copyright 2018 The HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ import json
16
+ import os
17
+ import warnings
18
+ from pathlib import Path
19
+ from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
20
+
21
+ from huggingface_hub import model_info
22
+
23
+ from ..configuration_utils import PretrainedConfig
24
+ from ..dynamic_module_utils import get_class_from_dynamic_module
25
+ from ..feature_extraction_utils import PreTrainedFeatureExtractor
26
+ from ..image_processing_utils import BaseImageProcessor
27
+ from ..models.auto.configuration_auto import AutoConfig
28
+ from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor
29
+ from ..models.auto.image_processing_auto import IMAGE_PROCESSOR_MAPPING, AutoImageProcessor
30
+ from ..models.auto.modeling_auto import AutoModelForDepthEstimation, AutoModelForImageToImage
31
+ from ..models.auto.tokenization_auto import TOKENIZER_MAPPING, AutoTokenizer
32
+ from ..tokenization_utils import PreTrainedTokenizer
33
+ from ..utils import (
34
+ CONFIG_NAME,
35
+ HUGGINGFACE_CO_RESOLVE_ENDPOINT,
36
+ cached_file,
37
+ extract_commit_hash,
38
+ find_adapter_config_file,
39
+ is_kenlm_available,
40
+ is_offline_mode,
41
+ is_peft_available,
42
+ is_pyctcdecode_available,
43
+ is_tf_available,
44
+ is_torch_available,
45
+ logging,
46
+ )
47
+ from .audio_classification import AudioClassificationPipeline
48
+ from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline
49
+ from .base import (
50
+ ArgumentHandler,
51
+ CsvPipelineDataFormat,
52
+ JsonPipelineDataFormat,
53
+ PipedPipelineDataFormat,
54
+ Pipeline,
55
+ PipelineDataFormat,
56
+ PipelineException,
57
+ PipelineRegistry,
58
+ get_default_model_and_revision,
59
+ infer_framework_load_model,
60
+ )
61
+ from .conversational import Conversation, ConversationalPipeline
62
+ from .depth_estimation import DepthEstimationPipeline
63
+ from .document_question_answering import DocumentQuestionAnsweringPipeline
64
+ from .feature_extraction import FeatureExtractionPipeline
65
+ from .fill_mask import FillMaskPipeline
66
+ from .image_classification import ImageClassificationPipeline
67
+ from .image_feature_extraction import ImageFeatureExtractionPipeline
68
+ from .image_segmentation import ImageSegmentationPipeline
69
+ from .image_to_image import ImageToImagePipeline
70
+ from .image_to_text import ImageToTextPipeline
71
+ from .mask_generation import MaskGenerationPipeline
72
+ from .object_detection import ObjectDetectionPipeline
73
+ from .question_answering import QuestionAnsweringArgumentHandler, QuestionAnsweringPipeline
74
+ from .table_question_answering import TableQuestionAnsweringArgumentHandler, TableQuestionAnsweringPipeline
75
+ from .text2text_generation import SummarizationPipeline, Text2TextGenerationPipeline, TranslationPipeline
76
+ from .text_classification import TextClassificationPipeline
77
+ from .text_generation import TextGenerationPipeline
78
+ from .text_to_audio import TextToAudioPipeline
79
+ from .token_classification import (
80
+ AggregationStrategy,
81
+ NerPipeline,
82
+ TokenClassificationArgumentHandler,
83
+ TokenClassificationPipeline,
84
+ )
85
+ from .video_classification import VideoClassificationPipeline
86
+ from .visual_question_answering import VisualQuestionAnsweringPipeline
87
+ from .zero_shot_audio_classification import ZeroShotAudioClassificationPipeline
88
+ from .zero_shot_classification import ZeroShotClassificationArgumentHandler, ZeroShotClassificationPipeline
89
+ from .zero_shot_image_classification import ZeroShotImageClassificationPipeline
90
+ from .zero_shot_object_detection import ZeroShotObjectDetectionPipeline
91
+
92
+
93
+ if is_tf_available():
94
+ import tensorflow as tf
95
+
96
+ from ..models.auto.modeling_tf_auto import (
97
+ TFAutoModel,
98
+ TFAutoModelForCausalLM,
99
+ TFAutoModelForImageClassification,
100
+ TFAutoModelForMaskedLM,
101
+ TFAutoModelForQuestionAnswering,
102
+ TFAutoModelForSeq2SeqLM,
103
+ TFAutoModelForSequenceClassification,
104
+ TFAutoModelForTableQuestionAnswering,
105
+ TFAutoModelForTokenClassification,
106
+ TFAutoModelForVision2Seq,
107
+ TFAutoModelForZeroShotImageClassification,
108
+ )
109
+
110
+ if is_torch_available():
111
+ import torch
112
+
113
+ from ..models.auto.modeling_auto import (
114
+ AutoModel,
115
+ AutoModelForAudioClassification,
116
+ AutoModelForCausalLM,
117
+ AutoModelForCTC,
118
+ AutoModelForDocumentQuestionAnswering,
119
+ AutoModelForImageClassification,
120
+ AutoModelForImageSegmentation,
121
+ AutoModelForMaskedLM,
122
+ AutoModelForMaskGeneration,
123
+ AutoModelForObjectDetection,
124
+ AutoModelForQuestionAnswering,
125
+ AutoModelForSemanticSegmentation,
126
+ AutoModelForSeq2SeqLM,
127
+ AutoModelForSequenceClassification,
128
+ AutoModelForSpeechSeq2Seq,
129
+ AutoModelForTableQuestionAnswering,
130
+ AutoModelForTextToSpectrogram,
131
+ AutoModelForTextToWaveform,
132
+ AutoModelForTokenClassification,
133
+ AutoModelForVideoClassification,
134
+ AutoModelForVision2Seq,
135
+ AutoModelForVisualQuestionAnswering,
136
+ AutoModelForZeroShotImageClassification,
137
+ AutoModelForZeroShotObjectDetection,
138
+ )
139
+
140
+
141
+ if TYPE_CHECKING:
142
+ from ..modeling_tf_utils import TFPreTrainedModel
143
+ from ..modeling_utils import PreTrainedModel
144
+ from ..tokenization_utils_fast import PreTrainedTokenizerFast
145
+
146
+
147
+ logger = logging.get_logger(__name__)
148
+
149
+
150
+ # Register all the supported tasks here
151
+ TASK_ALIASES = {
152
+ "sentiment-analysis": "text-classification",
153
+ "ner": "token-classification",
154
+ "vqa": "visual-question-answering",
155
+ "text-to-speech": "text-to-audio",
156
+ }
157
+ SUPPORTED_TASKS = {
158
+ "audio-classification": {
159
+ "impl": AudioClassificationPipeline,
160
+ "tf": (),
161
+ "pt": (AutoModelForAudioClassification,) if is_torch_available() else (),
162
+ "default": {"model": {"pt": ("superb/wav2vec2-base-superb-ks", "372e048")}},
163
+ "type": "audio",
164
+ },
165
+ "automatic-speech-recognition": {
166
+ "impl": AutomaticSpeechRecognitionPipeline,
167
+ "tf": (),
168
+ "pt": (AutoModelForCTC, AutoModelForSpeechSeq2Seq) if is_torch_available() else (),
169
+ "default": {"model": {"pt": ("facebook/wav2vec2-base-960h", "55bb623")}},
170
+ "type": "multimodal",
171
+ },
172
+ "text-to-audio": {
173
+ "impl": TextToAudioPipeline,
174
+ "tf": (),
175
+ "pt": (AutoModelForTextToWaveform, AutoModelForTextToSpectrogram) if is_torch_available() else (),
176
+ "default": {"model": {"pt": ("suno/bark-small", "645cfba")}},
177
+ "type": "text",
178
+ },
179
+ "feature-extraction": {
180
+ "impl": FeatureExtractionPipeline,
181
+ "tf": (TFAutoModel,) if is_tf_available() else (),
182
+ "pt": (AutoModel,) if is_torch_available() else (),
183
+ "default": {
184
+ "model": {
185
+ "pt": ("distilbert/distilbert-base-cased", "935ac13"),
186
+ "tf": ("distilbert/distilbert-base-cased", "935ac13"),
187
+ }
188
+ },
189
+ "type": "multimodal",
190
+ },
191
+ "text-classification": {
192
+ "impl": TextClassificationPipeline,
193
+ "tf": (TFAutoModelForSequenceClassification,) if is_tf_available() else (),
194
+ "pt": (AutoModelForSequenceClassification,) if is_torch_available() else (),
195
+ "default": {
196
+ "model": {
197
+ "pt": ("distilbert/distilbert-base-uncased-finetuned-sst-2-english", "af0f99b"),
198
+ "tf": ("distilbert/distilbert-base-uncased-finetuned-sst-2-english", "af0f99b"),
199
+ },
200
+ },
201
+ "type": "text",
202
+ },
203
+ "token-classification": {
204
+ "impl": TokenClassificationPipeline,
205
+ "tf": (TFAutoModelForTokenClassification,) if is_tf_available() else (),
206
+ "pt": (AutoModelForTokenClassification,) if is_torch_available() else (),
207
+ "default": {
208
+ "model": {
209
+ "pt": ("dbmdz/bert-large-cased-finetuned-conll03-english", "f2482bf"),
210
+ "tf": ("dbmdz/bert-large-cased-finetuned-conll03-english", "f2482bf"),
211
+ },
212
+ },
213
+ "type": "text",
214
+ },
215
+ "question-answering": {
216
+ "impl": QuestionAnsweringPipeline,
217
+ "tf": (TFAutoModelForQuestionAnswering,) if is_tf_available() else (),
218
+ "pt": (AutoModelForQuestionAnswering,) if is_torch_available() else (),
219
+ "default": {
220
+ "model": {
221
+ "pt": ("distilbert/distilbert-base-cased-distilled-squad", "626af31"),
222
+ "tf": ("distilbert/distilbert-base-cased-distilled-squad", "626af31"),
223
+ },
224
+ },
225
+ "type": "text",
226
+ },
227
+ "table-question-answering": {
228
+ "impl": TableQuestionAnsweringPipeline,
229
+ "pt": (AutoModelForTableQuestionAnswering,) if is_torch_available() else (),
230
+ "tf": (TFAutoModelForTableQuestionAnswering,) if is_tf_available() else (),
231
+ "default": {
232
+ "model": {
233
+ "pt": ("google/tapas-base-finetuned-wtq", "69ceee2"),
234
+ "tf": ("google/tapas-base-finetuned-wtq", "69ceee2"),
235
+ },
236
+ },
237
+ "type": "text",
238
+ },
239
+ "visual-question-answering": {
240
+ "impl": VisualQuestionAnsweringPipeline,
241
+ "pt": (AutoModelForVisualQuestionAnswering,) if is_torch_available() else (),
242
+ "tf": (),
243
+ "default": {
244
+ "model": {"pt": ("dandelin/vilt-b32-finetuned-vqa", "4355f59")},
245
+ },
246
+ "type": "multimodal",
247
+ },
248
+ "document-question-answering": {
249
+ "impl": DocumentQuestionAnsweringPipeline,
250
+ "pt": (AutoModelForDocumentQuestionAnswering,) if is_torch_available() else (),
251
+ "tf": (),
252
+ "default": {
253
+ "model": {"pt": ("impira/layoutlm-document-qa", "52e01b3")},
254
+ },
255
+ "type": "multimodal",
256
+ },
257
+ "fill-mask": {
258
+ "impl": FillMaskPipeline,
259
+ "tf": (TFAutoModelForMaskedLM,) if is_tf_available() else (),
260
+ "pt": (AutoModelForMaskedLM,) if is_torch_available() else (),
261
+ "default": {
262
+ "model": {
263
+ "pt": ("distilbert/distilroberta-base", "ec58a5b"),
264
+ "tf": ("distilbert/distilroberta-base", "ec58a5b"),
265
+ }
266
+ },
267
+ "type": "text",
268
+ },
269
+ "summarization": {
270
+ "impl": SummarizationPipeline,
271
+ "tf": (TFAutoModelForSeq2SeqLM,) if is_tf_available() else (),
272
+ "pt": (AutoModelForSeq2SeqLM,) if is_torch_available() else (),
273
+ "default": {
274
+ "model": {"pt": ("sshleifer/distilbart-cnn-12-6", "a4f8f3e"), "tf": ("google-t5/t5-small", "d769bba")}
275
+ },
276
+ "type": "text",
277
+ },
278
+ # This task is a special case as it's parametrized by SRC, TGT languages.
279
+ "translation": {
280
+ "impl": TranslationPipeline,
281
+ "tf": (TFAutoModelForSeq2SeqLM,) if is_tf_available() else (),
282
+ "pt": (AutoModelForSeq2SeqLM,) if is_torch_available() else (),
283
+ "default": {
284
+ ("en", "fr"): {"model": {"pt": ("google-t5/t5-base", "686f1db"), "tf": ("google-t5/t5-base", "686f1db")}},
285
+ ("en", "de"): {"model": {"pt": ("google-t5/t5-base", "686f1db"), "tf": ("google-t5/t5-base", "686f1db")}},
286
+ ("en", "ro"): {"model": {"pt": ("google-t5/t5-base", "686f1db"), "tf": ("google-t5/t5-base", "686f1db")}},
287
+ },
288
+ "type": "text",
289
+ },
290
+ "text2text-generation": {
291
+ "impl": Text2TextGenerationPipeline,
292
+ "tf": (TFAutoModelForSeq2SeqLM,) if is_tf_available() else (),
293
+ "pt": (AutoModelForSeq2SeqLM,) if is_torch_available() else (),
294
+ "default": {"model": {"pt": ("google-t5/t5-base", "686f1db"), "tf": ("google-t5/t5-base", "686f1db")}},
295
+ "type": "text",
296
+ },
297
+ "text-generation": {
298
+ "impl": TextGenerationPipeline,
299
+ "tf": (TFAutoModelForCausalLM,) if is_tf_available() else (),
300
+ "pt": (AutoModelForCausalLM,) if is_torch_available() else (),
301
+ "default": {"model": {"pt": ("openai-community/gpt2", "6c0e608"), "tf": ("openai-community/gpt2", "6c0e608")}},
302
+ "type": "text",
303
+ },
304
+ "zero-shot-classification": {
305
+ "impl": ZeroShotClassificationPipeline,
306
+ "tf": (TFAutoModelForSequenceClassification,) if is_tf_available() else (),
307
+ "pt": (AutoModelForSequenceClassification,) if is_torch_available() else (),
308
+ "default": {
309
+ "model": {
310
+ "pt": ("facebook/bart-large-mnli", "c626438"),
311
+ "tf": ("FacebookAI/roberta-large-mnli", "130fb28"),
312
+ },
313
+ "config": {
314
+ "pt": ("facebook/bart-large-mnli", "c626438"),
315
+ "tf": ("FacebookAI/roberta-large-mnli", "130fb28"),
316
+ },
317
+ },
318
+ "type": "text",
319
+ },
320
+ "zero-shot-image-classification": {
321
+ "impl": ZeroShotImageClassificationPipeline,
322
+ "tf": (TFAutoModelForZeroShotImageClassification,) if is_tf_available() else (),
323
+ "pt": (AutoModelForZeroShotImageClassification,) if is_torch_available() else (),
324
+ "default": {
325
+ "model": {
326
+ "pt": ("openai/clip-vit-base-patch32", "f4881ba"),
327
+ "tf": ("openai/clip-vit-base-patch32", "f4881ba"),
328
+ }
329
+ },
330
+ "type": "multimodal",
331
+ },
332
+ "zero-shot-audio-classification": {
333
+ "impl": ZeroShotAudioClassificationPipeline,
334
+ "tf": (),
335
+ "pt": (AutoModel,) if is_torch_available() else (),
336
+ "default": {
337
+ "model": {
338
+ "pt": ("laion/clap-htsat-fused", "973b6e5"),
339
+ }
340
+ },
341
+ "type": "multimodal",
342
+ },
343
+ "conversational": {
344
+ "impl": ConversationalPipeline,
345
+ "tf": (TFAutoModelForSeq2SeqLM, TFAutoModelForCausalLM) if is_tf_available() else (),
346
+ "pt": (AutoModelForSeq2SeqLM, AutoModelForCausalLM) if is_torch_available() else (),
347
+ "default": {
348
+ "model": {"pt": ("microsoft/DialoGPT-medium", "8bada3b"), "tf": ("microsoft/DialoGPT-medium", "8bada3b")}
349
+ },
350
+ "type": "text",
351
+ },
352
+ "image-classification": {
353
+ "impl": ImageClassificationPipeline,
354
+ "tf": (TFAutoModelForImageClassification,) if is_tf_available() else (),
355
+ "pt": (AutoModelForImageClassification,) if is_torch_available() else (),
356
+ "default": {
357
+ "model": {
358
+ "pt": ("google/vit-base-patch16-224", "5dca96d"),
359
+ "tf": ("google/vit-base-patch16-224", "5dca96d"),
360
+ }
361
+ },
362
+ "type": "image",
363
+ },
364
+ "image-feature-extraction": {
365
+ "impl": ImageFeatureExtractionPipeline,
366
+ "tf": (TFAutoModel,) if is_tf_available() else (),
367
+ "pt": (AutoModel,) if is_torch_available() else (),
368
+ "default": {
369
+ "model": {
370
+ "pt": ("google/vit-base-patch16-224", "3f49326"),
371
+ "tf": ("google/vit-base-patch16-224", "3f49326"),
372
+ }
373
+ },
374
+ "type": "image",
375
+ },
376
+ "image-segmentation": {
377
+ "impl": ImageSegmentationPipeline,
378
+ "tf": (),
379
+ "pt": (AutoModelForImageSegmentation, AutoModelForSemanticSegmentation) if is_torch_available() else (),
380
+ "default": {"model": {"pt": ("facebook/detr-resnet-50-panoptic", "fc15262")}},
381
+ "type": "multimodal",
382
+ },
383
+ "image-to-text": {
384
+ "impl": ImageToTextPipeline,
385
+ "tf": (TFAutoModelForVision2Seq,) if is_tf_available() else (),
386
+ "pt": (AutoModelForVision2Seq,) if is_torch_available() else (),
387
+ "default": {
388
+ "model": {
389
+ "pt": ("ydshieh/vit-gpt2-coco-en", "65636df"),
390
+ "tf": ("ydshieh/vit-gpt2-coco-en", "65636df"),
391
+ }
392
+ },
393
+ "type": "multimodal",
394
+ },
395
+ "object-detection": {
396
+ "impl": ObjectDetectionPipeline,
397
+ "tf": (),
398
+ "pt": (AutoModelForObjectDetection,) if is_torch_available() else (),
399
+ "default": {"model": {"pt": ("facebook/detr-resnet-50", "2729413")}},
400
+ "type": "multimodal",
401
+ },
402
+ "zero-shot-object-detection": {
403
+ "impl": ZeroShotObjectDetectionPipeline,
404
+ "tf": (),
405
+ "pt": (AutoModelForZeroShotObjectDetection,) if is_torch_available() else (),
406
+ "default": {"model": {"pt": ("google/owlvit-base-patch32", "17740e1")}},
407
+ "type": "multimodal",
408
+ },
409
+ "depth-estimation": {
410
+ "impl": DepthEstimationPipeline,
411
+ "tf": (),
412
+ "pt": (AutoModelForDepthEstimation,) if is_torch_available() else (),
413
+ "default": {"model": {"pt": ("Intel/dpt-large", "e93beec")}},
414
+ "type": "image",
415
+ },
416
+ "video-classification": {
417
+ "impl": VideoClassificationPipeline,
418
+ "tf": (),
419
+ "pt": (AutoModelForVideoClassification,) if is_torch_available() else (),
420
+ "default": {"model": {"pt": ("MCG-NJU/videomae-base-finetuned-kinetics", "4800870")}},
421
+ "type": "video",
422
+ },
423
+ "mask-generation": {
424
+ "impl": MaskGenerationPipeline,
425
+ "tf": (),
426
+ "pt": (AutoModelForMaskGeneration,) if is_torch_available() else (),
427
+ "default": {"model": {"pt": ("facebook/sam-vit-huge", "997b15")}},
428
+ "type": "multimodal",
429
+ },
430
+ "image-to-image": {
431
+ "impl": ImageToImagePipeline,
432
+ "tf": (),
433
+ "pt": (AutoModelForImageToImage,) if is_torch_available() else (),
434
+ "default": {"model": {"pt": ("caidas/swin2SR-classical-sr-x2-64", "4aaedcb")}},
435
+ "type": "image",
436
+ },
437
+ }
438
+
439
+ NO_FEATURE_EXTRACTOR_TASKS = set()
440
+ NO_IMAGE_PROCESSOR_TASKS = set()
441
+ NO_TOKENIZER_TASKS = set()
442
+
443
+ # Those model configs are special: they are generic over their task, meaning
444
+ # any tokenizer/feature_extractor might be used for a given model, so we cannot
445
+ # use the statically defined TOKENIZER_MAPPING and FEATURE_EXTRACTOR_MAPPING to
446
+ # see if the model defines such objects or not.
447
+ MULTI_MODEL_AUDIO_CONFIGS = {"SpeechEncoderDecoderConfig"}
448
+ MULTI_MODEL_VISION_CONFIGS = {"VisionEncoderDecoderConfig", "VisionTextDualEncoderConfig"}
449
+ for task, values in SUPPORTED_TASKS.items():
450
+ if values["type"] == "text":
451
+ NO_FEATURE_EXTRACTOR_TASKS.add(task)
452
+ NO_IMAGE_PROCESSOR_TASKS.add(task)
453
+ elif values["type"] in {"image", "video"}:
454
+ NO_TOKENIZER_TASKS.add(task)
455
+ elif values["type"] in {"audio"}:
456
+ NO_TOKENIZER_TASKS.add(task)
457
+ NO_IMAGE_PROCESSOR_TASKS.add(task)
458
+ elif values["type"] != "multimodal":
459
+ raise ValueError(f"SUPPORTED_TASK {task} contains invalid type {values['type']}")
460
+
461
+ PIPELINE_REGISTRY = PipelineRegistry(supported_tasks=SUPPORTED_TASKS, task_aliases=TASK_ALIASES)
462
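To make the task-type loop above concrete, the sketch below mirrors the rule it encodes: a task's `type` entry in `SUPPORTED_TASKS` determines which preprocessors it can ever need. The helper name `preprocessors_for` is illustrative only, not part of the module.

```python
from transformers.pipelines import SUPPORTED_TASKS

# Illustrative sketch: mirrors the classification performed by the loop above.
# A True value means "not categorically excluded", not "will always be loaded".
def preprocessors_for(task: str) -> dict:
    task_type = SUPPORTED_TASKS[task]["type"]
    return {
        "tokenizer": task_type not in {"image", "video", "audio"},
        "image_processor": task_type not in {"text", "audio"},
        "feature_extractor": task_type != "text",
    }

print(preprocessors_for("image-classification"))
# {'tokenizer': False, 'image_processor': True, 'feature_extractor': True}
```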
+
463
+
464
+ def get_supported_tasks() -> List[str]:
465
+ """
466
+ Returns a list of supported task strings.
467
+ """
468
+ return PIPELINE_REGISTRY.get_supported_tasks()
469
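A trivial usage sketch for the helper above; the exact contents of the returned list depend on the installed version.

```python
from transformers.pipelines import get_supported_tasks

tasks = get_supported_tasks()
print(len(tasks), tasks[:3])  # sorted task and alias names, e.g. starting with "audio-classification"
```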
+
470
+
471
+ def get_task(model: str, token: Optional[str] = None, **deprecated_kwargs) -> str:
472
+ use_auth_token = deprecated_kwargs.pop("use_auth_token", None)
473
+ if use_auth_token is not None:
474
+ warnings.warn(
475
+ "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
476
+ FutureWarning,
477
+ )
478
+ if token is not None:
479
+ raise ValueError("`token` and `use_auth_token` are both specified. Please set only the argument `token`.")
480
+ token = use_auth_token
481
+
482
+ if is_offline_mode():
483
+ raise RuntimeError("You cannot infer task automatically within `pipeline` when using offline mode")
484
+ try:
485
+ info = model_info(model, token=token)
486
+ except Exception as e:
487
+ raise RuntimeError(f"Instantiating a pipeline without a task set raised an error: {e}")
488
+ if not info.pipeline_tag:
489
+ raise RuntimeError(
490
+ f"The model {model} does not seem to have a correct `pipeline_tag` set to infer the task automatically"
491
+ )
492
+ if getattr(info, "library_name", "transformers") != "transformers":
493
+ raise RuntimeError(f"This model is meant to be used with {info.library_name} not with transformers")
494
+ task = info.pipeline_tag
495
+ return task
496
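Because `get_task` reads the repo's `pipeline_tag` from the Hub, it needs network access; below is a hedged usage sketch where the model id is only an example of a repo with that tag set.

```python
from transformers.pipelines import get_task

# Requires network access; raises RuntimeError in offline mode or if the tag is missing.
task = get_task("distilbert/distilbert-base-uncased-finetuned-sst-2-english")
print(task)  # expected: "text-classification", per the repo's pipeline_tag
```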
+
497
+
498
+ def check_task(task: str) -> Tuple[str, Dict, Any]:
499
+ """
500
+ Checks an incoming task string, validates that it is correct, and returns the default Pipeline and Model
501
+ classes and default models if they exist.
502
+
503
+ Args:
504
+ task (`str`):
505
+ The task defining which pipeline will be returned. Currently accepted tasks are:
506
+
507
+ - `"audio-classification"`
508
+ - `"automatic-speech-recognition"`
509
+ - `"conversational"`
510
+ - `"depth-estimation"`
511
+ - `"document-question-answering"`
512
+ - `"feature-extraction"`
513
+ - `"fill-mask"`
514
+ - `"image-classification"`
515
+ - `"image-feature-extraction"`
516
+ - `"image-segmentation"`
517
+ - `"image-to-text"`
518
+ - `"image-to-image"`
519
+ - `"object-detection"`
520
+ - `"question-answering"`
521
+ - `"summarization"`
522
+ - `"table-question-answering"`
523
+ - `"text2text-generation"`
524
+ - `"text-classification"` (alias `"sentiment-analysis"` available)
525
+ - `"text-generation"`
526
+ - `"text-to-audio"` (alias `"text-to-speech"` available)
527
+ - `"token-classification"` (alias `"ner"` available)
528
+ - `"translation"`
529
+ - `"translation_xx_to_yy"`
530
+ - `"video-classification"`
531
+ - `"visual-question-answering"` (alias `"vqa"` available)
532
+ - `"zero-shot-classification"`
533
+ - `"zero-shot-image-classification"`
534
+ - `"zero-shot-object-detection"`
535
+
536
+ Returns:
537
+ (normalized_task: `str`, task_defaults: `dict`, task_options: (`tuple`, None)) The normalized task name
538
+ (alias and options removed), the dictionary required to initialize the pipeline, and some extra task
539
+ options for parametrized tasks like "translation_XX_to_YY".
540
+
541
+
542
+ """
543
+ return PIPELINE_REGISTRY.check_task(task)
544
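A short sketch of the normalization `check_task` performs, including a parametrized translation task; the printed values follow from the defaults defined above.

```python
from transformers.pipelines import check_task

# Aliases are resolved to the canonical task name; non-parametrized tasks return None options.
normalized, defaults, options = check_task("sentiment-analysis")
print(normalized, defaults["type"], options)  # text-classification text None

# Parametrized translation tasks also return the (src, tgt) language pair.
normalized, defaults, options = check_task("translation_en_to_fr")
print(normalized, options)  # translation ('en', 'fr')
```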
+
545
+
546
+ def clean_custom_task(task_info):
547
+ import transformers
548
+
549
+ if "impl" not in task_info:
550
+ raise RuntimeError("This model introduces a custom pipeline without specifying its implementation.")
551
+ pt_class_names = task_info.get("pt", ())
552
+ if isinstance(pt_class_names, str):
553
+ pt_class_names = [pt_class_names]
554
+ task_info["pt"] = tuple(getattr(transformers, c) for c in pt_class_names)
555
+ tf_class_names = task_info.get("tf", ())
556
+ if isinstance(tf_class_names, str):
557
+ tf_class_names = [tf_class_names]
558
+ task_info["tf"] = tuple(getattr(transformers, c) for c in tf_class_names)
559
+ return task_info, None
560
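To show what `clean_custom_task` expects, here is a hedged sketch of a `custom_pipelines` entry as it might appear in a remote repo's config; `MyCustomPipeline` is a hypothetical class name, while the auto-class string is resolved against the `transformers` namespace.

```python
from transformers.pipelines import clean_custom_task

# Hypothetical custom task entry; only the auto-class names must exist in transformers.
entry = {
    "impl": "MyCustomPipeline",                  # resolved later from the remote repo's code
    "pt": "AutoModelForSequenceClassification",  # string -> tuple of transformers classes
    "tf": (),
}
cleaned, _ = clean_custom_task(dict(entry))
print(cleaned["pt"])  # (<class 'transformers...AutoModelForSequenceClassification'>,)
```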
+
561
+
562
+ def pipeline(
563
+ task: str = None,
564
+ model: Optional[Union[str, "PreTrainedModel", "TFPreTrainedModel"]] = None,
565
+ config: Optional[Union[str, PretrainedConfig]] = None,
566
+ tokenizer: Optional[Union[str, PreTrainedTokenizer, "PreTrainedTokenizerFast"]] = None,
567
+ feature_extractor: Optional[Union[str, PreTrainedFeatureExtractor]] = None,
568
+ image_processor: Optional[Union[str, BaseImageProcessor]] = None,
569
+ framework: Optional[str] = None,
570
+ revision: Optional[str] = None,
571
+ use_fast: bool = True,
572
+ token: Optional[Union[str, bool]] = None,
573
+ device: Optional[Union[int, str, "torch.device"]] = None,
574
+ device_map=None,
575
+ torch_dtype=None,
576
+ trust_remote_code: Optional[bool] = None,
577
+ model_kwargs: Dict[str, Any] = None,
578
+ pipeline_class: Optional[Any] = None,
579
+ **kwargs,
580
+ ) -> Pipeline:
581
+ """
582
+ Utility factory method to build a [`Pipeline`].
583
+
584
+ Pipelines are made of:
585
+
586
+ - A [tokenizer](tokenizer) in charge of mapping raw textual input to tokens.
587
+ - A [model](model) to make predictions from the inputs.
588
+ - Some (optional) post-processing for enhancing the model's output.
589
+
590
+ Args:
591
+ task (`str`):
592
+ The task defining which pipeline will be returned. Currently accepted tasks are:
593
+
594
+ - `"audio-classification"`: will return a [`AudioClassificationPipeline`].
595
+ - `"automatic-speech-recognition"`: will return a [`AutomaticSpeechRecognitionPipeline`].
596
+ - `"conversational"`: will return a [`ConversationalPipeline`].
597
+ - `"depth-estimation"`: will return a [`DepthEstimationPipeline`].
598
+ - `"document-question-answering"`: will return a [`DocumentQuestionAnsweringPipeline`].
599
+ - `"feature-extraction"`: will return a [`FeatureExtractionPipeline`].
600
+ - `"fill-mask"`: will return a [`FillMaskPipeline`]:.
601
+ - `"image-classification"`: will return a [`ImageClassificationPipeline`].
602
+ - `"image-feature-extraction"`: will return an [`ImageFeatureExtractionPipeline`].
603
+ - `"image-segmentation"`: will return a [`ImageSegmentationPipeline`].
604
+ - `"image-to-image"`: will return a [`ImageToImagePipeline`].
605
+ - `"image-to-text"`: will return a [`ImageToTextPipeline`].
606
+ - `"mask-generation"`: will return a [`MaskGenerationPipeline`].
607
+ - `"object-detection"`: will return a [`ObjectDetectionPipeline`].
608
+ - `"question-answering"`: will return a [`QuestionAnsweringPipeline`].
609
+ - `"summarization"`: will return a [`SummarizationPipeline`].
610
+ - `"table-question-answering"`: will return a [`TableQuestionAnsweringPipeline`].
611
+ - `"text2text-generation"`: will return a [`Text2TextGenerationPipeline`].
612
+ - `"text-classification"` (alias `"sentiment-analysis"` available): will return a
613
+ [`TextClassificationPipeline`].
614
+ - `"text-generation"`: will return a [`TextGenerationPipeline`]:.
615
+ - `"text-to-audio"` (alias `"text-to-speech"` available): will return a [`TextToAudioPipeline`]:.
616
+ - `"token-classification"` (alias `"ner"` available): will return a [`TokenClassificationPipeline`].
617
+ - `"translation"`: will return a [`TranslationPipeline`].
618
+ - `"translation_xx_to_yy"`: will return a [`TranslationPipeline`].
619
+ - `"video-classification"`: will return a [`VideoClassificationPipeline`].
620
+ - `"visual-question-answering"`: will return a [`VisualQuestionAnsweringPipeline`].
621
+ - `"zero-shot-classification"`: will return a [`ZeroShotClassificationPipeline`].
622
+ - `"zero-shot-image-classification"`: will return a [`ZeroShotImageClassificationPipeline`].
623
+ - `"zero-shot-audio-classification"`: will return a [`ZeroShotAudioClassificationPipeline`].
624
+ - `"zero-shot-object-detection"`: will return a [`ZeroShotObjectDetectionPipeline`].
625
+
626
+ model (`str` or [`PreTrainedModel`] or [`TFPreTrainedModel`], *optional*):
627
+ The model that will be used by the pipeline to make predictions. This can be a model identifier or an
628
+ actual instance of a pretrained model inheriting from [`PreTrainedModel`] (for PyTorch) or
629
+ [`TFPreTrainedModel`] (for TensorFlow).
630
+
631
+ If not provided, the default for the `task` will be loaded.
632
+ config (`str` or [`PretrainedConfig`], *optional*):
633
+ The configuration that will be used by the pipeline to instantiate the model. This can be a model
634
+ identifier or an actual pretrained model configuration inheriting from [`PretrainedConfig`].
635
+
636
+ If not provided, the default configuration file for the requested model will be used. That means that if
637
+ `model` is given, its default configuration will be used. However, if `model` is not supplied, this
638
+ `task`'s default model's config is used instead.
639
+ tokenizer (`str` or [`PreTrainedTokenizer`], *optional*):
640
+ The tokenizer that will be used by the pipeline to encode data for the model. This can be a model
641
+ identifier or an actual pretrained tokenizer inheriting from [`PreTrainedTokenizer`].
642
+
643
+ If not provided, the default tokenizer for the given `model` will be loaded (if it is a string). If `model`
644
+ is not specified or not a string, then the default tokenizer for `config` is loaded (if it is a string).
645
+ However, if `config` is also not given or not a string, then the default tokenizer for the given `task`
646
+ will be loaded.
647
+ feature_extractor (`str` or [`PreTrainedFeatureExtractor`], *optional*):
648
+ The feature extractor that will be used by the pipeline to encode data for the model. This can be a model
649
+ identifier or an actual pretrained feature extractor inheriting from [`PreTrainedFeatureExtractor`].
650
+
651
+ Feature extractors are used for non-NLP models, such as Speech or Vision models as well as multi-modal
652
+ models. Multi-modal models will also require a tokenizer to be passed.
653
+
654
+ If not provided, the default feature extractor for the given `model` will be loaded (if it is a string). If
655
+ `model` is not specified or not a string, then the default feature extractor for `config` is loaded (if it
656
+ is a string). However, if `config` is also not given or not a string, then the default feature extractor
657
+ for the given `task` will be loaded.
658
+ framework (`str`, *optional*):
659
+ The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
660
+ installed.
661
+
662
+ If no framework is specified, will default to the one currently installed. If no framework is specified and
663
+ both frameworks are installed, will default to the framework of the `model`, or to PyTorch if no model is
664
+ provided.
665
+ revision (`str`, *optional*, defaults to `"main"`):
666
+ When passing a task name or a string model identifier: The specific model version to use. It can be a
667
+ branch name, a tag name, or a commit id, since we use a git-based system for storing models and other
668
+ artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
669
+ use_fast (`bool`, *optional*, defaults to `True`):
670
+ Whether or not to use a Fast tokenizer if possible (a [`PreTrainedTokenizerFast`]).
671
+ token (`str` or `bool`, *optional*):
672
+ The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
673
+ when running `huggingface-cli login` (stored in `~/.huggingface`).
674
+ device (`int` or `str` or `torch.device`):
675
+ Defines the device (*e.g.*, `"cpu"`, `"cuda:1"`, `"mps"`, or a GPU ordinal rank like `1`) on which this
676
+ pipeline will be allocated.
677
+ device_map (`str` or `Dict[str, Union[int, str, torch.device]`, *optional*):
678
+ Sent directly as `model_kwargs` (just a simpler shortcut). When the `accelerate` library is present, set
679
+ `device_map="auto"` to compute the most optimized `device_map` automatically (see
680
+ [here](https://huggingface.co/docs/accelerate/main/en/package_reference/big_modeling#accelerate.cpu_offload)
681
+ for more information).
682
+
683
+ <Tip warning={true}>
684
+
685
+ Do not use `device_map` AND `device` at the same time as they will conflict
686
+
687
+ </Tip>
688
+
689
+ torch_dtype (`str` or `torch.dtype`, *optional*):
690
+ Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
691
+ (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
692
+ trust_remote_code (`bool`, *optional*, defaults to `False`):
693
+ Whether or not to allow for custom code defined on the Hub in their own modeling, configuration,
694
+ tokenization or even pipeline files. This option should only be set to `True` for repositories you trust
695
+ and in which you have read the code, as it will execute code present on the Hub on your local machine.
696
+ model_kwargs (`Dict[str, Any]`, *optional*):
697
+ Additional dictionary of keyword arguments passed along to the model's `from_pretrained(...,
698
+ **model_kwargs)` function.
699
+ kwargs (`Dict[str, Any]`, *optional*):
700
+ Additional keyword arguments passed along to the specific pipeline init (see the documentation for the
701
+ corresponding pipeline class for possible values).
702
+
703
+ Returns:
704
+ [`Pipeline`]: A suitable pipeline for the task.
705
+
706
+ Examples:
707
+
708
+ ```python
709
+ >>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
710
+
711
+ >>> # Sentiment analysis pipeline
712
+ >>> analyzer = pipeline("sentiment-analysis")
713
+
714
+ >>> # Question answering pipeline, specifying the checkpoint identifier
715
+ >>> oracle = pipeline(
716
+ ... "question-answering", model="distilbert/distilbert-base-cased-distilled-squad", tokenizer="google-bert/bert-base-cased"
717
+ ... )
718
+
719
+ >>> # Named entity recognition pipeline, passing in a specific model and tokenizer
720
+ >>> model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
721
+ >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
722
+ >>> recognizer = pipeline("ner", model=model, tokenizer=tokenizer)
723
+ ```"""
724
+ if model_kwargs is None:
725
+ model_kwargs = {}
726
+ # Make sure we only pass use_auth_token once as a kwarg (it used to be possible to pass it in model_kwargs,
727
+ # this is to keep BC).
728
+ use_auth_token = model_kwargs.pop("use_auth_token", None)
729
+ if use_auth_token is not None:
730
+ warnings.warn(
731
+ "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
732
+ FutureWarning,
733
+ )
734
+ if token is not None:
735
+ raise ValueError("`token` and `use_auth_token` are both specified. Please set only the argument `token`.")
736
+ token = use_auth_token
737
+
738
+ code_revision = kwargs.pop("code_revision", None)
739
+ commit_hash = kwargs.pop("_commit_hash", None)
740
+
741
+ hub_kwargs = {
742
+ "revision": revision,
743
+ "token": token,
744
+ "trust_remote_code": trust_remote_code,
745
+ "_commit_hash": commit_hash,
746
+ }
747
+
748
+ if task is None and model is None:
749
+ raise RuntimeError(
750
+ "Impossible to instantiate a pipeline without either a task or a model "
751
+ "being specified. "
752
+ "Please provide a task class or a model"
753
+ )
754
+
755
+ if model is None and tokenizer is not None:
756
+ raise RuntimeError(
757
+ "Impossible to instantiate a pipeline with tokenizer specified but not the model as the provided tokenizer"
758
+ " may not be compatible with the default model. Please provide a PreTrainedModel class or a"
759
+ " path/identifier to a pretrained model when providing tokenizer."
760
+ )
761
+ if model is None and feature_extractor is not None:
762
+ raise RuntimeError(
763
+ "Impossible to instantiate a pipeline with feature_extractor specified but not the model as the provided"
764
+ " feature_extractor may not be compatible with the default model. Please provide a PreTrainedModel class"
765
+ " or a path/identifier to a pretrained model when providing feature_extractor."
766
+ )
767
+ if isinstance(model, Path):
768
+ model = str(model)
769
+
770
+ if commit_hash is None:
771
+ pretrained_model_name_or_path = None
772
+ if isinstance(config, str):
773
+ pretrained_model_name_or_path = config
774
+ elif config is None and isinstance(model, str):
775
+ pretrained_model_name_or_path = model
776
+
777
+ if not isinstance(config, PretrainedConfig) and pretrained_model_name_or_path is not None:
778
+ # We make a call to the config file first (which may be absent) to get the commit hash as soon as possible
779
+ resolved_config_file = cached_file(
780
+ pretrained_model_name_or_path,
781
+ CONFIG_NAME,
782
+ _raise_exceptions_for_gated_repo=False,
783
+ _raise_exceptions_for_missing_entries=False,
784
+ _raise_exceptions_for_connection_errors=False,
785
+ cache_dir=model_kwargs.get("cache_dir"),
786
+ **hub_kwargs,
787
+ )
788
+ hub_kwargs["_commit_hash"] = extract_commit_hash(resolved_config_file, commit_hash)
789
+ else:
790
+ hub_kwargs["_commit_hash"] = getattr(config, "_commit_hash", None)
791
+
792
+ # Config is the primordial information item.
793
+ # Instantiate config if needed
794
+ if isinstance(config, str):
795
+ config = AutoConfig.from_pretrained(
796
+ config, _from_pipeline=task, code_revision=code_revision, **hub_kwargs, **model_kwargs
797
+ )
798
+ hub_kwargs["_commit_hash"] = config._commit_hash
799
+ elif config is None and isinstance(model, str):
800
+ # Check for an adapter file in the model path if PEFT is available
801
+ if is_peft_available():
802
+ # `find_adapter_config_file` doesn't accept `trust_remote_code`
803
+ _hub_kwargs = {k: v for k, v in hub_kwargs.items() if k != "trust_remote_code"}
804
+ maybe_adapter_path = find_adapter_config_file(
805
+ model,
806
+ token=hub_kwargs["token"],
807
+ revision=hub_kwargs["revision"],
808
+ _commit_hash=hub_kwargs["_commit_hash"],
809
+ )
810
+
811
+ if maybe_adapter_path is not None:
812
+ with open(maybe_adapter_path, "r", encoding="utf-8") as f:
813
+ adapter_config = json.load(f)
814
+ model = adapter_config["base_model_name_or_path"]
815
+
816
+ config = AutoConfig.from_pretrained(
817
+ model, _from_pipeline=task, code_revision=code_revision, **hub_kwargs, **model_kwargs
818
+ )
819
+ hub_kwargs["_commit_hash"] = config._commit_hash
820
+
821
+ custom_tasks = {}
822
+ if config is not None and len(getattr(config, "custom_pipelines", {})) > 0:
823
+ custom_tasks = config.custom_pipelines
824
+ if task is None and trust_remote_code is not False:
825
+ if len(custom_tasks) == 1:
826
+ task = list(custom_tasks.keys())[0]
827
+ else:
828
+ raise RuntimeError(
829
+ "We can't infer the task automatically for this model as there are multiple tasks available. Pick "
830
+ f"one in {', '.join(custom_tasks.keys())}"
831
+ )
832
+
833
+ if task is None and model is not None:
834
+ if not isinstance(model, str):
835
+ raise RuntimeError(
836
+ "Inferring the task automatically requires to check the hub with a model_id defined as a `str`. "
837
+ f"{model} is not a valid model_id."
838
+ )
839
+ task = get_task(model, token)
840
+
841
+ # Retrieve the task
842
+ if task in custom_tasks:
843
+ normalized_task = task
844
+ targeted_task, task_options = clean_custom_task(custom_tasks[task])
845
+ if pipeline_class is None:
846
+ if not trust_remote_code:
847
+ raise ValueError(
848
+ "Loading this pipeline requires you to execute the code in the pipeline file in that"
849
+ " repo on your local machine. Make sure you have read the code there to avoid malicious use, then"
850
+ " set the option `trust_remote_code=True` to remove this error."
851
+ )
852
+ class_ref = targeted_task["impl"]
853
+ pipeline_class = get_class_from_dynamic_module(
854
+ class_ref,
855
+ model,
856
+ code_revision=code_revision,
857
+ **hub_kwargs,
858
+ )
859
+ else:
860
+ normalized_task, targeted_task, task_options = check_task(task)
861
+ if pipeline_class is None:
862
+ pipeline_class = targeted_task["impl"]
863
+
864
+ # Use default model/config/tokenizer for the task if no model is provided
865
+ if model is None:
866
+ # At that point framework might still be undetermined
867
+ model, default_revision = get_default_model_and_revision(targeted_task, framework, task_options)
868
+ revision = revision if revision is not None else default_revision
869
+ logger.warning(
870
+ f"No model was supplied, defaulted to {model} and revision"
871
+ f" {revision} ({HUGGINGFACE_CO_RESOLVE_ENDPOINT}/{model}).\n"
872
+ "Using a pipeline without specifying a model name and revision in production is not recommended."
873
+ )
874
+ if config is None and isinstance(model, str):
875
+ config = AutoConfig.from_pretrained(model, _from_pipeline=task, **hub_kwargs, **model_kwargs)
876
+ hub_kwargs["_commit_hash"] = config._commit_hash
877
+
878
+ if device_map is not None:
879
+ if "device_map" in model_kwargs:
880
+ raise ValueError(
881
+ 'You cannot use both `pipeline(... device_map=..., model_kwargs={"device_map":...})` as those'
882
+ " arguments might conflict, use only one.)"
883
+ )
884
+ if device is not None:
885
+ logger.warning(
886
+ "Both `device` and `device_map` are specified. `device` will override `device_map`. You"
887
+ " will most likely encounter unexpected behavior. Please remove `device` and keep `device_map`."
888
+ )
889
+ model_kwargs["device_map"] = device_map
890
+ if torch_dtype is not None:
891
+ if "torch_dtype" in model_kwargs:
892
+ raise ValueError(
893
+ 'You cannot use both `pipeline(... torch_dtype=..., model_kwargs={"torch_dtype":...})` as those'
894
+ " arguments might conflict, use only one.)"
895
+ )
896
+ if isinstance(torch_dtype, str) and hasattr(torch, torch_dtype):
897
+ torch_dtype = getattr(torch, torch_dtype)
898
+ model_kwargs["torch_dtype"] = torch_dtype
899
+
900
+ model_name = model if isinstance(model, str) else None
901
+
902
+ # Load the correct model if possible
903
+ # Infer the framework from the model if not already defined
904
+ if isinstance(model, str) or framework is None:
905
+ model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]}
906
+ framework, model = infer_framework_load_model(
907
+ model,
908
+ model_classes=model_classes,
909
+ config=config,
910
+ framework=framework,
911
+ task=task,
912
+ **hub_kwargs,
913
+ **model_kwargs,
914
+ )
915
+
916
+ model_config = model.config
917
+ hub_kwargs["_commit_hash"] = model.config._commit_hash
918
+ load_tokenizer = type(model_config) in TOKENIZER_MAPPING or model_config.tokenizer_class is not None
919
+ load_feature_extractor = type(model_config) in FEATURE_EXTRACTOR_MAPPING or feature_extractor is not None
920
+ load_image_processor = type(model_config) in IMAGE_PROCESSOR_MAPPING or image_processor is not None
921
+
922
+ # If `model` (instance of `PretrainedModel` instead of `str`) is passed (and/or same for config), while
923
+ # `image_processor` or `feature_extractor` is `None`, the loading will fail. This happens particularly for some
924
+ # vision tasks when calling `pipeline()` with `model` and only one of the `image_processor` and `feature_extractor`.
925
+ # TODO: we need to make `NO_IMAGE_PROCESSOR_TASKS` and `NO_FEATURE_EXTRACTOR_TASKS` more robust to avoid such issue.
926
+ # This block is only temporarily to make CI green.
927
+ if load_image_processor and load_feature_extractor:
928
+ load_feature_extractor = False
929
+
930
+ if (
931
+ tokenizer is None
932
+ and not load_tokenizer
933
+ and normalized_task not in NO_TOKENIZER_TASKS
934
+ # Using class name to avoid importing the real class.
935
+ and (
936
+ model_config.__class__.__name__ in MULTI_MODEL_AUDIO_CONFIGS
937
+ or model_config.__class__.__name__ in MULTI_MODEL_VISION_CONFIGS
938
+ )
939
+ ):
940
+ # This is a special category of models, that are fusions of multiple models
941
+ # so the model_config might not define a tokenizer, but it seems to be
942
+ # necessary for the task, so we're force-trying to load it.
943
+ load_tokenizer = True
944
+ if (
945
+ image_processor is None
946
+ and not load_image_processor
947
+ and normalized_task not in NO_IMAGE_PROCESSOR_TASKS
948
+ # Using class name to avoid importing the real class.
949
+ and model_config.__class__.__name__ in MULTI_MODEL_VISION_CONFIGS
950
+ ):
951
+ # This is a special category of models, that are fusions of multiple models
952
+ # so the model_config might not define an image processor, but it seems to be
953
+ # necessary for the task, so we're force-trying to load it.
954
+ load_image_processor = True
955
+ if (
956
+ feature_extractor is None
957
+ and not load_feature_extractor
958
+ and normalized_task not in NO_FEATURE_EXTRACTOR_TASKS
959
+ # Using class name to avoid importing the real class.
960
+ and model_config.__class__.__name__ in MULTI_MODEL_AUDIO_CONFIGS
961
+ ):
962
+ # This is a special category of models, that are fusions of multiple models
963
+ # so the model_config might not define a feature extractor, but it seems to be
964
+ # necessary for the task, so we're force-trying to load it.
965
+ load_feature_extractor = True
966
+
967
+ if task in NO_TOKENIZER_TASKS:
968
+ # These will never require a tokenizer.
969
+ # the model on the other hand might have a tokenizer, but
970
+ # the files could be missing from the Hub; instead of failing
971
+ # on such repos, we simply skip loading it.
972
+ load_tokenizer = False
973
+
974
+ if task in NO_FEATURE_EXTRACTOR_TASKS:
975
+ load_feature_extractor = False
976
+ if task in NO_IMAGE_PROCESSOR_TASKS:
977
+ load_image_processor = False
978
+
979
+ if load_tokenizer:
980
+ # Try to infer tokenizer from model or config name (if provided as str)
981
+ if tokenizer is None:
982
+ if isinstance(model_name, str):
983
+ tokenizer = model_name
984
+ elif isinstance(config, str):
985
+ tokenizer = config
986
+ else:
987
+ # Impossible to guess what is the right tokenizer here
988
+ raise Exception(
989
+ "Impossible to guess which tokenizer to use. "
990
+ "Please provide a PreTrainedTokenizer class or a path/identifier to a pretrained tokenizer."
991
+ )
992
+
993
+ # Instantiate tokenizer if needed
994
+ if isinstance(tokenizer, (str, tuple)):
995
+ if isinstance(tokenizer, tuple):
996
+ # For tuple we have (tokenizer name, {kwargs})
997
+ use_fast = tokenizer[1].pop("use_fast", use_fast)
998
+ tokenizer_identifier = tokenizer[0]
999
+ tokenizer_kwargs = tokenizer[1]
1000
+ else:
1001
+ tokenizer_identifier = tokenizer
1002
+ tokenizer_kwargs = model_kwargs.copy()
1003
+ tokenizer_kwargs.pop("torch_dtype", None)
1004
+
1005
+ tokenizer = AutoTokenizer.from_pretrained(
1006
+ tokenizer_identifier, use_fast=use_fast, _from_pipeline=task, **hub_kwargs, **tokenizer_kwargs
1007
+ )
1008
+
1009
+ if load_image_processor:
1010
+ # Try to infer image processor from model or config name (if provided as str)
1011
+ if image_processor is None:
1012
+ if isinstance(model_name, str):
1013
+ image_processor = model_name
1014
+ elif isinstance(config, str):
1015
+ image_processor = config
1016
+ # Backward compatibility, as `feature_extractor` used to be the name
1017
+ # for `ImageProcessor`.
1018
+ elif feature_extractor is not None and isinstance(feature_extractor, BaseImageProcessor):
1019
+ image_processor = feature_extractor
1020
+ else:
1021
+ # Impossible to guess what is the right image_processor here
1022
+ raise Exception(
1023
+ "Impossible to guess which image processor to use. "
1024
+ "Please provide a PreTrainedImageProcessor class or a path/identifier "
1025
+ "to a pretrained image processor."
1026
+ )
1027
+
1028
+ # Instantiate image_processor if needed
1029
+ if isinstance(image_processor, (str, tuple)):
1030
+ image_processor = AutoImageProcessor.from_pretrained(
1031
+ image_processor, _from_pipeline=task, **hub_kwargs, **model_kwargs
1032
+ )
1033
+
1034
+ if load_feature_extractor:
1035
+ # Try to infer feature extractor from model or config name (if provided as str)
1036
+ if feature_extractor is None:
1037
+ if isinstance(model_name, str):
1038
+ feature_extractor = model_name
1039
+ elif isinstance(config, str):
1040
+ feature_extractor = config
1041
+ else:
1042
+ # Impossible to guess what is the right feature_extractor here
1043
+ raise Exception(
1044
+ "Impossible to guess which feature extractor to use. "
1045
+ "Please provide a PreTrainedFeatureExtractor class or a path/identifier "
1046
+ "to a pretrained feature extractor."
1047
+ )
1048
+
1049
+ # Instantiate feature_extractor if needed
1050
+ if isinstance(feature_extractor, (str, tuple)):
1051
+ feature_extractor = AutoFeatureExtractor.from_pretrained(
1052
+ feature_extractor, _from_pipeline=task, **hub_kwargs, **model_kwargs
1053
+ )
1054
+
1055
+ if (
1056
+ feature_extractor._processor_class
1057
+ and feature_extractor._processor_class.endswith("WithLM")
1058
+ and isinstance(model_name, str)
1059
+ ):
1060
+ try:
1061
+ import kenlm # to trigger `ImportError` if not installed
1062
+ from pyctcdecode import BeamSearchDecoderCTC
1063
+
1064
+ if os.path.isdir(model_name) or os.path.isfile(model_name):
1065
+ decoder = BeamSearchDecoderCTC.load_from_dir(model_name)
1066
+ else:
1067
+ language_model_glob = os.path.join(
1068
+ BeamSearchDecoderCTC._LANGUAGE_MODEL_SERIALIZED_DIRECTORY, "*"
1069
+ )
1070
+ alphabet_filename = BeamSearchDecoderCTC._ALPHABET_SERIALIZED_FILENAME
1071
+ allow_patterns = [language_model_glob, alphabet_filename]
1072
+ decoder = BeamSearchDecoderCTC.load_from_hf_hub(model_name, allow_patterns=allow_patterns)
1073
+
1074
+ kwargs["decoder"] = decoder
1075
+ except ImportError as e:
1076
+ logger.warning(f"Could not load the `decoder` for {model_name}. Defaulting to raw CTC. Error: {e}")
1077
+ if not is_kenlm_available():
1078
+ logger.warning("Try to install `kenlm`: `pip install kenlm")
1079
+
1080
+ if not is_pyctcdecode_available():
1081
+ logger.warning("Try to install `pyctcdecode`: `pip install pyctcdecode")
1082
+
1083
+ if task == "translation" and model.config.task_specific_params:
1084
+ for key in model.config.task_specific_params:
1085
+ if key.startswith("translation"):
1086
+ task = key
1087
+ warnings.warn(
1088
+ f'"translation" task was used, instead of "translation_XX_to_YY", defaulting to "{task}"',
1089
+ UserWarning,
1090
+ )
1091
+ break
1092
+
1093
+ if tokenizer is not None:
1094
+ kwargs["tokenizer"] = tokenizer
1095
+
1096
+ if feature_extractor is not None:
1097
+ kwargs["feature_extractor"] = feature_extractor
1098
+
1099
+ if torch_dtype is not None:
1100
+ kwargs["torch_dtype"] = torch_dtype
1101
+
1102
+ if image_processor is not None:
1103
+ kwargs["image_processor"] = image_processor
1104
+
1105
+ if device is not None:
1106
+ kwargs["device"] = device
1107
+
1108
+ return pipeline_class(model=model, framework=framework, task=task, **kwargs)
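With the factory complete, here is a minimal end-to-end usage sketch; the checkpoint, revision and device below are example values, not defaults of the function.

```python
from transformers import pipeline

# Pinning the checkpoint and revision explicitly avoids the "No model was supplied" warning path.
classifier = pipeline(
    task="text-classification",
    model="distilbert/distilbert-base-uncased-finetuned-sst-2-english",
    revision="main",  # any branch, tag or commit id accepted by the Hub
    device=0,         # or device_map="auto" (with accelerate installed), but not both
)
print(classifier("This library is easy to use."))
```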
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/audio_classification.cpython-310.pyc ADDED
Binary file (7.55 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/audio_utils.cpython-310.pyc ADDED
Binary file (7.33 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/base.cpython-310.pyc ADDED
Binary file (45.4 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/conversational.cpython-310.pyc ADDED
Binary file (12.7 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/depth_estimation.cpython-310.pyc ADDED
Binary file (5.05 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/document_question_answering.cpython-310.pyc ADDED
Binary file (17 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_classification.cpython-310.pyc ADDED
Binary file (8.42 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_feature_extraction.cpython-310.pyc ADDED
Binary file (4.7 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_segmentation.cpython-310.pyc ADDED
Binary file (7.84 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_to_image.cpython-310.pyc ADDED
Binary file (4.78 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/image_to_text.cpython-310.pyc ADDED
Binary file (5.97 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/mask_generation.cpython-310.pyc ADDED
Binary file (10.7 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/object_detection.cpython-310.pyc ADDED
Binary file (7.92 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/pt_utils.cpython-310.pyc ADDED
Binary file (9.39 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/question_answering.cpython-310.pyc ADDED
Binary file (20.7 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/table_question_answering.cpython-310.pyc ADDED
Binary file (14.9 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/text2text_generation.cpython-310.pyc ADDED
Binary file (16.2 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/text_classification.cpython-310.pyc ADDED
Binary file (8.88 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/text_to_audio.cpython-310.pyc ADDED
Binary file (6.48 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/token_classification.cpython-310.pyc ADDED
Binary file (19.9 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/visual_question_answering.cpython-310.pyc ADDED
Binary file (6.86 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/zero_shot_audio_classification.cpython-310.pyc ADDED
Binary file (6.36 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/zero_shot_classification.cpython-310.pyc ADDED
Binary file (11.3 kB)
llmeval-env/lib/python3.10/site-packages/transformers/pipelines/__pycache__/zero_shot_image_classification.cpython-310.pyc ADDED
Binary file (7.03 kB)