applied-ai-018 committed on
Commit a2ea2d4 · verified · 1 Parent(s): 270d531

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. llmeval-env/lib/python3.10/site-packages/torch/cuda/__init__.py +1412 -0
  2. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/__init__.cpython-310.pyc +0 -0
  3. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/_memory_viz.cpython-310.pyc +0 -0
  4. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/_sanitizer.cpython-310.pyc +0 -0
  5. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/_utils.cpython-310.pyc +0 -0
  6. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/comm.cpython-310.pyc +0 -0
  7. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/error.cpython-310.pyc +0 -0
  8. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/graphs.cpython-310.pyc +0 -0
  9. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/jiterator.cpython-310.pyc +0 -0
  10. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/memory.cpython-310.pyc +0 -0
  11. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/nccl.cpython-310.pyc +0 -0
  12. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/nvtx.cpython-310.pyc +0 -0
  13. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/profiler.cpython-310.pyc +0 -0
  14. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/random.cpython-310.pyc +0 -0
  15. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/sparse.cpython-310.pyc +0 -0
  16. llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/streams.cpython-310.pyc +0 -0
  17. llmeval-env/lib/python3.10/site-packages/torch/cuda/_memory_viz.py +626 -0
  18. llmeval-env/lib/python3.10/site-packages/torch/cuda/_sanitizer.py +622 -0
  19. llmeval-env/lib/python3.10/site-packages/torch/cuda/_utils.py +38 -0
  20. llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/__init__.py +11 -0
  21. llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/__pycache__/__init__.cpython-310.pyc +0 -0
  22. llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/__pycache__/autocast_mode.cpython-310.pyc +0 -0
  23. llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/autocast_mode.py +144 -0
  24. llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/common.py +9 -0
  25. llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py +28 -0
  26. llmeval-env/lib/python3.10/site-packages/torch/cuda/comm.py +18 -0
  27. llmeval-env/lib/python3.10/site-packages/torch/cuda/error.py +0 -0
  28. llmeval-env/lib/python3.10/site-packages/torch/cuda/graphs.py +479 -0
  29. llmeval-env/lib/python3.10/site-packages/torch/cuda/jiterator.py +185 -0
  30. llmeval-env/lib/python3.10/site-packages/torch/cuda/memory.py +914 -0
  31. llmeval-env/lib/python3.10/site-packages/torch/cuda/nccl.py +137 -0
  32. llmeval-env/lib/python3.10/site-packages/torch/cuda/nvtx.py +91 -0
  33. llmeval-env/lib/python3.10/site-packages/torch/cuda/profiler.py +61 -0
  34. llmeval-env/lib/python3.10/site-packages/torch/cuda/random.py +179 -0
  35. llmeval-env/lib/python3.10/site-packages/torch/cuda/sparse.py +1 -0
  36. llmeval-env/lib/python3.10/site-packages/torch/cuda/streams.py +241 -0
  37. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/__init__.cpython-310.pyc +0 -0
  38. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/_compatibility.cpython-310.pyc +0 -0
  39. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/_lazy_graph_module.cpython-310.pyc +0 -0
  40. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/_pytree.cpython-310.pyc +0 -0
  41. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/_symbolic_trace.cpython-310.pyc +0 -0
  42. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/annotate.cpython-310.pyc +0 -0
  43. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/config.cpython-310.pyc +0 -0
  44. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/graph.cpython-310.pyc +0 -0
  45. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/graph_module.cpython-310.pyc +0 -0
  46. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/immutable_collections.cpython-310.pyc +0 -0
  47. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/interpreter.cpython-310.pyc +0 -0
  48. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/node.cpython-310.pyc +0 -0
  49. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/operator_schemas.cpython-310.pyc +0 -0
  50. llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/proxy.cpython-310.pyc +0 -0
llmeval-env/lib/python3.10/site-packages/torch/cuda/__init__.py ADDED
@@ -0,0 +1,1412 @@
1
+ r"""
2
+ This package adds support for CUDA tensor types.
3
+
4
+ It implements the same function as CPU tensors, but they utilize
5
+ GPUs for computation.
6
+
7
+ It is lazily initialized, so you can always import it, and use
8
+ :func:`is_available()` to determine if your system supports CUDA.
9
+
10
+ :ref:`cuda-semantics` has more details about working with CUDA.
11
+ """
12
+
13
+
14
+ import contextlib
15
+ import importlib
16
+ import os
17
+ import sys
18
+ import threading
19
+ import traceback
20
+ import warnings
21
+ from functools import lru_cache
22
+ from typing import Any, Callable, cast, List, Optional, Tuple, Union
23
+
24
+ import torch
25
+ import torch._C
26
+ from torch.types import Device
27
+ from .. import device as _device
28
+ from .._utils import _dummy_type, _LazySeedTracker, classproperty
29
+ from ._utils import _get_device_index
30
+ from .graphs import (
31
+ CUDAGraph,
32
+ graph,
33
+ graph_pool_handle,
34
+ is_current_stream_capturing,
35
+ make_graphed_callables,
36
+ )
37
+ from .streams import Event, ExternalStream, Stream
38
+
39
+ try:
40
+ from torch._C import _cudart # type: ignore[attr-defined]
41
+ except ImportError:
42
+ _cudart = None
43
+
44
+ _initialized = False
45
+ _tls = threading.local()
46
+ _initialization_lock = threading.Lock()
47
+ _queued_calls: List[
48
+ Tuple[Callable[[], None], List[str]]
49
+ ] = [] # don't invoke these until initialization occurs
50
+ _is_in_bad_fork = getattr(torch._C, "_cuda_isInBadFork", lambda: False)
51
+ _device_t = Union[_device, str, int, None]
52
+
53
+ _HAS_PYNVML = False
54
+ _PYNVML_ERR = None
55
+ try:
56
+ import pynvml # type: ignore[import]
57
+
58
+ _HAS_PYNVML = True
59
+ except ImportError as err:
60
+ _PYNVML_ERR = err # sometimes a lib is installed but the import fails for some other reason, so we log the error for later
61
+
62
+ _lazy_seed_tracker = _LazySeedTracker()
63
+
64
+ # Define dummy _CudaDeviceProperties type if PyTorch was compiled without CUDA
65
+ if hasattr(torch._C, "_CudaDeviceProperties"):
66
+ _CudaDeviceProperties = torch._C._CudaDeviceProperties
67
+ else:
68
+ _CudaDeviceProperties = _dummy_type("_CudaDeviceProperties") # type: ignore[assignment, misc]
69
+
70
+ if hasattr(torch._C, "_cuda_exchangeDevice"):
71
+ _exchange_device = torch._C._cuda_exchangeDevice
72
+ else:
73
+
74
+ def _exchange_device(device: int) -> int:
75
+ if device < 0:
76
+ return -1
77
+ raise RuntimeError("PyTorch was compiled without CUDA support")
78
+
79
+
80
+ if hasattr(torch._C, "_cuda_maybeExchangeDevice"):
81
+ _maybe_exchange_device = torch._C._cuda_maybeExchangeDevice
82
+ else:
83
+
84
+ def _maybe_exchange_device(device: int) -> int:
85
+ if device < 0:
86
+ return -1
87
+ raise RuntimeError("PyTorch was compiled without CUDA support")
88
+
89
+
90
+ has_half: bool = True
91
+ has_magma: bool = torch._C._has_magma
92
+
93
+ default_generators: Tuple[torch._C.Generator] = () # type: ignore[assignment]
94
+
95
+
96
+ def _is_compiled() -> bool:
97
+ r"""Return true if compile with CUDA support."""
98
+ return hasattr(torch._C, "_cuda_getDeviceCount")
99
+
100
+
101
+ def _nvml_based_avail() -> bool:
102
+ return os.getenv("PYTORCH_NVML_BASED_CUDA_CHECK") == "1"
103
+
104
+
105
+ def is_available() -> bool:
106
+ r"""Return a bool indicating if CUDA is currently available."""
107
+ if not _is_compiled():
108
+ return False
109
+ if _nvml_based_avail():
110
+ # The user has set an env variable to request this availability check that attempts to avoid fork poisoning by
111
+ # using NVML at the cost of a weaker CUDA availability assessment. Note that if NVML discovery/initialization
112
+ # fails, this assessment falls back to the default CUDA Runtime API assessment (`cudaGetDeviceCount`)
113
+ return device_count() > 0
114
+ else:
115
+ # The default availability inspection never throws and returns 0 if the driver is missing or can't
116
+ # be initialized. This uses the CUDA Runtime API `cudaGetDeviceCount` which in turn initializes the CUDA Driver
117
+ # API via `cuInit`
118
+ return torch._C._cuda_getDeviceCount() > 0
119
+
120
+
121
+ def is_bf16_supported():
122
+ r"""Return a bool indicating if the current CUDA/ROCm device supports dtype bfloat16."""
123
+ # Check for ROCm, if true return true, no ROCM_VERSION check required,
124
+ # since it is supported on AMD GPU archs.
125
+ if torch.version.hip:
126
+ return True
127
+
128
+ device = torch.cuda.current_device()
129
+
130
+ # Check for CUDA version and device compute capability.
131
+ # This is a fast way to check for it.
132
+ cuda_version = torch.version.cuda
133
+ if (
134
+ cuda_version is not None
135
+ and int(cuda_version.split(".")[0]) >= 11
136
+ and torch.cuda.get_device_properties(device).major >= 8
137
+ ):
138
+ return True
139
+
140
+ # Finally try to create a bfloat16 device.
141
+ return _check_bf16_tensor_supported(device)
142
+
143
+
144
+ @lru_cache(maxsize=16)
145
+ def _check_bf16_tensor_supported(device: _device_t):
146
+ try:
147
+ torch.tensor([1.0], dtype=torch.bfloat16, device=device)
148
+ return True
149
+ except Exception:
150
+ return False
151
+
152
+
153
+ def _sleep(cycles):
154
+ torch._C._cuda_sleep(cycles)
155
+
156
+
157
+ def _check_capability():
158
+ incorrect_binary_warn = """
159
+ Found GPU%d %s which requires CUDA_VERSION >= %d to
160
+ work properly, but your PyTorch was compiled
161
+ with CUDA_VERSION %d. Please install the correct PyTorch binary
162
+ using instructions from https://pytorch.org
163
+ """
164
+
165
+ old_gpu_warn = """
166
+ Found GPU%d %s which is of cuda capability %d.%d.
167
+ PyTorch no longer supports this GPU because it is too old.
168
+ The minimum cuda capability supported by this library is %d.%d.
169
+ """
170
+
171
+ if torch.version.cuda is not None: # on ROCm we don't want this check
172
+ CUDA_VERSION = torch._C._cuda_getCompiledVersion()
173
+ for d in range(device_count()):
174
+ capability = get_device_capability(d)
175
+ major = capability[0]
176
+ minor = capability[1]
177
+ name = get_device_name(d)
178
+ current_arch = major * 10 + minor
179
+ min_arch = min(
180
+ (int(arch.split("_")[1]) for arch in torch.cuda.get_arch_list()),
181
+ default=35,
182
+ )
183
+ if current_arch < min_arch:
184
+ warnings.warn(
185
+ old_gpu_warn
186
+ % (d, name, major, minor, min_arch // 10, min_arch % 10)
187
+ )
188
+
189
+
190
+ def _check_cubins():
191
+ incompatible_device_warn = """
192
+ {} with CUDA capability sm_{} is not compatible with the current PyTorch installation.
193
+ The current PyTorch install supports CUDA capabilities {}.
194
+ If you want to use the {} GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
195
+ """
196
+ if torch.version.cuda is None: # on ROCm we don't want this check
197
+ return
198
+ arch_list = get_arch_list()
199
+ if len(arch_list) == 0:
200
+ return
201
+ supported_sm = [int(arch.split("_")[1]) for arch in arch_list if "sm_" in arch]
202
+ for idx in range(device_count()):
203
+ cap_major, cap_minor = get_device_capability(idx)
204
+ # NVIDIA GPU compute architectures are backward compatible within major version
205
+ supported = any(sm // 10 == cap_major for sm in supported_sm)
206
+ if not supported:
207
+ device_name = get_device_name(idx)
208
+ capability = cap_major * 10 + cap_minor
209
+ warnings.warn(
210
+ incompatible_device_warn.format(
211
+ device_name, capability, " ".join(arch_list), device_name
212
+ )
213
+ )
214
+
215
+
216
+ def is_initialized():
217
+ r"""Return whether PyTorch's CUDA state has been initialized."""
218
+ return _initialized and not _is_in_bad_fork()
219
+
220
+
221
+ def _lazy_call(callable, **kwargs):
222
+ if is_initialized():
223
+ callable()
224
+ else:
225
+ # TODO(torch_deploy): this accesses linecache, which attempts to read the
226
+ # file system to get traceback info. Patch linecache or do something
227
+ # else here if this ends up being important.
228
+ global _lazy_seed_tracker
229
+ if kwargs.get("seed_all", False):
230
+ _lazy_seed_tracker.queue_seed_all(callable, traceback.format_stack())
231
+ elif kwargs.get("seed", False):
232
+ _lazy_seed_tracker.queue_seed(callable, traceback.format_stack())
233
+ else:
234
+ # Don't store the actual traceback to avoid memory cycle
235
+ _queued_calls.append((callable, traceback.format_stack()))
236
+
237
+
238
+ _lazy_call(_check_capability)
239
+ _lazy_call(_check_cubins)
240
+
241
+
242
+ class DeferredCudaCallError(Exception):
243
+ pass
244
+
245
+
246
+ OutOfMemoryError = torch._C._OutOfMemoryError
247
+
248
+
249
+ def init():
250
+ r"""Initialize PyTorch's CUDA state.
251
+
252
+ You may need to call this explicitly if you are interacting with
253
+ PyTorch via its C API, as Python bindings for CUDA functionality
254
+ will not be available until this initialization takes place.
255
+ Ordinary users should not need this, as all of PyTorch's CUDA methods
256
+ automatically initialize CUDA state on-demand.
257
+
258
+ Does nothing if the CUDA state is already initialized.
259
+ """
260
+ _lazy_init()
261
+
262
+
263
+ def _lazy_init():
264
+ global _initialized, _queued_calls
265
+ if is_initialized() or hasattr(_tls, "is_initializing"):
266
+ return
267
+ with _initialization_lock:
268
+ # We be double-checked locking, boys! This is OK because
269
+ # the above test was GIL protected anyway. The inner test
270
+ # is for when a thread blocked on some other thread which was
271
+ # doing the initialization; when they get the lock, they will
272
+ # find there is nothing left to do.
273
+ if is_initialized():
274
+ return
275
+ # It is important to prevent other threads from entering _lazy_init
276
+ # immediately, while we are still guaranteed to have the GIL, because some
277
+ # of the C calls we make below will release the GIL
278
+ if _is_in_bad_fork():
279
+ raise RuntimeError(
280
+ "Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
281
+ "multiprocessing, you must use the 'spawn' start method"
282
+ )
283
+ if not hasattr(torch._C, "_cuda_getDeviceCount"):
284
+ raise AssertionError("Torch not compiled with CUDA enabled")
285
+ if _cudart is None:
286
+ raise AssertionError(
287
+ "libcudart functions unavailable. It looks like you have a broken build?"
288
+ )
289
+ # This function throws if there's a driver initialization error, no GPUs
290
+ # are found or any other error occurs
291
+ if "CUDA_MODULE_LOADING" not in os.environ:
292
+ os.environ["CUDA_MODULE_LOADING"] = "LAZY"
293
+ torch._C._cuda_init()
294
+ # Some of the queued calls may reentrantly call _lazy_init();
295
+ # we need to just return without initializing in that case.
296
+ # However, we must not let any *other* threads in!
297
+ _tls.is_initializing = True
298
+
299
+ for calls in _lazy_seed_tracker.get_calls():
300
+ if calls:
301
+ _queued_calls.append(calls)
302
+
303
+ try:
304
+ for queued_call, orig_traceback in _queued_calls:
305
+ try:
306
+ queued_call()
307
+ except Exception as e:
308
+ msg = (
309
+ f"CUDA call failed lazily at initialization with error: {str(e)}\n\n"
310
+ f"CUDA call was originally invoked at:\n\n{''.join(orig_traceback)}"
311
+ )
312
+ raise DeferredCudaCallError(msg) from e
313
+ finally:
314
+ delattr(_tls, "is_initializing")
315
+ _initialized = True
316
+
317
+
318
+ def cudart():
319
+ _lazy_init()
320
+ return _cudart
321
+
322
+
323
+ class cudaStatus:
324
+ SUCCESS: int = 0
325
+ ERROR_NOT_READY: int = 34
326
+
327
+
328
+ class CudaError(RuntimeError):
329
+ def __init__(self, code: int) -> None:
330
+ msg = _cudart.cudaGetErrorString(_cudart.cudaError(code))
331
+ super().__init__(f"{msg} ({code})")
332
+
333
+
334
+ def check_error(res: int) -> None:
335
+ if res != _cudart.cudaError.success:
336
+ raise CudaError(res)
337
+
338
+
339
+ class _DeviceGuard:
340
+ def __init__(self, index: int):
341
+ self.idx = index
342
+ self.prev_idx = -1
343
+
344
+ def __enter__(self):
345
+ self.prev_idx = torch.cuda._exchange_device(self.idx)
346
+
347
+ def __exit__(self, type: Any, value: Any, traceback: Any):
348
+ self.idx = torch.cuda._maybe_exchange_device(self.prev_idx)
349
+ return False
350
+
351
+
352
+ class device:
353
+ r"""Context-manager that changes the selected device.
354
+
355
+ Args:
356
+ device (torch.device or int): device index to select. It's a no-op if
357
+ this argument is a negative integer or ``None``.
358
+ """
359
+
360
+ def __init__(self, device: Any):
361
+ self.idx = _get_device_index(device, optional=True)
362
+ self.prev_idx = -1
363
+
364
+ def __enter__(self):
365
+ self.prev_idx = torch.cuda._exchange_device(self.idx)
366
+
367
+ def __exit__(self, type: Any, value: Any, traceback: Any):
368
+ self.idx = torch.cuda._maybe_exchange_device(self.prev_idx)
369
+ return False
370
+
371
+
372
+ class device_of(device):
373
+ r"""Context-manager that changes the current device to that of given object.
374
+
375
+ You can use both tensors and storages as arguments. If a given object is
376
+ not allocated on a GPU, this is a no-op.
377
+
378
+ Args:
379
+ obj (Tensor or Storage): object allocated on the selected device.
380
+ """
381
+
382
+ def __init__(self, obj):
383
+ idx = obj.get_device() if obj.is_cuda else -1
384
+ super().__init__(idx)
385
+
386
+
387
+ def set_device(device: _device_t) -> None:
388
+ r"""Set the current device.
389
+
390
+ Usage of this function is discouraged in favor of :any:`device`. In most
391
+ cases it's better to use ``CUDA_VISIBLE_DEVICES`` environmental variable.
392
+
393
+ Args:
394
+ device (torch.device or int): selected device. This function is a no-op
395
+ if this argument is negative.
396
+ """
397
+ device = _get_device_index(device)
398
+ if device >= 0:
399
+ torch._C._cuda_setDevice(device)
400
+
401
+
402
+ def get_device_name(device: Optional[_device_t] = None) -> str:
403
+ r"""Get the name of a device.
404
+
405
+ Args:
406
+ device (torch.device or int, optional): device for which to return the
407
+ name. This function is a no-op if this argument is a negative
408
+ integer. It uses the current device, given by :func:`~torch.cuda.current_device`,
409
+ if :attr:`device` is ``None`` (default).
410
+
411
+ Returns:
412
+ str: the name of the device
413
+ """
414
+ return get_device_properties(device).name
415
+
416
+
417
+ def get_device_capability(device: Optional[_device_t] = None) -> Tuple[int, int]:
418
+ r"""Get the cuda capability of a device.
419
+
420
+ Args:
421
+ device (torch.device or int, optional): device for which to return the
422
+ device capability. This function is a no-op if this argument is
423
+ a negative integer. It uses the current device, given by
424
+ :func:`~torch.cuda.current_device`, if :attr:`device` is ``None``
425
+ (default).
426
+
427
+ Returns:
428
+ tuple(int, int): the major and minor cuda capability of the device
429
+ """
430
+ prop = get_device_properties(device)
431
+ return prop.major, prop.minor
432
+
433
+
434
+ def get_device_properties(device: _device_t) -> _CudaDeviceProperties:
435
+ r"""Get the properties of a device.
436
+
437
+ Args:
438
+ device (torch.device or int or str): device for which to return the
439
+ properties of the device.
440
+
441
+ Returns:
442
+ _CudaDeviceProperties: the properties of the device
443
+ """
444
+ _lazy_init() # will define _get_device_properties
445
+ device = _get_device_index(device, optional=True)
446
+ if device < 0 or device >= device_count():
447
+ raise AssertionError("Invalid device id")
448
+ return _get_device_properties(device) # type: ignore[name-defined]
449
+
450
+
451
+ def can_device_access_peer(device: _device_t, peer_device: _device_t) -> bool:
452
+ r"""Check if peer access between two devices is possible."""
453
+ _lazy_init()
454
+ device = _get_device_index(device, optional=True)
455
+ peer_device = _get_device_index(peer_device)
456
+ if device < 0 or device >= device_count():
457
+ raise AssertionError("Invalid device id")
458
+ if peer_device < 0 or peer_device >= device_count():
459
+ raise AssertionError("Invalid peer device id")
460
+ return torch._C._cuda_canDeviceAccessPeer(device, peer_device)
461
+
462
+
463
+ class StreamContext:
464
+ r"""Context-manager that selects a given stream.
465
+
466
+ All CUDA kernels queued within its context will be enqueued on a selected
467
+ stream.
468
+
469
+ Args:
470
+ Stream (Stream): selected stream. This manager is a no-op if it's
471
+ ``None``.
472
+ .. note:: Streams are per-device.
473
+ """
474
+ cur_stream: Optional["torch.cuda.Stream"]
475
+
476
+ def __init__(self, stream: Optional["torch.cuda.Stream"]):
477
+ self.stream = stream
478
+ self.idx = _get_device_index(None, True)
479
+ if not torch.jit.is_scripting():
480
+ if self.idx is None:
481
+ self.idx = -1
482
+
483
+ self.src_prev_stream = (
484
+ None if not torch.jit.is_scripting() else torch.cuda.default_stream(None)
485
+ )
486
+ self.dst_prev_stream = (
487
+ None if not torch.jit.is_scripting() else torch.cuda.default_stream(None)
488
+ )
489
+
490
+ def __enter__(self):
491
+ # Local cur_stream variable for type refinement
492
+ cur_stream = self.stream
493
+ # Return if stream is None or CUDA device not available
494
+ if cur_stream is None or self.idx == -1:
495
+ return
496
+ self.src_prev_stream = torch.cuda.current_stream(None)
497
+
498
+ # If the stream is not on the current device, then
499
+ # set the current stream on the device
500
+ if self.src_prev_stream.device != cur_stream.device:
501
+ with device(cur_stream.device):
502
+ self.dst_prev_stream = torch.cuda.current_stream(cur_stream.device)
503
+ torch.cuda.set_stream(cur_stream)
504
+
505
+ def __exit__(self, type: Any, value: Any, traceback: Any):
506
+ # Local cur_stream variable for type refinement
507
+ cur_stream = self.stream
508
+ # If stream is None or no CUDA device available, return
509
+ if cur_stream is None or self.idx == -1:
510
+ return
511
+
512
+ # Reset the stream on the original device
513
+ # and destination device
514
+ if self.src_prev_stream.device != cur_stream.device: # type: ignore[union-attr]
515
+ torch.cuda.set_stream(self.dst_prev_stream) # type: ignore[arg-type]
516
+ torch.cuda.set_stream(self.src_prev_stream) # type: ignore[arg-type]
517
+
518
+
519
+ def stream(stream: Optional["torch.cuda.Stream"]) -> StreamContext:
520
+ r"""Wrap around the Context-manager StreamContext that selects a given stream.
521
+
522
+ Arguments:
523
+ stream (Stream): selected stream. This manager is a no-op if it's
524
+ ``None``.
525
+ ..Note:: In eager mode stream is of type Stream class while in JIT it is
526
+ an object of the custom class ``torch.classes.cuda.Stream``.
527
+ """
528
+ return StreamContext(stream)
529
+
530
+
531
+ def _set_stream_by_id(stream_id, device_index, device_type):
532
+ r"""set stream specified by the stream id, device index and
533
+ device type
534
+
535
+ Args: stream_id (int): stream id in stream pool
536
+ device_index (int): device index in topo
537
+ device_type (int): enum device type
538
+ """
539
+ torch._C._cuda_setStream(
540
+ stream_id=stream_id,
541
+ device_index=device_index,
542
+ device_type=device_type,
543
+ )
544
+
545
+
546
+ def set_stream(stream: Stream):
547
+ r"""Set the current stream.This is a wrapper API to set the stream.
548
+ Usage of this function is discouraged in favor of the ``stream``
549
+ context manager.
550
+
551
+ Args:
552
+ stream (Stream): selected stream. This function is a no-op
553
+ if this argument is ``None``.
554
+ """
555
+ if stream is None:
556
+ return
557
+ _set_stream_by_id(
558
+ stream_id=stream.stream_id,
559
+ device_index=stream.device_index,
560
+ device_type=stream.device_type,
561
+ )
562
+
563
+
564
+ def _parse_visible_devices() -> Union[List[int], List[str]]:
565
+ r"""Parse CUDA_VISIBLE_DEVICES environment variable."""
566
+ var = os.getenv("CUDA_VISIBLE_DEVICES")
567
+ if var is None:
568
+ return list(range(64))
569
+
570
+ def _strtoul(s: str) -> int:
571
+ """Return -1 or positive integer sequence string starts with."""
572
+ if not s:
573
+ return -1
574
+ for idx, c in enumerate(s):
575
+ if not (c.isdigit() or (idx == 0 and c in "+-")):
576
+ break
577
+ if idx + 1 == len(s):
578
+ idx += 1
579
+ return int(s[:idx]) if idx > 0 else -1
580
+
581
+ def parse_list_with_prefix(lst: str, prefix: str) -> List[str]:
582
+ rcs: List[str] = []
583
+ for elem in lst.split(","):
584
+ # Repeated id results in empty set
585
+ if elem in rcs:
586
+ return cast(List[str], [])
587
+ # Anything other but prefix is ignored
588
+ if not elem.startswith(prefix):
589
+ break
590
+ rcs.append(elem)
591
+ return rcs
592
+
593
+ if var.startswith("GPU-"):
594
+ return parse_list_with_prefix(var, "GPU-")
595
+ if var.startswith("MIG-"):
596
+ return parse_list_with_prefix(var, "MIG-")
597
+ # CUDA_VISIBLE_DEVICES uses something like strtoul
598
+ # which makes `1gpu2,2ampere` is equivalent to `1,2`
599
+ rc: List[int] = []
600
+ for elem in var.split(","):
601
+ x = _strtoul(elem.strip())
602
+ # Repeated ordinal results in empty set
603
+ if x in rc:
604
+ return cast(List[int], [])
605
+ # Negative value aborts the sequence
606
+ if x < 0:
607
+ break
608
+ rc.append(x)
609
+ return rc
610
+
611
+
612
+ def _raw_device_count_nvml() -> int:
613
+ r"""Return number of devices as reported by NVML or negative value if NVML discovery/initialization failed."""
614
+ from ctypes import byref, c_int, CDLL
615
+
616
+ nvml_h = CDLL("libnvidia-ml.so.1")
617
+ rc = nvml_h.nvmlInit()
618
+ if rc != 0:
619
+ warnings.warn("Can't initialize NVML")
620
+ return -1
621
+ dev_count = c_int(-1)
622
+ rc = nvml_h.nvmlDeviceGetCount_v2(byref(dev_count))
623
+ if rc != 0:
624
+ warnings.warn("Can't get nvml device count")
625
+ return -1
626
+ del nvml_h
627
+ return dev_count.value
628
+
629
+
630
+ def _raw_device_uuid_nvml() -> Optional[List[str]]:
631
+ r"""Return list of device UUID as reported by NVML or None if NVM discovery/initialization failed."""
632
+ from ctypes import byref, c_int, c_void_p, CDLL, create_string_buffer
633
+
634
+ nvml_h = CDLL("libnvidia-ml.so.1")
635
+ rc = nvml_h.nvmlInit()
636
+ if rc != 0:
637
+ warnings.warn("Can't initialize NVML")
638
+ return None
639
+ dev_count = c_int(-1)
640
+ rc = nvml_h.nvmlDeviceGetCount_v2(byref(dev_count))
641
+ if rc != 0:
642
+ warnings.warn("Can't get nvml device count")
643
+ return None
644
+ uuids: List[str] = []
645
+ for idx in range(dev_count.value):
646
+ dev_id = c_void_p()
647
+ rc = nvml_h.nvmlDeviceGetHandleByIndex_v2(idx, byref(dev_id))
648
+ if rc != 0:
649
+ warnings.warn("Can't get device handle")
650
+ return None
651
+ buf_len = 96
652
+ buf = create_string_buffer(buf_len)
653
+ rc = nvml_h.nvmlDeviceGetUUID(dev_id, buf, buf_len)
654
+ if rc != 0:
655
+ warnings.warn("Can't get device UUID")
656
+ return None
657
+ uuids.append(buf.raw.decode("ascii").strip("\0"))
658
+ del nvml_h
659
+ return uuids
660
+
661
+
662
+ def _transform_uuid_to_ordinals(candidates: List[str], uuids: List[str]) -> List[int]:
663
+ r"""Given the set of partial uuids and list of known uuids builds a set of ordinals excluding ambiguous partials IDs."""
664
+
665
+ def uuid_to_orinal(candidate: str, uuids: List[str]) -> int:
666
+ best_match = -1
667
+ for idx, uuid in enumerate(uuids):
668
+ if not uuid.startswith(candidate):
669
+ continue
670
+ # Ambiguous candidate
671
+ if best_match != -1:
672
+ return -1
673
+ best_match = idx
674
+ return best_match
675
+
676
+ rc: List[int] = []
677
+ for candidate in candidates:
678
+ idx = uuid_to_orinal(candidate, uuids)
679
+ # First invalid ordinal stops parsing
680
+ if idx < 0:
681
+ break
682
+ # Duplicates result in empty set
683
+ if idx in rc:
684
+ return cast(List[int], [])
685
+ rc.append(idx)
686
+ return rc
687
+
688
+
689
+ def _device_count_nvml() -> int:
690
+ r"""Return number of devices as reported by NVML taking CUDA_VISIBLE_DEVICES into account.
691
+
692
+ Negative value is returned if NVML discovery or initialization has failed.
693
+ """
694
+ visible_devices = _parse_visible_devices()
695
+ if not visible_devices:
696
+ return 0
697
+ try:
698
+ if type(visible_devices[0]) is str:
699
+ # Skip MIG parsing
700
+ if visible_devices[0].startswith("MIG-"):
701
+ return -1
702
+ uuids = _raw_device_uuid_nvml()
703
+ if uuids is None:
704
+ return -1
705
+ visible_devices = _transform_uuid_to_ordinals(
706
+ cast(List[str], visible_devices), uuids
707
+ )
708
+ else:
709
+ raw_cnt = _raw_device_count_nvml()
710
+ if raw_cnt <= 0:
711
+ return raw_cnt
712
+ # Trim the list up to a maximum available device
713
+ for idx, val in enumerate(visible_devices):
714
+ if cast(int, val) >= raw_cnt:
715
+ return idx
716
+ except OSError:
717
+ return -1
718
+ except AttributeError:
719
+ return -1
720
+ return len(visible_devices)
721
+
722
+
723
+ def _get_nvml_device_index(device: Optional[Union[int, Device]]) -> int:
724
+ r"""Return the NVML index of the device, taking CUDA_VISIBLE_DEVICES into account."""
725
+ idx = _get_device_index(device, optional=True)
726
+ visible_devices = _parse_visible_devices()
727
+ if type(visible_devices[0]) is str:
728
+ uuids = _raw_device_uuid_nvml()
729
+ if uuids is None:
730
+ raise RuntimeError("Can't get device UUIDs")
731
+ visible_devices = _transform_uuid_to_ordinals(
732
+ cast(List[str], visible_devices), uuids
733
+ )
734
+ visible_devices = cast(List[int], visible_devices)
735
+ if idx < 0 or idx >= len(visible_devices):
736
+ raise RuntimeError(
737
+ f"device {idx} is not visible (CUDA_VISIBLE_DEVICES={visible_devices})"
738
+ )
739
+ return visible_devices[idx]
740
+
741
+
742
+ @lru_cache(maxsize=1)
743
+ def device_count() -> int:
744
+ r"""Return the number of GPUs available."""
745
+ if not _is_compiled():
746
+ return 0
747
+ # bypass _device_count_nvml() if rocm (not supported)
748
+ nvml_count = -1 if torch.version.hip else _device_count_nvml()
749
+ return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
750
+
751
+
752
+ def get_arch_list() -> List[str]:
753
+ r"""Return list CUDA architectures this library was compiled for."""
754
+ if not is_available():
755
+ return []
756
+ arch_flags = torch._C._cuda_getArchFlags()
757
+ if arch_flags is None:
758
+ return []
759
+ return arch_flags.split()
760
+
761
+
762
+ def get_gencode_flags() -> str:
763
+ r"""Return NVCC gencode flags this library was compiled with."""
764
+ arch_list = get_arch_list()
765
+ if len(arch_list) == 0:
766
+ return ""
767
+ arch_list_ = [arch.split("_") for arch in arch_list]
768
+ return " ".join(
769
+ [
770
+ f"-gencode compute=compute_{arch},code={kind}_{arch}"
771
+ for (kind, arch) in arch_list_
772
+ ]
773
+ )
774
+
775
+
776
+ def current_device() -> int:
777
+ r"""Return the index of a currently selected device."""
778
+ _lazy_init()
779
+ return torch._C._cuda_getDevice()
780
+
781
+
782
+ def synchronize(device: _device_t = None) -> None:
783
+ r"""Wait for all kernels in all streams on a CUDA device to complete.
784
+
785
+ Args:
786
+ device (torch.device or int, optional): device for which to synchronize.
787
+ It uses the current device, given by :func:`~torch.cuda.current_device`,
788
+ if :attr:`device` is ``None`` (default).
789
+ """
790
+ _lazy_init()
791
+ with torch.cuda.device(device):
792
+ return torch._C._cuda_synchronize()
793
+
794
+
795
+ def ipc_collect():
796
+ r"""Force collects GPU memory after it has been released by CUDA IPC.
797
+
798
+ .. note::
799
+ Checks if any sent CUDA tensors could be cleaned from the memory. Force
800
+ closes shared memory file used for reference counting if there is no
801
+ active counters. Useful when the producer process stopped actively sending
802
+ tensors and want to release unused memory.
803
+ """
804
+ _lazy_init()
805
+ return torch._C._cuda_ipc_collect()
806
+
807
+
808
+ def current_stream(device: Optional[_device_t] = None) -> Stream:
809
+ r"""Return the currently selected :class:`Stream` for a given device.
810
+
811
+ Args:
812
+ device (torch.device or int, optional): selected device. Returns
813
+ the currently selected :class:`Stream` for the current device, given
814
+ by :func:`~torch.cuda.current_device`, if :attr:`device` is ``None``
815
+ (default).
816
+ """
817
+ _lazy_init()
818
+ streamdata = torch._C._cuda_getCurrentStream(
819
+ _get_device_index(device, optional=True)
820
+ )
821
+ return Stream(
822
+ stream_id=streamdata[0], device_index=streamdata[1], device_type=streamdata[2]
823
+ )
824
+
825
+
826
+ def default_stream(device: Optional[_device_t] = None) -> Stream:
827
+ r"""Return the default :class:`Stream` for a given device.
828
+
829
+ Args:
830
+ device (torch.device or int, optional): selected device. Returns
831
+ the default :class:`Stream` for the current device, given by
832
+ :func:`~torch.cuda.current_device`, if :attr:`device` is ``None``
833
+ (default).
834
+ """
835
+ _lazy_init()
836
+ streamdata = torch._C._cuda_getDefaultStream(
837
+ _get_device_index(device, optional=True)
838
+ )
839
+ return Stream(
840
+ stream_id=streamdata[0], device_index=streamdata[1], device_type=streamdata[2]
841
+ )
842
+
843
+
844
+ def current_blas_handle():
845
+ r"""Return cublasHandle_t pointer to current cuBLAS handle"""
846
+ _lazy_init()
847
+ return torch._C._cuda_getCurrentBlasHandle()
848
+
849
+
850
+ def set_sync_debug_mode(debug_mode: Union[int, str]) -> None:
851
+ r"""Set the debug mode for cuda synchronizing operations.
852
+
853
+ Args:
854
+ debug_mode(str or int): if "default" or 0, don't error or warn on synchronizing operations,
855
+ if "warn" or 1, warn on synchronizing operations, if "error" or 2, error out synchronizing operations.
856
+
857
+ Warning:
858
+ This is an experimental feature, and not all synchronizing operations will trigger warning or error. In
859
+ particular, operations in torch.distributed and torch.sparse namespaces are not covered yet.
860
+ """
861
+ _lazy_init()
862
+ if isinstance(debug_mode, str):
863
+ if debug_mode == "default":
864
+ debug_mode = 0
865
+ elif debug_mode == "warn":
866
+ debug_mode = 1
867
+ elif debug_mode == "error":
868
+ debug_mode = 2
869
+ else:
870
+ raise RuntimeError(
871
+ "invalid value of debug_mode, expected one of `default`, `warn`, `error`"
872
+ )
873
+
874
+ torch._C._cuda_set_sync_debug_mode(debug_mode)
875
+
876
+
877
+ def get_sync_debug_mode() -> int:
878
+ r"""Return current value of debug mode for cuda synchronizing operations."""
879
+ _lazy_init()
880
+ return torch._C._cuda_get_sync_debug_mode()
881
+
882
+
883
+ def _get_pynvml_handler(device: Optional[Union[Device, int]] = None):
884
+ if not _HAS_PYNVML:
885
+ raise ModuleNotFoundError(
886
+ "pynvml does not seem to be installed or it can't be imported."
887
+ ) from _PYNVML_ERR
888
+ from pynvml import NVMLError_DriverNotLoaded
889
+
890
+ try:
891
+ pynvml.nvmlInit()
892
+ except NVMLError_DriverNotLoaded as e:
893
+ raise RuntimeError("cuda driver can't be loaded, is cuda enabled?") from e
894
+
895
+ device = _get_nvml_device_index(device)
896
+ handle = pynvml.nvmlDeviceGetHandleByIndex(device)
897
+ return handle
898
+
899
+
900
+ def memory_usage(device: Optional[Union[Device, int]] = None) -> int:
901
+ r"""Return the percent of time over the past sample period during which global (device)
902
+ memory was being read or written as given by `nvidia-smi`.
903
+
904
+ Args:
905
+ device (torch.device or int, optional): selected device. Returns
906
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
907
+ if :attr:`device` is ``None`` (default).
908
+
909
+ Warning: Each sample period may be between 1 second and 1/6 second,
910
+ depending on the product being queried.
911
+ """
912
+ handle = _get_pynvml_handler()
913
+
914
+ device = _get_nvml_device_index(device)
915
+ handle = pynvml.nvmlDeviceGetHandleByIndex(device)
916
+ return pynvml.nvmlDeviceGetUtilizationRates(handle).memory
917
+
918
+
919
+ def utilization(device: Optional[Union[Device, int]] = None) -> int:
920
+ r"""Return the percent of time over the past sample period during which one or
921
+ more kernels was executing on the GPU as given by `nvidia-smi`.
922
+
923
+ Args:
924
+ device (torch.device or int, optional): selected device. Returns
925
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
926
+ if :attr:`device` is ``None`` (default).
927
+
928
+ Warning: Each sample period may be between 1 second and 1/6 second,
929
+ depending on the product being queried.
930
+ """
931
+ handle = _get_pynvml_handler(device)
932
+ device = _get_nvml_device_index(device)
933
+ handle = pynvml.nvmlDeviceGetHandleByIndex(device)
934
+ return pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
935
+
936
+
937
+ def temperature(device: Optional[Union[Device, int]] = None) -> int:
938
+ r"""Return the average temperature of the GPU sensor in Degrees C (Centigrades).
939
+
940
+ The average temperature is computed based on past sample period as given by `nvidia-smi`.
941
+
942
+ Args:
943
+ device (torch.device or int, optional): selected device. Returns
944
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
945
+ if :attr:`device` is ``None`` (default).
946
+
947
+ Warning: Each sample period may be between 1 second and 1/6 second,
948
+ depending on the product being queried.
949
+ """
950
+ handle = _get_pynvml_handler(device)
951
+ # 0 refers to the temperature sensor for the GPU die.
952
+ return pynvml.nvmlDeviceGetTemperature(handle, 0)
953
+
954
+
955
+ def power_draw(device: Optional[Union[Device, int]] = None) -> int:
956
+ r"""Return the average power draw of the GPU sensor in mW (MilliWatts)
957
+ over the past sample period as given by `nvidia-smi` for Fermi or newer fully supported devices.
958
+
959
+ Args:
960
+ device (torch.device or int, optional): selected device. Returns
961
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
962
+ if :attr:`device` is ``None`` (default).
963
+
964
+ Warning: Each sample period may be between 1 second and 1/6 second,
965
+ depending on the product being queried.
966
+ """
967
+ handle = _get_pynvml_handler(device)
968
+ return pynvml.nvmlDeviceGetPowerUsage(handle)
969
+
970
+
971
+ def clock_rate(device: Optional[Union[Device, int]] = None) -> int:
972
+ r"""Return the clock speed of the GPU SM in Hz Hertz over the past sample period as given by `nvidia-smi`.
973
+
974
+ Args:
975
+ device (torch.device or int, optional): selected device. Returns
976
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
977
+ if :attr:`device` is ``None`` (default).
978
+
979
+ Warning: Each sample period may be between 1 second and 1/6 second,
980
+ depending on the product being queried.
981
+ """
982
+ handle = _get_pynvml_handler(device)
983
+ return pynvml.nvmlDeviceGetClockInfo(handle, 1)
984
+
985
+
986
+ def _get_device(device: Union[int, str, torch.device]) -> torch.device:
987
+ r"""Return the torch.device type object from the passed in device.
988
+
989
+ Args:
990
+ device (torch.device or int): selected device.
991
+ """
992
+ if isinstance(device, str):
993
+ device = torch.device(device)
994
+ elif isinstance(device, int):
995
+ device = torch.device("cuda", device)
996
+ return device
997
+
998
+
999
+ def _get_generator(device: torch.device) -> torch._C.Generator:
1000
+ r"""Return the CUDA Generator object for the given device.
1001
+
1002
+ Args:
1003
+ device (torch.device): selected device.
1004
+ """
1005
+ idx = device.index
1006
+ if idx is None:
1007
+ idx = current_device()
1008
+ return torch.cuda.default_generators[idx]
1009
+
1010
+
1011
+ def _set_rng_state_offset(
1012
+ offset: int, device: Union[int, str, torch.device] = "cuda"
1013
+ ) -> None:
1014
+ r"""Set the random number generator state offset of the specified GPU.
1015
+
1016
+ Args:
1017
+ offset (int): The desired offset
1018
+ device (torch.device or int, optional): The device to set the RNG state.
1019
+ Default: ``'cuda'`` (i.e., ``torch.device('cuda')``, the current CUDA device).
1020
+ """
1021
+ final_device = _get_device(device)
1022
+
1023
+ def cb():
1024
+ default_generator = _get_generator(final_device)
1025
+ default_generator.set_offset(offset)
1026
+
1027
+ _lazy_call(cb)
1028
+
1029
+
1030
+ def _get_rng_state_offset(device: Union[int, str, torch.device] = "cuda") -> int:
1031
+ r"""Return the random number generator state offset of the specified GPU.
1032
+
1033
+ Args:
1034
+ device (torch.device or int, optional): The device to return the RNG state offset of.
1035
+ Default: ``'cuda'`` (i.e., ``torch.device('cuda')``, the current CUDA device).
1036
+
1037
+ .. warning::
1038
+ This function eagerly initializes CUDA.
1039
+ """
1040
+ _lazy_init()
1041
+ final_device = _get_device(device)
1042
+ default_generator = _get_generator(final_device)
1043
+ return default_generator.get_offset()
1044
+
1045
+
1046
+ from .memory import * # noqa: F403
1047
+
1048
+
1049
+ from .random import * # noqa: F403
1050
+
1051
+ ################################################################################
1052
+ # Define Storage and Tensor classes
1053
+ ################################################################################
1054
+
1055
+
1056
+ @staticmethod # type: ignore[misc]
1057
+ def _lazy_new(cls, *args, **kwargs):
1058
+ _lazy_init()
1059
+ # We may need to call lazy init again if we are a forked child
1060
+ # del _CudaBase.__new__
1061
+ return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
1062
+
1063
+
1064
+ class _CudaBase:
1065
+ is_cuda = True
1066
+ is_sparse = False
1067
+
1068
+ def type(self, *args, **kwargs):
1069
+ # We could use a Protocol here to tell mypy that self has `get_device` method
1070
+ # but it is only available in the typing module on Python >= 3.8
1071
+ # or on typing_extensions module on Python >= 3.6
1072
+ with device(self.get_device()): # type: ignore[attr-defined]
1073
+ return super().type(*args, **kwargs) # type: ignore[misc]
1074
+
1075
+ __new__ = _lazy_new
1076
+
1077
+
1078
+ from torch.storage import _LegacyStorage, _warn_typed_storage_removal
1079
+
1080
+
1081
+ class _CudaLegacyStorage(_LegacyStorage):
1082
+ @classmethod
1083
+ def from_buffer(cls, *args, **kwargs):
1084
+ _warn_typed_storage_removal()
1085
+ raise RuntimeError("from_buffer: Not available for CUDA storage")
1086
+
1087
+ @classmethod
1088
+ def _new_with_weak_ptr(cls, *args, **kwargs):
1089
+ raise RuntimeError("_new_with_weak_ptr: Not available for CUDA storage")
1090
+
1091
+ @classmethod
1092
+ def _new_shared_filename(cls, manager, obj, size, *, device=None, dtype=None):
1093
+ raise RuntimeError("_new_shared_filename: Not available for CUDA storage")
1094
+
1095
+
1096
+ class ByteStorage(_CudaLegacyStorage):
1097
+ @classproperty
1098
+ def dtype(self):
1099
+ _warn_typed_storage_removal()
1100
+ return self._dtype
1101
+
1102
+ @classproperty
1103
+ def _dtype(self):
1104
+ return torch.uint8
1105
+
1106
+
1107
+ class DoubleStorage(_CudaLegacyStorage):
1108
+ @classproperty
1109
+ def dtype(self):
1110
+ _warn_typed_storage_removal()
1111
+ return self._dtype
1112
+
1113
+ @classproperty
1114
+ def _dtype(self):
1115
+ return torch.double
1116
+
1117
+
1118
+ class FloatStorage(_CudaLegacyStorage):
1119
+ @classproperty
1120
+ def dtype(self):
1121
+ _warn_typed_storage_removal()
1122
+ return self._dtype
1123
+
1124
+ @classproperty
1125
+ def _dtype(self):
1126
+ return torch.float
1127
+
1128
+
1129
+ class HalfStorage(_CudaLegacyStorage):
1130
+ @classproperty
1131
+ def dtype(self):
1132
+ _warn_typed_storage_removal()
1133
+ return self._dtype
1134
+
1135
+ @classproperty
1136
+ def _dtype(self):
1137
+ return torch.half
1138
+
1139
+
1140
+ class LongStorage(_CudaLegacyStorage):
1141
+ @classproperty
1142
+ def dtype(self):
1143
+ _warn_typed_storage_removal()
1144
+ return self._dtype
1145
+
1146
+ @classproperty
1147
+ def _dtype(self):
1148
+ return torch.long
1149
+
1150
+
1151
+ class IntStorage(_CudaLegacyStorage):
1152
+ @classproperty
1153
+ def dtype(self):
1154
+ _warn_typed_storage_removal()
1155
+ return self._dtype
1156
+
1157
+ @classproperty
1158
+ def _dtype(self):
1159
+ return torch.int
1160
+
1161
+
1162
+ class ShortStorage(_CudaLegacyStorage):
1163
+ @classproperty
1164
+ def dtype(self):
1165
+ _warn_typed_storage_removal()
1166
+ return self._dtype
1167
+
1168
+ @classproperty
1169
+ def _dtype(self):
1170
+ return torch.short
1171
+
1172
+
1173
+ class CharStorage(_CudaLegacyStorage):
1174
+ @classproperty
1175
+ def dtype(self):
1176
+ _warn_typed_storage_removal()
1177
+ return self._dtype
1178
+
1179
+ @classproperty
1180
+ def _dtype(self):
1181
+ return torch.int8
1182
+
1183
+
1184
+ class BoolStorage(_CudaLegacyStorage):
1185
+ @classproperty
1186
+ def dtype(self):
1187
+ _warn_typed_storage_removal()
1188
+ return self._dtype
1189
+
1190
+ @classproperty
1191
+ def _dtype(self):
1192
+ return torch.bool
1193
+
1194
+
1195
+ class BFloat16Storage(_CudaLegacyStorage):
1196
+ @classproperty
1197
+ def dtype(self):
1198
+ _warn_typed_storage_removal()
1199
+ return self._dtype
1200
+
1201
+ @classproperty
1202
+ def _dtype(self):
1203
+ return torch.bfloat16
1204
+
1205
+
1206
+ class ComplexDoubleStorage(_CudaLegacyStorage):
1207
+ @classproperty
1208
+ def dtype(self):
1209
+ _warn_typed_storage_removal()
1210
+ return self._dtype
1211
+
1212
+ @classproperty
1213
+ def _dtype(self):
1214
+ return torch.cdouble
1215
+
1216
+
1217
+ class ComplexFloatStorage(_CudaLegacyStorage):
1218
+ @classproperty
1219
+ def dtype(self):
1220
+ _warn_typed_storage_removal()
1221
+ return self._dtype
1222
+
1223
+ @classproperty
1224
+ def _dtype(self):
1225
+ return torch.cfloat
1226
+
1227
+
1228
+ del _LegacyStorage
1229
+ del _CudaLegacyStorage
1230
+
1231
+ torch._storage_classes.add(DoubleStorage)
1232
+ torch._storage_classes.add(FloatStorage)
1233
+ torch._storage_classes.add(LongStorage)
1234
+ torch._storage_classes.add(IntStorage)
1235
+ torch._storage_classes.add(ShortStorage)
1236
+ torch._storage_classes.add(CharStorage)
1237
+ torch._storage_classes.add(ByteStorage)
1238
+ torch._storage_classes.add(HalfStorage)
1239
+ torch._storage_classes.add(BoolStorage)
1240
+ torch._storage_classes.add(BFloat16Storage)
1241
+ torch._storage_classes.add(ComplexDoubleStorage)
1242
+ torch._storage_classes.add(ComplexFloatStorage)
1243
+
1244
+
1245
+ class _WrappedTritonKernel:
1246
+ """Just a simple wrapper to store some metadata for testing purposes."""
1247
+
1248
+ def __init__(self, kernel):
1249
+ self.kernel = kernel
1250
+ self.kernel_invoked = False
1251
+
1252
+ def __call__(self, *args, **kwargs):
1253
+ res = self.kernel(*args, **kwargs)
1254
+ self.kernel_invoked = True
1255
+ return res
1256
+
1257
+
1258
+ def _register_triton_kernels():
1259
+ if torch._running_with_deploy():
1260
+ return
1261
+
1262
+ @_WrappedTritonKernel
1263
+ def kernel_impl(*args, **kwargs):
1264
+ from torch.sparse._triton_ops import bsr_dense_mm
1265
+
1266
+ return bsr_dense_mm(*args, skip_checks=True, **kwargs)
1267
+
1268
+ @_WrappedTritonKernel
1269
+ def addmm_kernel_impl(*args, **kwargs):
1270
+ from torch.sparse._triton_ops import bsr_dense_addmm
1271
+
1272
+ return bsr_dense_addmm(*args, skip_checks=True, **kwargs)
1273
+
1274
+ has_triton = importlib.util.find_spec("triton") is not None
1275
+ if has_triton:
1276
+ torch._TritonLibrary.registerOp(
1277
+ "_triton_bsr_dense_mm_out",
1278
+ "_triton_bsr_dense_mm_out(Tensor bsr, Tensor dense, *, Tensor(a!) out) -> Tensor(a!)",
1279
+ kernel_impl,
1280
+ "SparseCsrCUDA",
1281
+ )
1282
+
1283
+ torch._TritonLibrary.registerOp(
1284
+ "_triton_bsr_dense_addmm_out",
1285
+ (
1286
+ "_triton_bsr_dense_addmm_out(Tensor input, Tensor bsr, Tensor dense,"
1287
+ " *, Scalar beta, Scalar alpha, Tensor(a!) out) -> Tensor(a!)"
1288
+ ),
1289
+ addmm_kernel_impl,
1290
+ "SparseCsrCUDA",
1291
+ )
1292
+
1293
+
1294
+ _lazy_call(_register_triton_kernels)
1295
+
1296
+
1297
+ from . import amp, jiterator, nvtx, profiler, sparse
1298
+
1299
+ __all__ = [
1300
+ # Typed storage and tensors
1301
+ "BFloat16Storage",
1302
+ "BFloat16Tensor",
1303
+ "BoolStorage",
1304
+ "BoolTensor",
1305
+ "ByteStorage",
1306
+ "ByteTensor",
1307
+ "CharStorage",
1308
+ "CharTensor",
1309
+ "ComplexDoubleStorage",
1310
+ "ComplexFloatStorage",
1311
+ "DoubleStorage",
1312
+ "DoubleTensor",
1313
+ "FloatStorage",
1314
+ "FloatTensor",
1315
+ "HalfStorage",
1316
+ "HalfTensor",
1317
+ "IntStorage",
1318
+ "IntTensor",
1319
+ "LongStorage",
1320
+ "LongTensor",
1321
+ "ShortStorage",
1322
+ "ShortTensor",
1323
+ "CUDAGraph",
1324
+ "CudaError",
1325
+ "DeferredCudaCallError",
1326
+ "Event",
1327
+ "ExternalStream",
1328
+ "OutOfMemoryError",
1329
+ "Stream",
1330
+ "StreamContext",
1331
+ "amp",
1332
+ "caching_allocator_alloc",
1333
+ "caching_allocator_delete",
1334
+ "can_device_access_peer",
1335
+ "check_error",
1336
+ "cudaStatus",
1337
+ "cudart",
1338
+ "current_blas_handle",
1339
+ "current_device",
1340
+ "current_stream",
1341
+ "default_generators",
1342
+ "default_stream",
1343
+ "device",
1344
+ "device_count",
1345
+ "device_of",
1346
+ "empty_cache",
1347
+ "get_allocator_backend",
1348
+ "CUDAPluggableAllocator",
1349
+ "change_current_allocator",
1350
+ "get_arch_list",
1351
+ "get_device_capability",
1352
+ "get_device_name",
1353
+ "get_device_properties",
1354
+ "get_gencode_flags",
1355
+ "get_rng_state",
1356
+ "get_rng_state_all",
1357
+ "get_sync_debug_mode",
1358
+ "graph",
1359
+ "graph_pool_handle",
1360
+ "graphs",
1361
+ "has_half",
1362
+ "has_magma",
1363
+ "init",
1364
+ "initial_seed",
1365
+ "ipc_collect",
1366
+ "is_available",
1367
+ "is_bf16_supported",
1368
+ "is_current_stream_capturing",
1369
+ "is_initialized",
1370
+ "jiterator",
1371
+ "list_gpu_processes",
1372
+ "make_graphed_callables",
1373
+ "manual_seed",
1374
+ "manual_seed_all",
1375
+ "max_memory_allocated",
1376
+ "max_memory_cached",
1377
+ "max_memory_reserved",
1378
+ "mem_get_info",
1379
+ "memory",
1380
+ "memory_allocated",
1381
+ "memory_cached",
1382
+ "memory_reserved",
1383
+ "memory_snapshot",
1384
+ "memory_stats",
1385
+ "memory_stats_as_nested_dict",
1386
+ "memory_summary",
1387
+ "memory_usage",
1388
+ "temperature",
1389
+ "power_draw",
1390
+ "clock_rate",
1391
+ "nccl",
1392
+ "nvtx",
1393
+ "profiler",
1394
+ "random",
1395
+ "reset_accumulated_memory_stats",
1396
+ "reset_max_memory_allocated",
1397
+ "reset_max_memory_cached",
1398
+ "reset_peak_memory_stats",
1399
+ "seed",
1400
+ "seed_all",
1401
+ "set_device",
1402
+ "set_per_process_memory_fraction",
1403
+ "set_rng_state",
1404
+ "set_rng_state_all",
1405
+ "set_stream",
1406
+ "set_sync_debug_mode",
1407
+ "sparse",
1408
+ "stream",
1409
+ "streams",
1410
+ "synchronize",
1411
+ "utilization",
1412
+ ]
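The module diffed above documents a lazily initialized CUDA interface: importing torch.cuda is always safe, and is_available() guards device work. The following minimal usage sketch is not part of the commit; it only exercises APIs defined in the file above (is_available, device_count, get_device_name, the device and stream context managers, Stream, synchronize) and assumes a CUDA-enabled PyTorch build with at least one visible GPU:

    import torch

    if torch.cuda.is_available():              # never raises, even without a working driver
        print(torch.cuda.device_count())       # visible devices (respects CUDA_VISIBLE_DEVICES)
        print(torch.cuda.get_device_name(0))
        with torch.cuda.device(0):             # select device 0 for the enclosed block
            x = torch.ones(4, device="cuda")
            s = torch.cuda.Stream()
            with torch.cuda.stream(s):         # enqueue kernels on a non-default stream
                y = x * 2
            torch.cuda.synchronize()           # wait for all kernels on the current device
        print(y.cpu())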
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (40.8 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/_memory_viz.cpython-310.pyc ADDED
Binary file (20.8 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/_sanitizer.cpython-310.pyc ADDED
Binary file (22.3 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/_utils.cpython-310.pyc ADDED
Binary file (1.54 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/comm.cpython-310.pyc ADDED
Binary file (403 Bytes).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/error.cpython-310.pyc ADDED
Binary file (180 Bytes).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/graphs.cpython-310.pyc ADDED
Binary file (19 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/jiterator.cpython-310.pyc ADDED
Binary file (6.42 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/memory.cpython-310.pyc ADDED
Binary file (34 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/nccl.cpython-310.pyc ADDED
Binary file (3.69 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/nvtx.cpython-310.pyc ADDED
Binary file (3.03 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/profiler.cpython-310.pyc ADDED
Binary file (1.92 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/random.cpython-310.pyc ADDED
Binary file (5.54 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/sparse.cpython-310.pyc ADDED
Binary file (181 Bytes).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/__pycache__/streams.cpython-310.pyc ADDED
Binary file (9.61 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/_memory_viz.py ADDED
@@ -0,0 +1,626 @@
1
+ import pickle
2
+ import sys
3
+ import os
4
+ import io
5
+ import subprocess
6
+ import json
7
+ from functools import lru_cache
8
+ from typing import Any
9
+ from itertools import groupby
10
+ import base64
11
+ import warnings
12
+
13
+ cache = lru_cache(None)
14
+
15
+ __all__ = ["format_flamegraph", "segments", "memory", "compare"]
16
+
17
+ def _frame_fmt(f, full_filename=False):
18
+ i = f['line']
19
+ fname = f['filename']
20
+ if not full_filename:
21
+ fname = fname.split('/')[-1]
22
+ func = f['name']
23
+ return f'{fname}:{i}:{func}'
24
+
25
+ @cache
26
+ def _frame_filter(name, filename):
27
+ omit_functions = [
28
+ "unwind::unwind",
29
+ "CapturedTraceback::gather",
30
+ "gather_with_cpp",
31
+ "_start",
32
+ "__libc_start_main",
33
+ "PyEval_",
34
+ "PyObject_",
35
+ "PyFunction_",
36
+ ]
37
+ omit_filenames = [
38
+ "core/boxing",
39
+ "/Register",
40
+ "/Redispatch",
41
+ "pythonrun.c",
42
+ "Modules/main.c",
43
+ "Objects/call.c",
44
+ "Objects/methodobject.c",
45
+ "pycore_ceval.h",
46
+ "ceval.c",
47
+ "cpython/abstract.h",
48
+ ]
49
+ for of in omit_functions:
50
+ if of in name:
51
+ return False
52
+ for of in omit_filenames:
53
+ if of in filename:
54
+ return False
55
+ return True
56
+
57
+ def _frames_fmt(frames, full_filename=False, reverse=False):
58
+ if reverse:
59
+ frames = reversed(frames)
60
+ return [_frame_fmt(f, full_filename) for f in frames if _frame_filter(f['name'], f['filename'])]
61
+
62
+ def _block_extra_legacy(b):
63
+ if 'history' in b:
64
+ frames = b['history'][0].get('frames', [])
65
+ real_size = b['history'][0]['real_size']
66
+ else:
67
+ real_size = b.get('requested_size', b['size'])
68
+ frames = []
69
+ return frames, real_size
70
+
71
+ def _block_extra(b):
72
+ if 'frames' not in b:
73
+ # old snapshot format made it more complicated to get frames/allocated size
74
+ return _block_extra_legacy(b)
75
+ return b['frames'], b['requested_size']
76
+
77
+ def format_flamegraph(flamegraph_lines, flamegraph_script=None):
78
+ if flamegraph_script is None:
79
+ flamegraph_script = f'/tmp/{os.getuid()}_flamegraph.pl'
80
+ if not os.path.exists(flamegraph_script):
81
+ import urllib.request
82
+ print(f"Downloading flamegraph.pl to: {flamegraph_script}")
83
+ urllib.request.urlretrieve(
84
+ 'https://raw.githubusercontent.com/brendangregg/FlameGraph/master/flamegraph.pl', flamegraph_script)
85
+ subprocess.check_call(['chmod', '+x', flamegraph_script])
86
+ args = [flamegraph_script, '--countname', 'bytes']
87
+ p = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding='utf-8')
88
+ assert p.stdin is not None
89
+ assert p.stdout is not None
90
+ p.stdin.write(flamegraph_lines)
91
+ p.stdin.close()
92
+ result = p.stdout.read()
93
+ p.stdout.close()
94
+ p.wait()
95
+ assert p.wait() == 0
96
+ return result
97
+
98
+ def _write_blocks(f, prefix, blocks):
99
+ def frames_fragment(frames):
100
+ if not frames:
101
+ return "<non-python>"
102
+ return ';'.join(_frames_fmt(frames, reverse=True))
103
+ for b in blocks:
104
+ if 'history' not in b:
105
+ frames, accounted_for_size = _block_extra(b)
106
+ f.write(f'{prefix};{b["state"]};{frames_fragment(frames)} {accounted_for_size}\n')
107
+ else:
108
+ accounted_for_size = 0
109
+ for h in b['history']:
110
+ sz = h['real_size']
111
+ accounted_for_size += sz
112
+ if 'frames' in h:
113
+ frames = h['frames']
114
+ f.write(f'{prefix};{b["state"]};{frames_fragment(frames)} {sz}\n')
115
+ else:
116
+ f.write(f'{prefix};{b["state"]};<no-context> {sz}\n')
117
+ gaps = b['size'] - accounted_for_size
118
+ if gaps:
119
+ f.write(f'{prefix};{b["state"]};<gaps> {gaps}\n')
120
+
121
+ def segments(snapshot, format_flamegraph=format_flamegraph):
122
+ f = io.StringIO()
123
+ for seg in snapshot['segments']:
124
+ prefix = f'stream_{seg["stream"]};seg_{seg["address"]}'
125
+ _write_blocks(f, prefix, seg['blocks'])
126
+ return format_flamegraph(f.getvalue())
127
+
128
+ def memory(snapshot, format_flamegraph=format_flamegraph):
129
+ f = io.StringIO()
130
+ for seg in snapshot['segments']:
131
+ prefix = f'stream_{seg["stream"]}'
132
+ _write_blocks(f, prefix, seg['blocks'])
133
+ return format_flamegraph(f.getvalue())
134
+
135
+ def compare(before, after, format_flamegraph=format_flamegraph):
136
+ def _seg_key(seg):
137
+ return (seg['address'], seg['total_size'])
138
+
139
+ def _seg_info(seg):
140
+ return f'stream_{seg["stream"]};seg_{seg["address"]}'
141
+
142
+ f = io.StringIO()
143
+
144
+ before_segs = {_seg_key(seg) for seg in before}
145
+ after_segs = {_seg_key(seg) for seg in after}
146
+
147
+ print(f'only_before = {[a for a,_ in (before_segs - after_segs)]}')
148
+ print(f'only_after = {[a for a,_ in (after_segs - before_segs)]}')
149
+
150
+ for seg in before:
151
+ if _seg_key(seg) not in after_segs:
152
+ _write_blocks(f, f'only_before;{_seg_info(seg)}', seg['blocks'])
153
+
154
+ for seg in after:
155
+ if _seg_key(seg) not in before_segs:
156
+ _write_blocks(f, f'only_after;{_seg_info(seg)}', seg['blocks'])
157
+
158
+ return format_flamegraph(f.getvalue())
159
+
160
+ def _format_size(num):
161
+ # https://stackoverflow.com/questions/1094841/get-human-readable-version-of-file-size
162
+ for unit in ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi"]:
163
+ if abs(num) < 1024.0:
164
+ return f"{num:3.1f}{unit}B"
165
+ num /= 1024.0
166
+ return f"{num:.1f}YiB"
167
+
168
+ class Bytes:
169
+ def __init__(self, value):
170
+ self.value = value
171
+
172
+ def __add__(self, rhs):
173
+ return Bytes(self.value + rhs)
174
+
175
+ def __repr__(self):
176
+ return _format_size(self.value)
177
+
178
+ def calc_active(seg):
179
+ return sum(b['size'] for b in seg['blocks'] if b['state'] == 'active_allocated')
180
+
181
+ def _report_free(free_external, free_internal):
182
+ total = free_external + free_internal
183
+ suffix = ''
184
+ if total != 0:
185
+ pct = (free_internal / total) * 100
186
+ suffix = f' ({pct:.1f}% internal)'
187
+ return f'{Bytes(total)}{suffix}'
188
+
189
+ PAGE_SIZE = 1024 * 1024 * 20
190
+ legend = f"""\
191
+
192
+ Legend:
193
+ [a ] - a segment in the allocator
194
+ ^-- a page {Bytes(PAGE_SIZE)} of memory in the segment
195
+ a-z: pages filled with a single block's content
196
+ ' ': page is completely free
197
+ *: page is completely full with multiple blocks
198
+ 0-9: page is partially full with tensors of multiple blocks (9 == 90% full)
199
+ (X% internal) - of the free memory, X% is free because we rounded the size of the allocation.
200
+ """
201
+
202
+ def segsum(data):
203
+ r"""Visually reports how the allocator has filled its segments.
204
+
205
+ This printout can help debug fragmentation issues since free fragments
206
+ will appear as gaps in this printout. The amount of free space is reported
207
+ for each segment.
208
+ We distinguish between internal free memory which occurs because the
209
+ allocator rounds the allocation size, and external free memory, which are
210
+ the gaps between allocations in a segment.
211
+ Args:
212
+ data: snapshot dictionary created from _snapshot()
213
+ """
214
+ segments = []
215
+ out = io.StringIO()
216
+ out.write(f"Summary of segments >= {Bytes(PAGE_SIZE)} in size\n")
217
+ total_reserved = 0
218
+ total_allocated = 0
219
+ free_external = 0
220
+ free_internal = 0
221
+ for seg in sorted(data['segments'], key=lambda x: (x['total_size'], calc_active(x))):
222
+ total_reserved += seg['total_size']
223
+
224
+ seg_free_external = 0
225
+ seg_free_internal = 0
226
+ seg_allocated = 0
227
+ all_ranges = []
228
+ boffset = 0
229
+ for b in seg['blocks']:
230
+ active = b['state'] == 'active_allocated'
231
+ if active:
232
+ _, allocated_size = _block_extra(b)
233
+ all_ranges.append((boffset, allocated_size, True))
234
+ seg_allocated += allocated_size
235
+ seg_free_internal += b['size'] - allocated_size
236
+ else:
237
+ seg_free_external += b['size']
238
+
239
+ boffset += b['size']
240
+
241
+ total_allocated += seg_allocated
242
+ free_external += seg_free_external
243
+ free_internal += seg_free_internal
244
+
245
+ nseg = (seg['total_size'] - 1) // PAGE_SIZE + 1
246
+ occupied = [' ' for _ in range(nseg)]
247
+ frac = [0.0 for _ in range(nseg)]
248
+ active_size = 0
249
+ for i, (start_, size, active) in enumerate(all_ranges):
250
+ active_size += size
251
+ finish_ = (start_ + size)
252
+ start = start_ // PAGE_SIZE
253
+ finish = (finish_ - 1) // PAGE_SIZE + 1
254
+ m = chr(ord('a' if active else 'A') + (i % 26))
255
+ for j in range(start, finish):
256
+ s = max(start_, j * PAGE_SIZE)
257
+ e = min(finish_, (j + 1) * PAGE_SIZE)
258
+ frac[j] += (e - s) / PAGE_SIZE
259
+ if occupied[j] != ' ':
260
+ occupied[j] = '0123456789*'[int(frac[j] * 10)]
261
+ else:
262
+ occupied[j] = m
263
+ stream = '' if seg['stream'] == 0 else f', stream_{seg["stream"]}'
264
+ body = ''.join(occupied)
265
+ assert seg_free_external + seg_free_internal + seg_allocated == seg['total_size']
266
+ stream = f' stream_{seg["stream"]}' if seg['stream'] != 0 else ''
267
+ if seg['total_size'] >= PAGE_SIZE:
268
+ out.write(f'[{body}] {Bytes(seg["total_size"])} allocated, '
269
+ f'{_report_free(seg_free_external, seg_free_internal)} free{stream}\n')
270
+ out.write(f'segments: {len(data["segments"])}\n')
271
+ out.write(f'total_reserved: {Bytes(total_reserved)}\n')
272
+ out.write(f'total_allocated: {Bytes(total_allocated)}\n')
273
+ internal_external = f' ({Bytes(free_internal)} internal + {Bytes(free_external)} external)' if free_internal else ''
274
+ out.write(f'total_free: {_report_free(free_external, free_internal)}\n')
275
+ out.write(legend)
276
+ assert free_internal + free_external + total_allocated == total_reserved
277
+ return out.getvalue()
278
+
279
+ def trace(data):
280
+ out = io.StringIO()
281
+
282
+ def format(entries):
283
+ segment_intervals : list = []
284
+ segment_addr_to_name = {}
285
+ allocation_addr_to_name = {}
286
+
287
+ free_names : list = []
288
+ next_name = 0
289
+
290
+ def _name():
291
+ nonlocal next_name
292
+ if free_names:
293
+ return free_names.pop()
294
+ r, m = next_name // 26, next_name % 26
295
+ next_name += 1
296
+ return f'{chr(ord("a") + m)}{"" if r == 0 else r}'
297
+
298
+ def find_segment(addr):
299
+ for name, saddr, size in segment_intervals:
300
+ if addr >= saddr and addr < saddr + size:
301
+ return name, saddr
302
+ for i, seg in enumerate(data['segments']):
303
+ saddr = seg['address']
304
+ size = seg['allocated_size']
305
+ if addr >= saddr and addr < saddr + size:
306
+ return f'seg_{i}', saddr
307
+ return None, None
308
+ count = 0
309
+ out.write(f'{len(entries)} entries\n')
310
+
311
+
312
+ total_reserved = 0
313
+ for seg in data['segments']:
314
+ total_reserved += seg['total_size']
315
+
316
+ for count, e in enumerate(entries):
317
+ if e['action'] == 'alloc':
318
+ addr, size = e['addr'], e['size']
319
+ n = _name()
320
+ seg_name, seg_addr = find_segment(addr)
321
+ if seg_name is None:
322
+ seg_name = "MEM"
323
+ offset = addr
324
+ else:
325
+ offset = addr - seg_addr
326
+ out.write(f'{n} = {seg_name}[{offset}:{Bytes(size)}]\n')
327
+ allocation_addr_to_name[addr] = (n, size, count)
328
+ count += size
329
+ elif e['action'] == 'free_requested':
330
+ addr, size = e['addr'], e['size']
331
+ name, _, _ = allocation_addr_to_name.get(addr, (addr, None, None))
332
+ out.write(f'del {name} # {Bytes(size)}\n')
333
+ elif e['action'] == 'free_completed':
334
+ addr, size = e['addr'], e['size']
335
+ count -= size
336
+ name, _, _ = allocation_addr_to_name.get(addr, (addr, None, None))
337
+ out.write(f'# free completed for {name} {Bytes(size)}\n')
338
+ if name in allocation_addr_to_name:
339
+ free_names.append(name)
340
+ del allocation_addr_to_name[name]
341
+ elif e['action'] == 'segment_alloc':
342
+ addr, size = e['addr'], e['size']
343
+ name = _name()
344
+ out.write(f'{name} = cudaMalloc({addr}, {Bytes(size)})\n')
345
+ segment_intervals.append((name, addr, size))
346
+ segment_addr_to_name[addr] = name
347
+ elif e['action'] == 'segment_free':
348
+ addr, size = e['addr'], e['size']
349
+ name = segment_addr_to_name.get(addr, addr)
350
+ out.write(f'cudaFree({name}) # {Bytes(size)}\n')
351
+ if name in segment_addr_to_name:
352
+ free_names.append(name)
353
+ del segment_addr_to_name[name]
354
+ elif e['action'] == 'oom':
355
+ size = e['size']
356
+ free = e['device_free']
357
+ out.write(f'raise OutOfMemoryError() # {Bytes(size)} requested, {Bytes(free)} free in CUDA\n')
358
+ else:
359
+ out.write(f'{e}\n')
360
+ out.write(f"TOTAL MEM: {Bytes(count)}")
361
+ for i, d in enumerate(data['device_traces']):
362
+ if d:
363
+ out.write(f'Device {i} ----------------\n')
364
+ format(d)
365
+ return out.getvalue()
366
+
367
+
368
+ _memory_viz_template = r"""
369
+ <!DOCTYPE html>
370
+ <html>
371
+ <head>
372
+ </head>
373
+ <body>
374
+ <script type="module">
375
+ import {add_local_files} from "https://cdn.jsdelivr.net/gh/pytorch/pytorch@main/torch/utils/viz/MemoryViz.js"
376
+ const local_files = $SNAPSHOT
377
+ add_local_files(local_files, $VIZ_KIND)
378
+ </script>
379
+ </body>
380
+ """
381
+
382
+ def _format_viz(data, viz_kind, device):
383
+ if device is not None:
384
+ warnings.warn('device argument is deprecated, plots now contain all devices')
385
+ buffer = pickle.dumps(data)
386
+ buffer += b'\x00' * (3 - len(buffer) % 3)
387
+ # Encode the buffer with base64
388
+ encoded_buffer = base64.b64encode(buffer).decode('utf-8')
389
+
390
+ json_format = json.dumps([{"name": 'snapshot.pickle', "base64": encoded_buffer}])
391
+ return _memory_viz_template.replace('$VIZ_KIND', repr(viz_kind)) \
392
+ .replace('$SNAPSHOT', json_format)
393
+
394
+ def trace_plot(data, device=None, plot_segments=False):
395
+ """Generate a visualization over time of the memory usage recorded by the trace as an html file.
396
+
397
+ Args:
398
+ data: Memory snapshot as generated from torch.cuda.memory._snapshot()
399
+ device (torch.device, optional): Generate the trace for this device, needed if multiple devices have allocations.
400
+ plot_segments (bool, optional): Plots memory returned from cudaMalloc, rather than individual allocations.
401
+ Defaults to False.
402
+
403
+ Returns:
404
+ str: HTML of visualization
405
+ """
406
+ return _format_viz(data, 'Active Memory Timeline' if not plot_segments else 'Active Cached Memory Timeline', device)
407
+
408
+
409
+ def _profile_to_snapshot(profile):
410
+ import torch
411
+ from torch.profiler._memory_profiler import Action, TensorKey
412
+ from torch._C._profiler import _EventType
413
+ memory_profile = profile._memory_profile()
414
+
415
+ allocation_stacks = {}
416
+ for event in memory_profile._op_tree.sorted_nodes:
417
+ if event.tag == _EventType.Allocation:
418
+ parent = event.parent
419
+ python_parents = []
420
+ while parent:
421
+ if parent.tag in (_EventType.PyCall, _EventType.PyCCall):
422
+ python_parents.append(parent)
423
+ parent = parent.parent
424
+ key = TensorKey.from_allocation(event.extra_fields)
425
+
426
+ # Corner case: If allocation doesn't have an ID (can't prove it was used as a Tensor)
427
+ # key will be None. I should add some way to identify these, I just haven't yet.
428
+ if key and event.extra_fields.alloc_size > 0:
429
+ allocation_stacks[key] = python_parents
430
+
431
+
432
+ device_count = torch.cuda.device_count()
433
+ snapshot = {
434
+ 'device_traces': [[] for _ in range(device_count + 1)],
435
+ 'segments': [{'device': device,
436
+ 'address': None,
437
+ 'total_size': 0,
438
+ 'stream': 0,
439
+ 'blocks': []} for device in range(device_count + 1)]
440
+ }
441
+
442
+ def to_device(device):
443
+ if device.type == 'cuda':
444
+ return device.index
445
+ else:
446
+ return device_count
447
+
448
+ def allocate(size, tensor_key, version, during_trace=True):
449
+ device = to_device(tensor_key.device)
450
+ addr = tensor_key.storage.ptr
451
+
452
+ seg = snapshot['segments'][device] # type: ignore[index]
453
+ if seg['address'] is None or seg['address'] > addr:
454
+ seg['address'] = addr
455
+ seg['total_size'] = max(seg['total_size'], addr + size) # record max addr for now, we will make it the size later
456
+ category = memory_profile._categories.get(tensor_key, version)
457
+ category = category.name.lower() if category is not None else "unknown"
458
+ stack = allocation_stacks.get(tensor_key, ())
459
+ stack = [{'filename': 'none', 'line': 0, 'name': p.name} for p in stack]
460
+ r = {'action': 'alloc', 'addr': addr, 'size': size, 'stream': 0, 'frames': stack, 'category': category}
461
+ if during_trace:
462
+ snapshot['device_traces'][device].append(r) # type: ignore[index]
463
+ return r
464
+
465
+ def free(alloc, device):
466
+ for e in ('free_requested', 'free_completed'):
467
+ snapshot['device_traces'][device].append({'action': e, # type: ignore[index]
468
+ 'addr': alloc['addr'],
469
+ 'size': alloc['size'],
470
+ 'stream': 0,
471
+ 'frames': alloc['frames']})
472
+
473
+ kv_to_elem = {}
474
+
475
+
476
+
477
+ # create the device trace
478
+ for time, action, (tensor_key, version), size in memory_profile.timeline:
479
+ if not isinstance(tensor_key, TensorKey):
480
+ continue
481
+ if action == Action.CREATE:
482
+ kv_to_elem[(tensor_key, version)] = allocate(size, tensor_key, version)
483
+ elif action == Action.DESTROY:
484
+ free(kv_to_elem.pop((tensor_key, version)), to_device(tensor_key.device))
485
+ elif action == Action.INCREMENT_VERSION:
486
+ free(kv_to_elem.pop((tensor_key, version)), to_device(tensor_key.device))
487
+ kv_to_elem[(tensor_key, version + 1)] = allocate(size, tensor_key, version + 1)
488
+ elif action == Action.PREEXISTING:
489
+ kv_to_elem[(tensor_key, version)] = allocate(size, tensor_key, version, during_trace=False)
490
+
491
+
492
+ # create the final snapshot state
493
+ blocks_at_end = [(to_device(tensor_key.device), event['addr'], event['size'], event['frames'])
494
+ for (tensor_key, version), event in kv_to_elem.items()]
495
+ for device, blocks in groupby(sorted(blocks_at_end), key=lambda x: x[0]):
496
+ seg = snapshot['segments'][device] # type: ignore[index]
497
+ last_addr = seg['address']
498
+ for _, addr, size, frames in blocks:
499
+ if last_addr < addr:
500
+ seg['blocks'].append({'size': addr - last_addr, 'state': 'inactive'})
501
+ seg['blocks'].append({'size': size, 'state': 'active_allocated', 'requested_size': size, 'frames': frames})
502
+ last_addr = addr + size
503
+ if last_addr < seg['total_size']:
504
+ seg['blocks'].append({'size': seg['total_size'] - last_addr, 'state': 'inactive'})
505
+
506
+ snapshot['segments'] = [seg for seg in snapshot['segments'] if seg['blocks']] # type: ignore[attr-defined]
507
+ for seg in snapshot['segments']: # type: ignore[attr-defined, name-defined, no-redef]
508
+ seg['total_size'] -= seg['address']
509
+ if not seg['blocks']:
510
+ seg['blocks'].append({'size': seg['total_size'], 'state': 'inactive'})
511
+
512
+ return snapshot
513
+
514
+ def profile_plot(profile, device=None):
515
+ """Generate a visualization over time of the memory usage recorded by kineto memory profiling as an html file.
516
+
517
+ Args:
518
+ profile: profile as generated by `torch.profiler.profile(profile_memory=True)`
519
+ device (torch.device, optional): Generate the trace for this device, needed if multiple devices have allocations.
520
+
521
+ Returns:
522
+ str: HTML of visualization
523
+ """
524
+ snapshot = _profile_to_snapshot(profile)
525
+ return _format_viz(snapshot, 'Active Memory Timeline', device)
526
+
527
+
528
+ def segment_plot(data: Any, device=None):
529
+ return _format_viz(data, 'Allocator State History', device)
530
+
531
+ if __name__ == "__main__":
532
+ import os.path
533
+ thedir = os.path.realpath(os.path.dirname(__file__))
534
+ if thedir in sys.path:
535
+ # otherwise we find cuda/random.py as random...
536
+ sys.path.remove(thedir)
537
+ import argparse
538
+
539
+ fn_name = 'torch.cuda.memory._snapshot()'
540
+ pickled = f'pickled memory statistics from {fn_name}'
541
+ parser = argparse.ArgumentParser(description=f'Visualize memory dumps produced by {fn_name}')
542
+
543
+ subparsers = parser.add_subparsers(dest='action')
544
+
545
+ def _output(p):
546
+ p.add_argument('-o', '--output', default='output.svg', help='flamegraph svg (default: output.svg)')
547
+
548
+ description = 'Prints overall allocation statistics and a visualization of how the allocators segments are currently filled.'
549
+ stats_a = subparsers.add_parser('stats', description=description)
550
+ stats_a.add_argument('input', help=pickled)
551
+
552
+ description = 'Prints buffer of the most recent allocation events embedded in the snapshot in a Pythonic style.'
553
+ trace_a = subparsers.add_parser('trace', description=description)
554
+ trace_a.add_argument('input', help=pickled)
555
+
556
+ description = 'Generate a flamegraph that visualizes what memory is stored in each allocator segment (aka block)'
557
+ segments_a = subparsers.add_parser('segments', description=description)
558
+ segments_a.add_argument('input', help=pickled)
559
+ _output(segments_a)
560
+
561
+ description = "Generate a flamegraph the program locations contributing to CUDA memory usage."
562
+ memory_a = subparsers.add_parser('memory', description=description)
563
+ memory_a.add_argument('input', help=pickled)
564
+ _output(memory_a)
565
+
566
+ description = 'Generate a flamegraph that shows segments (aka blocks) that have been added ' \
567
+ 'or removed between two different memory snapshots.'
568
+ compare_a = subparsers.add_parser('compare', description=description)
569
+ compare_a.add_argument('before', help=pickled)
570
+ compare_a.add_argument('after', help=pickled)
571
+ _output(compare_a)
572
+
573
+ plots = (
574
+ ("trace_plot", "Generate a visualization over time of the memory usage recorded by the trace as an html file."),
575
+ ("segment_plot", "Visualize how allocations are packed into allocator segments at each point in a trace as an html file.")
576
+ )
577
+ for cmd, description in plots:
578
+ trace_plot_a = subparsers.add_parser(cmd, description=description)
579
+ trace_plot_a.add_argument('input', help=pickled)
580
+ help = 'visualize trace from this device (default: chooses the only device with trace info or errors)'
581
+ trace_plot_a.add_argument('-d', '--device', type=int, default=None, help=help)
582
+ help = 'path to save the visualization (default: output.html)'
583
+ trace_plot_a.add_argument('-o', '--output', default='output.html', help=help)
584
+ if cmd == "trace_plot":
585
+ help = 'visualize change to segments rather than individual allocations'
586
+ trace_plot_a.add_argument('-s', '--segments', action='store_true', help=help)
587
+
588
+
589
+ args = parser.parse_args()
590
+
591
+ def _read(name):
592
+ if name == '-':
593
+ f = sys.stdin.buffer
594
+ else:
595
+ f = open(name, 'rb')
596
+ data = pickle.load(f)
597
+ if isinstance(data, list): # segments only...
598
+ data = {'segments': data, 'traces': []}
599
+ return data
600
+
601
+ def _write(name, data):
602
+ with open(name, 'w') as f:
603
+ f.write(data)
604
+
605
+ if args.action == 'segments':
606
+ data = _read(args.input)
607
+ _write(args.output, segments(data))
608
+ elif args.action == 'memory':
609
+ data = _read(args.input)
610
+ _write(args.output, memory(data))
611
+ elif args.action == 'stats':
612
+ data = _read(args.input)
613
+ print(segsum(data))
614
+ elif args.action == 'trace':
615
+ data = _read(args.input)
616
+ print(trace(data))
617
+ elif args.action == 'compare':
618
+ before = _read(args.before)
619
+ after = _read(args.after)
620
+ _write(args.output, compare(before, after))
621
+ elif args.action == 'trace_plot':
622
+ data = _read(args.input)
623
+ _write(args.output, trace_plot(data, device=args.device, plot_segments=args.segments))
624
+ elif args.action == 'segment_plot':
625
+ data = _read(args.input)
626
+ _write(args.output, segment_plot(data, device=args.device))
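
Beyond the CLI entry point above, the same helpers can be called directly. A minimal, hedged sketch (the private torch.cuda.memory._record_memory_history and _snapshot APIs vary slightly across PyTorch versions, and the output file name is illustrative):

    import torch
    from torch.cuda._memory_viz import trace_plot

    torch.cuda.memory._record_memory_history()   # private API; exact signature differs by version
    buffers = [torch.randn(1 << 20, device="cuda") for _ in range(8)]
    del buffers
    snapshot = torch.cuda.memory._snapshot()      # dict with 'segments' and 'device_traces'
    with open("memory_trace.html", "w") as f:     # illustrative output path
        f.write(trace_plot(snapshot))             # same HTML as the trace_plot subcommand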
llmeval-env/lib/python3.10/site-packages/torch/cuda/_sanitizer.py ADDED
@@ -0,0 +1,622 @@
1
+ r"""
2
+ This module introduces CUDA Sanitizer, a tool for detecting synchronization errors between kernels run on different streams.
3
+
4
+ It stores information on accesses to tensors to determine if they are synchronized
5
+ or not. When enabled in a python program and a possible data race is detected, a
6
+ detailed warning will be printed and the program will exit.
7
+
8
+ It can be enabled either by importing this module and calling
9
+ :func:`enable_cuda_sanitizer()` or by exporting the ``TORCH_CUDA_SANITIZER``
10
+ environment variable.
11
+ """
12
+
13
+ import enum
14
+ import functools
15
+ import inspect
16
+ import io
17
+ import logging
18
+ import sys
19
+ import textwrap
20
+ import traceback
21
+ from dataclasses import dataclass, field
22
+ from typing import Any, Dict, Iterator, List, Optional, Set, Tuple, TypeVar
23
+
24
+ import torch
25
+ import torch.utils._cuda_trace as cuda_trace
26
+ from torch.utils import _pytree as pytree
27
+ from torch.utils._python_dispatch import TorchDispatchMode
28
+
29
+
30
+ DEFAULT_STREAM_ID = 0
31
+
32
+ TK = TypeVar("TK")
33
+ TVa = TypeVar("TVa")
34
+ TVb = TypeVar("TVb")
35
+
36
+ DataPtr = int
37
+ StreamId = int
38
+ EventId = int
39
+ SeqNum = int
40
+
41
+ logger = logging.getLogger(__name__)
42
+
43
+
44
+ class AccessType(enum.Enum):
45
+ READ = enum.auto()
46
+ WRITE = enum.auto()
47
+
48
+ def __str__(self):
49
+ return "reading from" if self is AccessType.READ else "writing to"
50
+
51
+
52
+ @dataclass
53
+ class Access:
54
+ r"""Stores information about a single access to a tensor by a kernel.
55
+
56
+ Args:
57
+ type: either AccessType.READ or AccessType.WRITE.
58
+ seq_num: the sequential number of the kernel performing the access.
59
+ stream: the stream id of the stream executing the kernel.
60
+ operator: the schema of the launched kernel, which lists the
61
+ arguments and return type.
62
+ aliases: the arguments in the schema this access corresponds to.
63
+ is_output: Whether the tensor was an output of the kernel.
64
+ stack_trace: the stack summary object captured during access.
65
+ """
66
+
67
+ type: AccessType
68
+ seq_num: SeqNum
69
+ stream: StreamId
70
+ operator: str
71
+ aliases: List[str]
72
+ is_output: bool
73
+ stack_trace: traceback.StackSummary
74
+
75
+
76
+ class SynchronizationError(Exception):
77
+ """Base class for errors detected by CUDA Sanitizer."""
78
+
79
+ pass
80
+
81
+
82
+ class UnsynchronizedAccessError(SynchronizationError):
83
+ """Stores information about two unsynchronized accesses to one data pointer."""
84
+
85
+ def __init__(
86
+ self,
87
+ data_ptr: DataPtr,
88
+ allocation_stack_trace: Optional[traceback.StackSummary],
89
+ current_access: Access,
90
+ previous_access: Access,
91
+ ):
92
+ self.data_ptr = data_ptr
93
+ self.allocation_stack_trace = allocation_stack_trace
94
+ self.current_access = current_access
95
+ self.previous_access = previous_access
96
+
97
+ def __str__(self):
98
+ def format_access(access: Access):
99
+ message.write(f"{access.operator}\n{access.type}")
100
+ if access.aliases:
101
+ message.write(" argument(s) " + ", ".join(access.aliases))
102
+ if access.is_output:
103
+ message.write(", and to")
104
+ if access.is_output:
105
+ message.write(" the output")
106
+ message.write(
107
+ f"\nWith stack trace:\n{''.join(access.stack_trace.format())}\n"
108
+ )
109
+
110
+ with io.StringIO() as message:
111
+ message.write(
112
+ textwrap.dedent(
113
+ f"""\
114
+ ============================
115
+ CSAN detected a possible data race on tensor with data pointer {self.data_ptr}
116
+ Access by stream {self.current_access.stream} during kernel:
117
+ """
118
+ )
119
+ )
120
+ format_access(self.current_access)
121
+
122
+ message.write(
123
+ f"Previous access by stream {self.previous_access.stream} during kernel:\n"
124
+ )
125
+ format_access(self.previous_access)
126
+
127
+ if self.allocation_stack_trace:
128
+ message.write(
129
+ "Tensor was allocated with stack trace:\n"
130
+ f"{''.join(self.allocation_stack_trace.format())}"
131
+ )
132
+ else:
133
+ message.write("Trace for tensor allocation not found.")
134
+ return message.getvalue()
135
+
136
+
137
+ class CUDASanitizerErrors(Exception):
138
+ """Wrapper class for errors reported by CUDA Sanitizer."""
139
+
140
+ def __init__(self, errors: List[SynchronizationError]):
141
+ self.errors = errors
142
+
143
+ def __str__(self):
144
+ return f"detected {len(self.errors)} errors"
145
+
146
+
147
+ @dataclass
148
+ class TensorInfo:
149
+ r"""Stores information about a single tensor and recent accesses to it.
150
+
151
+ Args:
152
+ allocation_stack_trace: the stack summary object captured during tensor
153
+ allocation. Can be ``None`` if the allocation wasn't caught by CSAN.
154
+ reads: list of read accesses to the tensor that were performed since
155
+ the last write.
156
+ write: the last write access to the tensor.
157
+ """
158
+
159
+ allocation_stack_trace: Optional[traceback.StackSummary]
160
+ reads: List[Access] = field(default_factory=list)
161
+ write: Optional[Access] = None
162
+
163
+
164
+ class _TensorsAccessed:
165
+ def __init__(self):
166
+ self.accesses: Dict[DataPtr, TensorInfo] = {}
167
+
168
+ def ensure_tensor_exists(self, data_ptr: DataPtr) -> None:
169
+ if data_ptr not in self.accesses:
170
+ logger.info(
171
+ "Found tensor with pointer: %s, but no matching tensor "
172
+ "allocation in the trace. Backfilling the trace now. "
173
+ "Perhaps the sanitizer was enabled after some torch operations?",
174
+ data_ptr,
175
+ )
176
+ self.create_tensor(data_ptr, None)
177
+
178
+ def ensure_tensor_does_not_exist(self, data_ptr: DataPtr) -> None:
179
+ if data_ptr in self.accesses:
180
+ logger.info(
181
+ "Found duplicate tensor allocation in the trace for tensor with "
182
+ "pointer: %s. Assuming the trace for tensor deallocation "
183
+ "wasn't caught and backfilling it now. "
184
+ "Perhaps the sanitizer was enabled after some torch operations?",
185
+ data_ptr,
186
+ )
187
+ self.delete_tensor(data_ptr)
188
+
189
+ def create_tensor(
190
+ self, data_ptr: DataPtr, stack_trace: Optional[traceback.StackSummary]
191
+ ) -> None:
192
+ self.accesses[data_ptr] = TensorInfo(stack_trace)
193
+
194
+ def delete_tensor(self, data_ptr: DataPtr) -> None:
195
+ del self.accesses[data_ptr]
196
+
197
+ def were_there_reads_since_last_write(self, data_ptr: DataPtr) -> bool:
198
+ return True if self.accesses[data_ptr].reads else False
199
+
200
+ def get_allocation_stack_trace(
201
+ self, data_ptr: DataPtr
202
+ ) -> Optional[traceback.StackSummary]:
203
+ return self.accesses[data_ptr].allocation_stack_trace
204
+
205
+ def get_write(self, data_ptr: DataPtr) -> Optional[Access]:
206
+ return self.accesses[data_ptr].write
207
+
208
+ def get_reads(self, data_ptr: DataPtr) -> List[Access]:
209
+ return self.accesses[data_ptr].reads
210
+
211
+ def add_read(self, data_ptr: DataPtr, access: Access) -> None:
212
+ self.accesses[data_ptr].reads.append(access)
213
+
214
+ def set_write(self, data_ptr: DataPtr, access: Access) -> None:
215
+ self.accesses[data_ptr].write = access
216
+ self.accesses[data_ptr].reads = []
217
+
218
+
219
+ class StreamSynchronizations:
220
+ def __init__(self):
221
+ self.current_sync_states: Dict[StreamId, Dict[StreamId, SeqNum]] = {}
222
+ self.recorded_sync_states: Dict[EventId, Dict[StreamId, SeqNum]] = {}
223
+ self.host_sync_state: Dict[StreamId, SeqNum] = {}
224
+ self.create_stream(DEFAULT_STREAM_ID)
225
+
226
+ def _ensure_stream_exists(self, stream: StreamId) -> None:
227
+ if stream not in self.current_sync_states:
228
+ logger.info(
229
+ "Found Stream with id: %s, but no matching stream "
230
+ "creation in the trace. Backfilling the trace now. "
231
+ "Perhaps the sanitizer was enabled after some torch operations?",
232
+ stream,
233
+ )
234
+ self.create_stream(stream)
235
+
236
+ def _ensure_event_exists(self, event: EventId) -> None:
237
+ if event not in self.recorded_sync_states:
238
+ logger.info(
239
+ "Found Event with id: %s, but no matching event "
240
+ "creation in the trace. Backfilling the trace now. "
241
+ "Perhaps the sanitizer was enabled after some torch operations?",
242
+ event,
243
+ )
244
+ self.create_event(event)
245
+
246
+ def _ensure_event_does_not_exist(self, event: EventId) -> None:
247
+ if event in self.recorded_sync_states:
248
+ logger.info(
249
+ "Found duplicate event creation in the trace for event with "
250
+ "id: %s. Assuming the trace for event deletion wasn't caught "
251
+ "and backfilling it now. "
252
+ "Perhaps the sanitizer was enabled after some torch operations?",
253
+ event,
254
+ )
255
+ self.delete_event(event)
256
+
257
+ def create_stream(self, stream: StreamId) -> None:
258
+ if stream in self.current_sync_states:
259
+ logger.info(
260
+ "Found duplicate Stream creation in the trace for Stream with "
261
+ "id: %s. PyTorch Streams are only created once, so this "
262
+ "trace entry is ignored.",
263
+ stream,
264
+ )
265
+ else:
266
+ self.host_sync_state[stream] = 0
267
+ self.current_sync_states[stream] = self.host_sync_state.copy()
268
+
269
+ def create_event(self, event: EventId) -> None:
270
+ self._ensure_event_does_not_exist(event)
271
+ self.recorded_sync_states[event] = {}
272
+
273
+ def delete_event(self, event: EventId) -> None:
274
+ self._ensure_event_exists(event)
275
+ del self.recorded_sync_states[event]
276
+
277
+ def update_seq_num(self, stream: StreamId, seq_num: SeqNum) -> None:
278
+ self._ensure_stream_exists(stream)
279
+ self.current_sync_states[stream][stream] = seq_num
280
+
281
+ def record_state(self, event: EventId, stream: StreamId) -> None:
282
+ self._ensure_event_exists(event)
283
+ self._ensure_stream_exists(stream)
284
+ self.recorded_sync_states[event] = self.current_sync_states[stream].copy()
285
+
286
+ def _state_wait_for_other(
287
+ self, state: Dict[StreamId, SeqNum], other: Dict[StreamId, SeqNum]
288
+ ) -> None:
289
+ for stream, seq_num in other.items():
290
+ state[stream] = max(state.get(stream, -1), seq_num)
291
+
292
+ def stream_wait_for_event(self, stream: StreamId, event: EventId) -> None:
293
+ self._ensure_stream_exists(stream)
294
+ self._ensure_event_exists(event)
295
+ self._state_wait_for_other(
296
+ self.current_sync_states[stream], self.recorded_sync_states[event]
297
+ )
298
+
299
+ def all_streams_wait_for_event(self, event: EventId) -> None:
300
+ self._ensure_event_exists(event)
301
+ for stream in self.current_sync_states.keys():
302
+ self.stream_wait_for_event(stream, event)
303
+
304
+ self._state_wait_for_other(
305
+ self.host_sync_state, self.recorded_sync_states[event]
306
+ )
307
+
308
+ def all_streams_wait_for_stream(self, stream: StreamId) -> None:
309
+ self._ensure_stream_exists(stream)
310
+ for state in self.current_sync_states.values():
311
+ self._state_wait_for_other(state, self.current_sync_states[stream])
312
+
313
+ self._state_wait_for_other(
314
+ self.host_sync_state, self.current_sync_states[stream]
315
+ )
316
+
317
+ def sync_all_streams(self) -> None:
318
+ for stream, state in self.current_sync_states.items():
319
+ self.host_sync_state[stream] = state[stream]
320
+
321
+ for state in self.current_sync_states.values():
322
+ self._state_wait_for_other(state, self.host_sync_state)
323
+
324
+ def is_ordered_after(
325
+ self, current_stream: StreamId, seq_num: SeqNum, other_stream: StreamId
326
+ ) -> bool:
327
+ self._ensure_stream_exists(current_stream)
328
+ self._ensure_stream_exists(other_stream)
329
+ return seq_num <= self.current_sync_states[current_stream].get(other_stream, -1)
330
+
331
+
332
+ class EventHandler:
333
+ """Analyzes CSAN trace for synchronization errors.
334
+
335
+ Stores information on each stream's synchronizations with other streams as well
336
+ as tensor accesses to determine whether a given kernel launch might cause a
337
+ data race.
338
+ """
339
+
340
+ def __init__(self):
341
+ self.tensors_accessed = _TensorsAccessed()
342
+ self.syncs = StreamSynchronizations()
343
+ self.seq_num: SeqNum = 0
344
+
345
+ def _handle_kernel_launch(
346
+ self,
347
+ stream: StreamId,
348
+ read_only: Set[DataPtr],
349
+ read_write: Set[DataPtr],
350
+ outputs: Set[DataPtr],
351
+ operator: str,
352
+ tensor_aliases: Dict[int, List[str]],
353
+ ) -> List[SynchronizationError]:
354
+ def check_conflict(
355
+ data_ptr: DataPtr, current_access: Access, previous_access: Optional[Access]
356
+ ) -> None:
357
+ if previous_access is None:
358
+ return
359
+ if not self.syncs.is_ordered_after(
360
+ current_access.stream, previous_access.seq_num, previous_access.stream
361
+ ):
362
+ error_list.append(
363
+ UnsynchronizedAccessError(
364
+ data_ptr,
365
+ self.tensors_accessed.get_allocation_stack_trace(data_ptr),
366
+ current_access,
367
+ previous_access,
368
+ )
369
+ )
370
+
371
+ error_list: List[SynchronizationError] = []
372
+ self.seq_num += 1
373
+ self.syncs.update_seq_num(stream, self.seq_num)
374
+ stack_trace = traceback.StackSummary.extract(
375
+ traceback.walk_stack(inspect.currentframe()), lookup_lines=False
376
+ )
377
+ # The stack trace generated in this way is in the inverse order, so it must be
378
+ # reversed.
379
+ stack_trace.reverse()
380
+
381
+ for data_ptr in read_only:
382
+ self.tensors_accessed.ensure_tensor_exists(data_ptr)
383
+ current_access = Access(
384
+ AccessType.READ,
385
+ self.seq_num,
386
+ stream,
387
+ operator,
388
+ tensor_aliases[data_ptr],
389
+ data_ptr in outputs,
390
+ stack_trace,
391
+ )
392
+ check_conflict(
393
+ data_ptr, current_access, self.tensors_accessed.get_write(data_ptr)
394
+ )
395
+ self.tensors_accessed.add_read(data_ptr, current_access)
396
+
397
+ for data_ptr in read_write:
398
+ self.tensors_accessed.ensure_tensor_exists(data_ptr)
399
+ current_access = Access(
400
+ AccessType.WRITE,
401
+ self.seq_num,
402
+ stream,
403
+ operator,
404
+ tensor_aliases[data_ptr],
405
+ data_ptr in outputs,
406
+ stack_trace,
407
+ )
408
+ if self.tensors_accessed.were_there_reads_since_last_write(data_ptr):
409
+ for previous_access in self.tensors_accessed.get_reads(data_ptr):
410
+ check_conflict(data_ptr, current_access, previous_access)
411
+ else:
412
+ check_conflict(
413
+ data_ptr, current_access, self.tensors_accessed.get_write(data_ptr)
414
+ )
415
+ self.tensors_accessed.set_write(data_ptr, current_access)
416
+
417
+ return error_list
418
+
419
+ def _handle_event_creation(self, event: EventId) -> None:
420
+ self.syncs.create_event(event)
421
+
422
+ def _handle_event_deletion(self, event: EventId) -> None:
423
+ self.syncs.delete_event(event)
424
+
425
+ def _handle_event_record(self, event: EventId, stream: StreamId) -> None:
426
+ self.syncs.record_state(event, stream)
427
+
428
+ def _handle_event_wait(self, event: EventId, stream: StreamId) -> None:
429
+ self.syncs.stream_wait_for_event(stream, event)
430
+
431
+ def _handle_memory_allocation(self, data_ptr: DataPtr) -> None:
432
+ self.tensors_accessed.ensure_tensor_does_not_exist(data_ptr)
433
+ stack_trace = traceback.StackSummary.extract(
434
+ traceback.walk_stack(inspect.currentframe()), lookup_lines=False
435
+ )
436
+ # The stack trace generated in this way is in the inverse order, so it must be
437
+ # reversed.
438
+ stack_trace.reverse()
439
+ self.tensors_accessed.create_tensor(
440
+ data_ptr,
441
+ stack_trace,
442
+ )
443
+
444
+ def _handle_memory_deallocation(self, data_ptr: DataPtr) -> None:
445
+ self.tensors_accessed.ensure_tensor_exists(data_ptr)
446
+ self.tensors_accessed.delete_tensor(data_ptr)
447
+
448
+ def _handle_stream_creation(self, stream: StreamId) -> None:
449
+ self.syncs.create_stream(stream)
450
+
451
+ def _handle_device_synchronization(self) -> None:
452
+ self.syncs.sync_all_streams()
453
+
454
+ def _handle_stream_synchronization(self, stream: StreamId) -> None:
455
+ self.syncs.all_streams_wait_for_stream(stream)
456
+
457
+ def _handle_event_synchronization(self, event: EventId) -> None:
458
+ self.syncs.all_streams_wait_for_event(event)
459
+
460
+
461
+ def zip_by_key(a: Dict[TK, TVa], b: Dict[TK, TVb]) -> Iterator[Tuple[TK, TVa, TVb]]:
462
+ for arg, value in a.items():
463
+ if arg in b:
464
+ yield arg, value, b[arg]
465
+
466
+
467
+ def zip_arguments(
468
+ schema: torch.FunctionSchema, args: Tuple[Any, ...], kwargs: Dict[str, Any]
469
+ ) -> Iterator[Tuple[torch.Argument, Any]]:
470
+ schema_args = schema.arguments[: len(args)]
471
+ schema_kwargs = {arg.name: arg for arg in schema.arguments[len(args) :]}
472
+
473
+ yield from zip(schema_args, args)
474
+
475
+ for _, argument, value in zip_by_key(schema_kwargs, kwargs):
476
+ yield (argument, value)
477
+
478
+
479
+ class ArgumentHandler:
480
+ def __init__(self):
481
+ self.dataptrs_read: Set[DataPtr] = set()
482
+ self.dataptrs_written: Set[DataPtr] = set()
483
+ self.tensor_aliases: Dict[DataPtr, List[str]] = dict()
484
+ self.outputs: Set[DataPtr] = set()
485
+
486
+ def _handle_argument(
487
+ self,
488
+ value: Any,
489
+ is_write: bool,
490
+ name: Optional[str] = None,
491
+ is_output: bool = False,
492
+ ) -> None:
493
+ if isinstance(value, torch.Tensor) and value.is_cuda:
494
+ data_ptr = value.data_ptr()
495
+ if is_write:
496
+ self.dataptrs_written.add(data_ptr)
497
+ else:
498
+ self.dataptrs_read.add(data_ptr)
499
+
500
+ self.tensor_aliases.setdefault(data_ptr, [])
501
+ if name is not None:
502
+ self.tensor_aliases[data_ptr].append(name)
503
+ if is_output:
504
+ self.outputs.add(data_ptr)
505
+
506
+ def parse_inputs(
507
+ self,
508
+ schema: torch.FunctionSchema,
509
+ args: Tuple[Any, ...],
510
+ kwargs: Dict[str, Any],
511
+ ) -> None:
512
+ for argument, value in zip_arguments(schema, args, kwargs):
513
+ is_write = argument.alias_info is not None and argument.alias_info.is_write
514
+ pytree.tree_map_(
515
+ functools.partial(
516
+ self._handle_argument, is_write=is_write, name=argument.name
517
+ ),
518
+ value,
519
+ )
520
+
521
+ def parse_outputs(self, outputs: Any) -> None:
522
+ pytree.tree_map_(
523
+ functools.partial(self._handle_argument, is_write=True, is_output=True),
524
+ outputs,
525
+ )
526
+
527
+
528
+ class CUDASanitizerDispatchMode(TorchDispatchMode):
529
+ def __init__(self):
530
+ self.event_handler = EventHandler()
531
+ torch._C._activate_cuda_trace()
532
+ cuda_trace.register_callback_for_cuda_event_creation(
533
+ self.event_handler._handle_event_creation
534
+ )
535
+ cuda_trace.register_callback_for_cuda_event_deletion(
536
+ self.event_handler._handle_event_deletion
537
+ )
538
+ cuda_trace.register_callback_for_cuda_event_record(
539
+ self.event_handler._handle_event_record
540
+ )
541
+ cuda_trace.register_callback_for_cuda_event_wait(
542
+ self.event_handler._handle_event_wait
543
+ )
544
+ cuda_trace.register_callback_for_cuda_memory_allocation(
545
+ self.event_handler._handle_memory_allocation
546
+ )
547
+ cuda_trace.register_callback_for_cuda_memory_deallocation(
548
+ self.event_handler._handle_memory_deallocation
549
+ )
550
+ cuda_trace.register_callback_for_cuda_stream_creation(
551
+ self.event_handler._handle_stream_creation
552
+ )
553
+ cuda_trace.register_callback_for_cuda_device_synchronization(
554
+ self.event_handler._handle_device_synchronization
555
+ )
556
+ cuda_trace.register_callback_for_cuda_stream_synchronization(
557
+ self.event_handler._handle_stream_synchronization
558
+ )
559
+ cuda_trace.register_callback_for_cuda_event_synchronization(
560
+ self.event_handler._handle_event_synchronization
561
+ )
562
+
563
+ def __torch_dispatch__(self, func, types, args=(), kwargs=None):
564
+ if kwargs is None:
565
+ kwargs = {}
566
+
567
+ argument_handler = ArgumentHandler()
568
+ argument_handler.parse_inputs(func._schema, args, kwargs)
569
+
570
+ outputs = func(*args, **kwargs)
571
+
572
+ argument_handler.parse_outputs(outputs)
573
+ errors = self.event_handler._handle_kernel_launch(
574
+ torch.cuda.current_stream().cuda_stream,
575
+ argument_handler.dataptrs_read - argument_handler.dataptrs_written,
576
+ argument_handler.dataptrs_written,
577
+ argument_handler.outputs,
578
+ func._schema,
579
+ argument_handler.tensor_aliases,
580
+ )
581
+ if errors:
582
+ for error in errors:
583
+ print(error, file=sys.stderr)
584
+ raise CUDASanitizerErrors(errors)
585
+
586
+ return outputs
587
+
588
+
589
+ class CUDASanitizer:
590
+ """Manages the lifetime of a CUDASanitizer dispatch mode object.
591
+
592
+ The CUDASanitizer class wraps the entering/exiting functions of the dispatch mode
593
+ context manager in the enable function/destructor, respectively. This is to
594
+ explicitly set the lifetime of the dispatch mode object to that of the application.
595
+ This approach was deemed more elegant than using the atexit module.
596
+ """
597
+
598
+ def __init__(self):
599
+ self.dispatch = CUDASanitizerDispatchMode()
600
+ self.enabled = False
601
+
602
+ def enable(self):
603
+ self.dispatch.__enter__()
604
+ self.enabled = True
605
+
606
+ def __del__(self):
607
+ if self.enabled:
608
+ self.dispatch.__exit__(None, None, None)
609
+
610
+
611
+ def enable_cuda_sanitizer():
612
+ """Enable CUDA Sanitizer.
613
+
614
+ The sanitizer will begin to analyze low-level CUDA calls invoked by torch functions
615
+ for synchronization errors. All data races found will be printed to the standard
616
+ error output along with stack traces of suspected causes. For best results, the
617
+ sanitizer should be enabled at the very beginning of the program.
618
+ """
619
+ cuda_sanitizer.enable()
620
+
621
+
622
+ cuda_sanitizer = CUDASanitizer()
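
As a rough, hedged illustration of the workflow the module docstring describes, the sanitizer is enabled before any CUDA work, after which an unsynchronized cross-stream access should be reported (exact reporting depends on timing and PyTorch version; the tensors below are placeholders):

    import torch
    from torch.cuda._sanitizer import enable_cuda_sanitizer

    enable_cuda_sanitizer()                 # same effect as exporting TORCH_CUDA_SANITIZER

    a = torch.zeros(2 ** 20, device="cuda")
    side = torch.cuda.Stream()
    with torch.cuda.stream(side):
        a.add_(1)                           # write on a side stream, no synchronization
    b = a.sum()                             # read on the default stream -> flagged as a possible data race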
llmeval-env/lib/python3.10/site-packages/torch/cuda/_utils.py ADDED
@@ -0,0 +1,38 @@
1
+ from typing import Any
2
+
3
+ import torch
4
+
5
+ # The _get_device_index has been moved to torch.utils._get_device_index
6
+ from torch._utils import _get_device_index as _torch_get_device_index
7
+
8
+
9
+ def _get_device_index(
10
+ device: Any, optional: bool = False, allow_cpu: bool = False
11
+ ) -> int:
12
+ r"""Get the device index from :attr:`device`, which can be a torch.device object, a Python integer, or ``None``.
13
+
14
+ If :attr:`device` is a torch.device object, returns the device index if it
15
+ is a CUDA device. Note that for a CUDA device without a specified index,
16
+ i.e., ``torch.device('cuda')``, this will return the current default CUDA
17
+ device if :attr:`optional` is ``True``. If :attr:`allow_cpu` is ``True``,
18
+ CPU devices will be accepted and ``-1`` will be returned in this case.
19
+
20
+ If :attr:`device` is a Python integer, it is returned as is.
21
+
22
+ If :attr:`device` is ``None``, this will return the current default CUDA
23
+ device if :attr:`optional` is ``True``.
24
+ """
25
+ if isinstance(device, int):
26
+ return device
27
+ if isinstance(device, str):
28
+ device = torch.device(device)
29
+ if isinstance(device, torch.device):
30
+ if allow_cpu:
31
+ if device.type not in ["cuda", "cpu"]:
32
+ raise ValueError(f"Expected a cuda or cpu device, but got: {device}")
33
+ elif device.type != "cuda":
34
+ raise ValueError(f"Expected a cuda device, but got: {device}")
35
+ if not torch.jit.is_scripting():
36
+ if isinstance(device, torch.cuda.device):
37
+ return device.idx
38
+ return _torch_get_device_index(device, optional, allow_cpu)
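
A brief, hedged illustration of the resolution rules the docstring above describes (this is a private helper, so behavior may shift between releases; assumes a CUDA-capable build):

    import torch
    from torch.cuda._utils import _get_device_index

    _get_device_index(1)                                       # plain int passes through -> 1
    _get_device_index(torch.device("cuda:3"))                  # explicit index -> 3
    _get_device_index(torch.device("cuda"), optional=True)     # no index -> current default CUDA device
    _get_device_index(torch.device("cpu"), allow_cpu=True)     # CPU accepted -> -1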
llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/__init__.py ADDED
@@ -0,0 +1,11 @@
1
+ from .autocast_mode import autocast, custom_bwd, custom_fwd
2
+ from .common import amp_definitely_not_available
3
+ from .grad_scaler import GradScaler
4
+
5
+ __all__ = [
6
+ "amp_definitely_not_available",
7
+ "autocast",
8
+ "custom_bwd",
9
+ "custom_fwd",
10
+ "GradScaler",
11
+ ]
llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (426 Bytes). View file
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/__pycache__/autocast_mode.cpython-310.pyc ADDED
Binary file (4.79 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/autocast_mode.py ADDED
@@ -0,0 +1,144 @@
1
+ import collections
2
+ import functools
3
+
4
+ import torch
5
+
6
+ try:
7
+ import numpy as np
8
+
9
+ HAS_NUMPY = True
10
+ except ModuleNotFoundError:
11
+ np = None # type: ignore[assignment]
12
+ from typing import Any
13
+
14
+ __all__ = ["autocast", "custom_fwd", "custom_bwd"]
15
+
16
+
17
+ class autocast(torch.amp.autocast_mode.autocast):
18
+ r"""See :class:`torch.autocast`.
19
+
20
+ ``torch.cuda.amp.autocast(args...)`` is equivalent to ``torch.autocast("cuda", args...)``
21
+ """
22
+
23
+ def __init__(
24
+ self,
25
+ enabled: bool = True,
26
+ dtype: torch.dtype = torch.float16,
27
+ cache_enabled: bool = True,
28
+ ):
29
+ if torch._jit_internal.is_scripting():
30
+ self._enabled = enabled
31
+ self.device = "cuda"
32
+ self.fast_dtype = dtype
33
+ return
34
+ super().__init__(
35
+ "cuda", enabled=enabled, dtype=dtype, cache_enabled=cache_enabled
36
+ )
37
+
38
+ def __enter__(self):
39
+ if torch._jit_internal.is_scripting():
40
+ return self
41
+ return super().__enter__()
42
+
43
+ # TODO: discuss a unified TorchScript-friendly API for autocast
44
+ def __exit__(self, exc_type: Any, exc_val: Any, exc_tb: Any): # type: ignore[override]
45
+ if torch._jit_internal.is_scripting():
46
+ return
47
+ return super().__exit__(exc_type, exc_val, exc_tb)
48
+
49
+ def __call__(self, func):
50
+ if torch._jit_internal.is_scripting():
51
+ return func
52
+ return super().__call__(func)
53
+
54
+
55
+ # Casts Tensors and containers of Tensors. Special-cases passthroughs for strings and np.ndarrays, which
56
+ # may be falsely detected as "Iterables."
57
+ def _cast(value, dtype):
58
+ if isinstance(value, torch.Tensor):
59
+ is_eligible = (
60
+ value.is_floating_point()
61
+ and value.is_cuda
62
+ and (value.dtype is not torch.float64)
63
+ )
64
+ return value.to(dtype) if is_eligible else value
65
+ elif isinstance(value, (str, bytes)):
66
+ return value
67
+ elif HAS_NUMPY and isinstance(value, np.ndarray):
68
+ return value
69
+ elif isinstance(value, collections.abc.Mapping):
70
+ return {_cast(k, dtype): _cast(v, dtype) for k, v in value.items()}
71
+ elif isinstance(value, collections.abc.Iterable):
72
+ iterable = (_cast(v, dtype) for v in value)
73
+ if isinstance(value, (list, tuple)):
74
+ return type(value)(iterable)
75
+ else:
76
+ return iterable
77
+ else:
78
+ return value
79
+
80
+
81
+ # custom_fwd is a decorator that may or may not be used with arguments, following
82
+ # https://github.com/dabeaz/python-cookbook/tree/master/src/9/defining_a_decorator_that_takes_an_optional_argument.
83
+ # this works:
84
+ # @custom_fwd
85
+ # def forward(...):
86
+ # this also works:
87
+ # @custom_fwd(cast_inputs=torch.float)
88
+ # def forward(...):
89
+ def custom_fwd(fwd=None, *, cast_inputs=None):
90
+ """
91
+ Create a helper decorator for ``forward`` methods of custom autograd functions.
92
+
93
+ Autograd functions are subclasses of :class:`torch.autograd.Function`.
94
+ See the :ref:`example page<amp-custom-examples>` for more detail.
95
+
96
+ Args:
97
+ cast_inputs (:class:`torch.dtype` or None, optional, default=None): If not ``None``,
98
+ when ``forward`` runs in an autocast-enabled region, casts incoming
99
+ floating-point CUDA Tensors to the target dtype (non-floating-point Tensors are not affected),
100
+ then executes ``forward`` with autocast disabled.
101
+ If ``None``, ``forward``'s internal ops execute with the current autocast state.
102
+
103
+ .. note::
104
+ If the decorated ``forward`` is called outside an autocast-enabled region,
105
+ :func:`custom_fwd<custom_fwd>` is a no-op and ``cast_inputs`` has no effect.
106
+ """
107
+ if fwd is None:
108
+ return functools.partial(custom_fwd, cast_inputs=cast_inputs)
109
+
110
+ @functools.wraps(fwd)
111
+ def decorate_fwd(*args, **kwargs):
112
+ args[0]._dtype = torch.get_autocast_gpu_dtype()
113
+ if cast_inputs is None:
114
+ args[0]._fwd_used_autocast = torch.is_autocast_enabled()
115
+ return fwd(*args, **kwargs)
116
+ else:
117
+ autocast_context = torch.is_autocast_enabled()
118
+ args[0]._fwd_used_autocast = False
119
+ if autocast_context:
120
+ with autocast(enabled=False):
121
+ return fwd(*_cast(args, cast_inputs), **_cast(kwargs, cast_inputs))
122
+ else:
123
+ return fwd(*args, **kwargs)
124
+
125
+ return decorate_fwd
126
+
127
+
128
+ # Autograd ensures incoming gradients are the same type as forward outputs. Allowing a separate
129
+ # cast_inputs argument on custom_bwd is unnecessary and could cause errors if it doesn't match
130
+ # cast_inputs supplied to custom_fwd.
131
+ def custom_bwd(bwd):
132
+ """Create a helper decorator for backward methods of custom autograd functions.
133
+
134
+ Autograd functions are subclasses of :class:`torch.autograd.Function`.
135
+ Ensures that ``backward`` executes with the same autocast state as ``forward``.
136
+ See the :ref:`example page<amp-custom-examples>` for more detail.
137
+ """
138
+
139
+ @functools.wraps(bwd)
140
+ def decorate_bwd(*args, **kwargs):
141
+ with autocast(enabled=args[0]._fwd_used_autocast, dtype=args[0]._dtype):
142
+ return bwd(*args, **kwargs)
143
+
144
+ return decorate_bwd
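
The custom_fwd/custom_bwd docstrings above describe how custom autograd functions interact with autocast; a hedged sketch with a toy function (ScaledMul is illustrative, not part of the module):

    import torch
    from torch.cuda.amp import autocast, custom_bwd, custom_fwd

    class ScaledMul(torch.autograd.Function):          # illustrative example function
        @staticmethod
        @custom_fwd(cast_inputs=torch.float32)          # run forward in float32, autocast disabled
        def forward(ctx, a, b):
            ctx.save_for_backward(a, b)
            return a * b

        @staticmethod
        @custom_bwd                                     # backward reuses forward's autocast state
        def backward(ctx, grad_out):
            a, b = ctx.saved_tensors
            return grad_out * b, grad_out * a

    x = torch.randn(8, device="cuda", requires_grad=True)
    y = torch.randn(8, device="cuda", requires_grad=True)
    with autocast():
        out = ScaledMul.apply(x, y).sum()
    out.backward()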
llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/common.py ADDED
@@ -0,0 +1,9 @@
1
+ from importlib.util import find_spec
2
+
3
+ import torch
4
+
5
+ __all__ = ["amp_definitely_not_available"]
6
+
7
+
8
+ def amp_definitely_not_available():
9
+ return not (torch.cuda.is_available() or find_spec("torch_xla"))
llmeval-env/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py ADDED
@@ -0,0 +1,28 @@
1
+ import torch
2
+ from torch.amp.grad_scaler import OptState
3
+
4
+ __all__ = ["GradScaler", "OptState"]
5
+
6
+
7
+ class GradScaler(torch.amp.GradScaler):
8
+ r"""
9
+ See :class:`torch.amp.GradScaler`.
10
+ ``torch.cuda.amp.GradScaler(args...)`` is equivalent to ``torch.amp.GradScaler("cuda", args...)``
11
+ """
12
+
13
+ def __init__(
14
+ self,
15
+ init_scale: float = 2.0**16,
16
+ growth_factor: float = 2.0,
17
+ backoff_factor: float = 0.5,
18
+ growth_interval: int = 2000,
19
+ enabled: bool = True,
20
+ ) -> None:
21
+ super().__init__(
22
+ "cuda",
23
+ init_scale=init_scale,
24
+ growth_factor=growth_factor,
25
+ backoff_factor=backoff_factor,
26
+ growth_interval=growth_interval,
27
+ enabled=enabled,
28
+ )
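
Since the class above simply forwards to :class:`torch.amp.GradScaler` with ``"cuda"``, the usual
mixed-precision training-loop pattern applies. A sketch, where ``model``, ``optimizer``, ``loss_fn`` and
``data`` are hypothetical::

    scaler = torch.cuda.amp.GradScaler()
    for inputs, targets in data:
        optimizer.zero_grad(set_to_none=True)
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = loss_fn(model(inputs), targets)
        scaler.scale(loss).backward()   # scale the loss to avoid fp16 gradient underflow
        scaler.step(optimizer)          # unscales gradients; skips the step if inf/nan is found
        scaler.update()                 # adjust the scale factor for the next iteration
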
llmeval-env/lib/python3.10/site-packages/torch/cuda/comm.py ADDED
@@ -0,0 +1,18 @@
1
+ # The functions here have been moved to torch.nn.parallel.comm
2
+ from torch.nn.parallel.comm import (
3
+ broadcast,
4
+ broadcast_coalesced,
5
+ gather,
6
+ reduce_add,
7
+ reduce_add_coalesced,
8
+ scatter,
9
+ )
10
+
11
+ __all__ = [
12
+ "broadcast",
13
+ "broadcast_coalesced",
14
+ "reduce_add",
15
+ "reduce_add_coalesced",
16
+ "scatter",
17
+ "gather",
18
+ ]
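
A usage sketch for the re-exported helpers, assuming at least two visible GPUs::

    import torch
    import torch.cuda.comm as comm

    t = torch.arange(8.0, device="cuda:0")
    copies = comm.broadcast(t, devices=[0, 1])     # one full copy per listed device
    chunks = comm.scatter(t, devices=[0, 1])       # split along dim 0 across the devices
    gathered = comm.gather(chunks, destination=0)  # concatenate the chunks back on cuda:0
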
llmeval-env/lib/python3.10/site-packages/torch/cuda/error.py ADDED
File without changes
llmeval-env/lib/python3.10/site-packages/torch/cuda/graphs.py ADDED
@@ -0,0 +1,479 @@
1
+ import gc
2
+ from typing import Optional
3
+
4
+ import torch
5
+ from torch.utils import _pytree
6
+ from .._utils import _dummy_type
7
+
8
+ if not hasattr(torch._C, "_CudaStreamBase"):
9
+ # Define dummy base classes
10
+ torch._C.__dict__["_CUDAGraph"] = _dummy_type("_CUDAGraph")
11
+ torch._C.__dict__["_graph_pool_handle"] = _dummy_type("_graph_pool_handle")
12
+ torch._C.__dict__["_cuda_isCurrentStreamCapturing"] = _dummy_type(
13
+ "_cuda_isCurrentStreamCapturing"
14
+ )
15
+
16
+ from torch._C import ( # noqa: F401
17
+ _cuda_isCurrentStreamCapturing,
18
+ _CUDAGraph,
19
+ _graph_pool_handle,
20
+ )
21
+
22
+
23
+ def is_current_stream_capturing():
24
+ r"""Return True if CUDA graph capture is underway on the current CUDA stream, False otherwise.
25
+
26
+ If a CUDA context does not exist on the current device, returns False without initializing the context.
27
+ """
28
+ return _cuda_isCurrentStreamCapturing()
29
+
30
+
31
+ # Python shim helps Sphinx process docstrings more reliably.
32
+ def graph_pool_handle():
33
+ r"""Return an opaque token representing the id of a graph memory pool.
34
+
35
+ See :ref:`Graph memory management<graph-memory-management>`.
36
+
37
+ .. warning::
38
+ This API is in beta and may change in future releases.
39
+ """
40
+ return _graph_pool_handle()
41
+
42
+
43
+ # Python shim helps Sphinx process docstrings more reliably.
44
+ class CUDAGraph(torch._C._CUDAGraph):
45
+ r"""Wrapper around a CUDA graph.
46
+
47
+ .. warning::
48
+ This API is in beta and may change in future releases.
49
+ """
50
+
51
+ def __new__(cls):
52
+ return super().__new__(cls)
53
+
54
+ def capture_begin(self, pool=None, capture_error_mode="global"):
55
+ r"""Begin capturing CUDA work on the current stream.
56
+
57
+ Typically, you shouldn't call ``capture_begin`` yourself.
58
+ Use :class:`~torch.cuda.graph` or :func:`~torch.cuda.make_graphed_callables`,
59
+ which call ``capture_begin`` internally.
60
+
61
+ Arguments:
62
+ pool (optional): Token (returned by :func:`~torch.cuda.graph_pool_handle` or
63
+ :meth:`other_Graph_instance.pool()<torch.cuda.CUDAGraph.pool>`) that hints this graph may share memory
64
+ with the indicated pool. See :ref:`Graph memory management<graph-memory-management>`.
65
+ capture_error_mode (str, optional): specifies the cudaStreamCaptureMode for the graph capture stream.
66
+ Can be "global", "thread_local" or "relaxed". During cuda graph capture, some actions, such as cudaMalloc,
67
+ may be unsafe. "global" will error on actions in other threads, "thread_local" will only error for
68
+ actions in the current thread, and "relaxed" will not error on these actions. Do NOT change this setting
69
+ unless you're familiar with `cudaStreamCaptureMode <https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html#group__CUDART__STREAM_1g9d0535d93a214cbf126835257b16ba85>`_
70
+ """ # noqa: B950
71
+ super().capture_begin(pool=pool, capture_error_mode=capture_error_mode)
72
+
73
+ def capture_end(self):
74
+ r"""End CUDA graph capture on the current stream.
75
+
76
+ After ``capture_end``, ``replay`` may be called on this instance.
77
+
78
+ Typically, you shouldn't call ``capture_end`` yourself.
79
+ Use :class:`~torch.cuda.graph` or :func:`~torch.cuda.make_graphed_callables`,
80
+ which call ``capture_end`` internally.
81
+ """
82
+ super().capture_end()
83
+
84
+ def replay(self):
85
+ r"""Replay the CUDA work captured by this graph."""
86
+ super().replay()
87
+
88
+ def reset(self):
89
+ r"""Delete the graph currently held by this instance."""
90
+ super().reset()
91
+
92
+ def pool(self):
93
+ r"""Return an opaque token representing the id of this graph's memory pool.
94
+
95
+ This id can optionally be passed to another graph's ``capture_begin``,
96
+ which hints the other graph may share the same memory pool.
97
+ """
98
+ return super().pool()
99
+
100
+ def enable_debug_mode(self):
101
+ r"""Enable debugging mode for CUDAGraph.debug_dump."""
102
+ return super().enable_debug_mode()
103
+
104
+ def debug_dump(self, debug_path):
105
+ r"""
106
+ Arguments:
107
+ debug_path (required): Path to dump the graph to.
108
+
109
+ Calls a debugging function to dump the graph if the debugging is
110
+ enabled via CUDAGraph.enable_debug_mode()
111
+ """
112
+ return super().debug_dump(debug_path)
113
+
114
+
115
+ class graph:
116
+ r"""Context-manager that captures CUDA work into a :class:`torch.cuda.CUDAGraph` object for later replay.
117
+
118
+ See :ref:`CUDA Graphs <cuda-graph-semantics>` for a general introduction,
119
+ detailed use, and constraints.
120
+
121
+ Arguments:
122
+ cuda_graph (torch.cuda.CUDAGraph): Graph object used for capture.
123
+ pool (optional): Opaque token (returned by a call to :func:`~torch.cuda.graph_pool_handle()` or
124
+ :meth:`other_Graph_instance.pool()<torch.cuda.CUDAGraph.pool>`) hinting this graph's capture
125
+ may share memory from the specified pool. See :ref:`Graph memory management<graph-memory-management>`.
126
+ stream (torch.cuda.Stream, optional): If supplied, will be set as the current stream in the context.
127
+ If not supplied, ``graph`` sets its own internal side stream as the current stream in the context.
128
+ capture_error_mode (str, optional): specifies the cudaStreamCaptureMode for the graph capture stream.
129
+ Can be "global", "thread_local" or "relaxed". During cuda graph capture, some actions, such as cudaMalloc,
130
+ may be unsafe. "global" will error on actions in other threads, "thread_local" will only error for
131
+ actions in the current thread, and "relaxed" will not error on these actions. Do NOT change this setting
132
+ unless you're familiar with `cudaStreamCaptureMode <https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html#group__CUDART__STREAM_1g9d0535d93a214cbf126835257b16ba85>`_
133
+
134
+ .. note::
135
+ For effective memory sharing, if you pass a ``pool`` used by a previous capture and the previous capture
136
+ used an explicit ``stream`` argument, you should pass the same ``stream`` argument to this capture.
137
+
138
+ .. warning::
139
+ This API is in beta and may change in future releases.
140
+
141
+ .. _cudaStreamCaptureMode:
142
+ https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html#group__CUDART__STREAM_1g9d0535d93a214cbf126835257b16ba85
143
+ """ # noqa: B950
144
+
145
+ default_capture_stream: Optional["torch.cuda.Stream"] = None
146
+
147
+ def __init__(
148
+ self,
149
+ cuda_graph,
150
+ pool=None,
151
+ stream=None,
152
+ capture_error_mode: str = "global",
153
+ ):
154
+ # Lazy-init of default_capture_stream helps avoid circular-import errors.
155
+ # Not thread safe, but graphs already have the general (explicitly documented)
156
+ # restriction that only one capture may be underway at a time in the process.
157
+ if self.__class__.default_capture_stream is None:
158
+ self.__class__.default_capture_stream = torch.cuda.Stream()
159
+
160
+ self.pool = () if pool is None else (pool,)
161
+ self.capture_stream = (
162
+ stream if stream is not None else self.__class__.default_capture_stream
163
+ )
164
+ assert self.capture_stream is not None
165
+ self.stream_ctx = torch.cuda.stream(self.capture_stream)
166
+ self.cuda_graph = cuda_graph
167
+ self.capture_error_mode = capture_error_mode
168
+
169
+ def __enter__(self):
170
+ # Free as much memory as we can for the graph
171
+ torch.cuda.synchronize()
172
+ gc.collect()
173
+ torch.cuda.empty_cache()
174
+
175
+ # Stackoverflow seems comfortable with this pattern
176
+ # https://stackoverflow.com/questions/26635684/calling-enter-and-exit-manually#39172487
177
+ self.stream_ctx.__enter__()
178
+
179
+ self.cuda_graph.capture_begin(
180
+ *self.pool, capture_error_mode=self.capture_error_mode
181
+ )
182
+
183
+ def __exit__(self, exc_type, exc_value, traceback):
184
+ self.cuda_graph.capture_end()
185
+ self.stream_ctx.__exit__(exc_type, exc_value, traceback)
186
+ # returning None should propagate exceptions from either capture_end or stream_ctx.__exit__()
187
+
188
+
189
+ def make_graphed_callables(
190
+ callables, sample_args, num_warmup_iters=3, allow_unused_input=False, pool=None
191
+ ):
192
+ r"""Accept callables (functions or :class:`nn.Module<torch.nn.Module>`\ s) and returns graphed versions.
193
+
194
+ Each graphed callable's forward pass runs its source callable's
195
+ forward CUDA work as a CUDA graph inside a single autograd node.
196
+
197
+ The graphed callable's forward pass also appends
198
+ a backward node to the autograd graph. During backward, this node runs the
199
+ callable's backward work as a CUDA graph.
200
+
201
+ Therefore, each graphed callable should be a drop-in replacement for its source callable
202
+ in an autograd-enabled training loop.
203
+
204
+ See :ref:`Partial-network capture<partial-network-capture>` for detailed use and constraints.
205
+
206
+ If you pass a tuple of several callables, their captures will use the same memory pool.
207
+ See :ref:`Graph memory management<graph-memory-management>` for when this is appropriate.
208
+
209
+ Arguments:
210
+ callables (torch.nn.Module or Python function, or tuple of these): Callable or callables to graph.
211
+ See :ref:`Graph memory management<graph-memory-management>` for when passing a tuple of callables
212
+ is appropriate. If you pass a tuple of callables, their order in the tuple must be the same order
213
+ they'll run in the live workload.
214
+ sample_args (tuple of Tensors, or tuple of tuples of Tensors): Samples args for each callable.
215
+ If a single callable was passed, ``sample_args`` must be a single tuple of argument Tensors.
216
+ If a tuple of callables was passed, ``sample_args`` must be tuple of tuples of argument Tensors.
217
+ num_warmup_iters (int): The number of warmup iterations. Currently, ``DistributedDataParallel`` needs
218
+ 11 iterations for warm-up. Default: ``3``.
219
+ allow_unused_input (bool): If False, specifying inputs that were not used when computing outputs
220
+ (and therefore their grad is always zero) is an error. Defaults to False.
221
+ pool (optional): Token (returned by :func:`~torch.cuda.graph_pool_handle` or
222
+ :meth:`other_Graph_instance.pool()<torch.cuda.CUDAGraph.pool>`) that hints this graph may share memory
223
+ with the indicated pool. See :ref:`Graph memory management<graph-memory-management>`.
224
+ .. note::
225
+ The ``requires_grad`` state of each Tensor in ``sample_args`` must match the state
226
+ that's expected for the corresponding real input in the training loop.
227
+
228
+ .. warning::
229
+ This API is in beta and may change in future releases.
230
+
231
+ .. warning::
232
+ ``sample_args`` for each callable must contain only Tensors. Other types are not allowed.
233
+
234
+ .. warning::
235
+ Returned callables do not support higher order differentiation (e.g., double backward).
236
+
237
+ .. warning::
238
+ In any :class:`~torch.nn.Module` passed to :func:`~make_graphed_callables`, only parameters
239
+ may be trainable. Buffers must have ``requires_grad=False``.
240
+
241
+ .. warning::
242
+ After you pass a :class:`torch.nn.Module` through :func:`~make_graphed_callables`,
243
+ you may not add or remove any of that Module's parameters or buffers.
244
+
245
+ .. warning::
246
+ :class:`torch.nn.Module`\s passed to :func:`~torch.cuda.make_graphed_callables` must not have module hooks
247
+ registered on them at the time they are passed. However, registering hooks on modules *after* passing them
248
+ through :func:`~torch.cuda.make_graphed_callables` is allowed.
249
+
250
+ .. warning::
251
+ When running a graphed callable, you must pass its arguments in the same order and format
252
+ they appeared in that callable's ``sample_args``.
253
+
254
+ .. warning::
255
+ Automatic mixed precision is supported in :func:`~torch.cuda.make_graphed_callables` only with autocast
256
+ caching disabled: the context manager `torch.cuda.amp.autocast()` must have `cache_enabled=False`.
257
+ """
258
+ if torch.is_autocast_enabled() and torch.is_autocast_cache_enabled():
259
+ raise RuntimeError(
260
+ "make_graphed_callables does not support the autocast caching. Please set `cache_enabled=False`."
261
+ )
262
+
263
+ just_one_callable = False
264
+
265
+ if not isinstance(callables, tuple):
266
+ just_one_callable = True
267
+ callables = (callables,)
268
+ sample_args = (sample_args,)
269
+
270
+ flatten_sample_args = []
271
+
272
+ for c, args in zip(callables, sample_args):
273
+ if isinstance(c, torch.nn.Module):
274
+ assert (
275
+ len(c._backward_hooks) == 0
276
+ and len(c._forward_hooks) == 0
277
+ and len(c._forward_pre_hooks) == 0
278
+ ), (
279
+ "Modules must not have hooks registered at the time they are passed. However, registering hooks "
280
+ + "on modules after passing them through make_graphed_callables is allowed."
281
+ )
282
+ assert all(b.requires_grad is False for b in c.buffers()), (
283
+ "In any :class:`~torch.nn.Module` passed to "
284
+ + ":func:`~make_graphed_callables`, only parameters may be trainable. All buffers must have "
285
+ + "``requires_grad=False``."
286
+ )
287
+ flatten_arg = _pytree.arg_tree_leaves(*args)
288
+ flatten_sample_args.append(tuple(flatten_arg))
289
+ assert all(isinstance(arg, torch.Tensor) for arg in flatten_arg), (
290
+ "In the beta API, sample_args "
291
+ + "for each callable must contain only Tensors. Other types are not allowed."
292
+ )
293
+
294
+ # If a callable is an nn.Module, its graph's full input surface is the args the user explicitly
295
+ # passes to forward (ie, its sample_args) AND the module's parameter attributes.
296
+ per_callable_len_user_args = [len(args) for args in flatten_sample_args]
297
+ per_callable_module_params = [
298
+ tuple(c.parameters()) if isinstance(c, torch.nn.Module) else ()
299
+ for c in callables
300
+ ]
301
+ per_callable_static_input_surfaces = [
302
+ flatten_sample_args[i] + per_callable_module_params[i]
303
+ for i in range(len(callables))
304
+ ]
305
+
306
+ fwd_graphs = [torch.cuda.CUDAGraph() for _ in range(len(callables))]
307
+ bwd_graphs = [torch.cuda.CUDAGraph() for _ in range(len(callables))]
308
+
309
+ mempool = graph_pool_handle() if pool is None else pool
310
+
311
+ # Warmup
312
+ # Hopefully prevents cudnn benchmarking and other lazy-initialization cuda work
313
+ # from ending up in any captures.
314
+ torch.cuda.synchronize()
315
+ with torch.cuda.stream(torch.cuda.Stream()):
316
+ for func, args, static_input_surface in zip(
317
+ callables, sample_args, per_callable_static_input_surfaces
318
+ ):
319
+ for _ in range(num_warmup_iters):
320
+ outputs = _pytree.tree_leaves(func(*args))
321
+ grad_inputs = torch.autograd.grad(
322
+ outputs=tuple(o for o in outputs if o.requires_grad),
323
+ inputs=tuple(i for i in static_input_surface if i.requires_grad),
324
+ grad_outputs=tuple(
325
+ torch.empty_like(o) for o in outputs if o.requires_grad
326
+ ),
327
+ only_inputs=True,
328
+ allow_unused=allow_unused_input,
329
+ )
330
+ del outputs, grad_inputs # type: ignore[possibly-undefined]
331
+ torch.cuda.synchronize()
332
+
333
+ # All captures here share a mempool. To avoid replays corrupting each other's memory,
334
+ # the safest approach is to capture all passes in the same order they'll run:
335
+ # fwd 1, fwd 2, ... fwd N, then bwd N, bwd N-1, ... bwd 1.
336
+
337
+ # Capture forward graphs
338
+ per_callable_static_outputs = []
339
+ per_callable_output_unflatten_spec = []
340
+ for func, args, fwd_graph in zip(callables, sample_args, fwd_graphs):
341
+ with torch.cuda.graph(fwd_graph, pool=mempool):
342
+ outputs = func(*args)
343
+
344
+ flatten_outputs, spec = _pytree.tree_flatten(outputs)
345
+ per_callable_static_outputs.append(tuple(flatten_outputs))
346
+ per_callable_output_unflatten_spec.append(spec)
347
+
348
+ # Capture backward graphs in reverse order
349
+ per_callable_static_grad_outputs = []
350
+ per_callable_static_grad_inputs = []
351
+ for static_input_surface, static_outputs, bwd_graph, module_params in zip(
352
+ reversed(per_callable_static_input_surfaces),
353
+ reversed(per_callable_static_outputs),
354
+ reversed(bwd_graphs),
355
+ reversed(per_callable_module_params),
356
+ ):
357
+ # For now, assumes all static_outputs require grad
358
+ # assert all(o.requires_grad for o in static_outputs), "Outputs of graphed callables must require grad."
359
+ static_grad_outputs = tuple(
360
+ torch.empty_like(o) if o.requires_grad else None for o in static_outputs
361
+ )
362
+
363
+ with torch.cuda.graph(bwd_graph, pool=mempool):
364
+ grad_inputs = torch.autograd.grad(
365
+ outputs=tuple(o for o in static_outputs if o.requires_grad),
366
+ inputs=tuple(i for i in static_input_surface if i.requires_grad),
367
+ grad_outputs=tuple(o for o in static_grad_outputs if o is not None),
368
+ only_inputs=True,
369
+ allow_unused=allow_unused_input,
370
+ )
371
+
372
+ # Constructs a tuple suitable for returning from Graphed.backward:
373
+ # Pads out the actually-needed grads with Nones in gradient slots for inputs that don't require grad.
374
+ # I couldn't think of a slick one-liner for this pattern.
375
+ static_grad_inputs = []
376
+ grad_idx = 0
377
+ for arg in static_input_surface:
378
+ if arg.requires_grad:
379
+ static_grad_inputs.append(grad_inputs[grad_idx])
380
+ grad_idx += 1
381
+ else:
382
+ static_grad_inputs.append(None) # type: ignore[arg-type]
383
+ static_grad_inputs = tuple(static_grad_inputs) # type: ignore[assignment]
384
+
385
+ per_callable_static_grad_outputs.append(static_grad_outputs)
386
+ per_callable_static_grad_inputs.append(static_grad_inputs)
387
+
388
+ # Reverses the most recent two lists
389
+ per_callable_static_grad_outputs.reverse()
390
+ per_callable_static_grad_inputs.reverse()
391
+ # Now for every per_callable list, per_callable_*[i] holds the stuff for the ith callable.
392
+
393
+ def make_graphed_autograd_function(
394
+ fwd_graph,
395
+ bwd_graph,
396
+ module_params,
397
+ len_user_args,
398
+ output_unflatten_spec,
399
+ static_input_surface,
400
+ static_outputs,
401
+ static_grad_outputs,
402
+ static_grad_inputs,
403
+ ):
404
+ class Graphed(torch.autograd.Function):
405
+ @staticmethod
406
+ def forward(ctx, *inputs):
407
+ # At this stage, only the user args may (potentially) be new tensors.
408
+ for i in range(len_user_args):
409
+ if static_input_surface[i].data_ptr() != inputs[i].data_ptr():
410
+ static_input_surface[i].copy_(inputs[i])
411
+ fwd_graph.replay()
412
+ assert isinstance(static_outputs, tuple)
413
+ return tuple(o.detach() for o in static_outputs)
414
+
415
+ @staticmethod
416
+ @torch.autograd.function.once_differentiable
417
+ def backward(ctx, *grads):
418
+ assert len(grads) == len(static_grad_outputs)
419
+ for g, grad in zip(static_grad_outputs, grads):
420
+ if g is not None:
421
+ # don't copy if autograd gods have been kind and the
422
+ # incoming grad is already in the right place
423
+ if g.data_ptr() != grad.data_ptr():
424
+ g.copy_(grad)
425
+ bwd_graph.replay()
426
+
427
+ # Input args that didn't require grad expect a None gradient.
428
+ assert isinstance(static_grad_inputs, tuple)
429
+ return tuple(
430
+ b.detach() if b is not None else b for b in static_grad_inputs
431
+ )
432
+
433
+ def functionalized(*user_args):
434
+ # Runs the autograd function with inputs == all inputs to the graph that might require grad
435
+ # (explicit user args + module parameters)
436
+ # Assumes module params didn't change since capture.
437
+ flatten_user_args = _pytree.arg_tree_leaves(*user_args)
438
+ out = Graphed.apply(*(tuple(flatten_user_args) + module_params))
439
+ return _pytree.tree_unflatten(out, output_unflatten_spec)
440
+
441
+ return functionalized
442
+
443
+ # Put together the final graphed callables
444
+ ret = []
445
+ for i, func in enumerate(callables):
446
+ graphed = make_graphed_autograd_function(
447
+ fwd_graphs[i],
448
+ bwd_graphs[i],
449
+ per_callable_module_params[i],
450
+ per_callable_len_user_args[i],
451
+ per_callable_output_unflatten_spec[i],
452
+ per_callable_static_input_surfaces[i],
453
+ per_callable_static_outputs[i],
454
+ per_callable_static_grad_outputs[i],
455
+ per_callable_static_grad_inputs[i],
456
+ )
457
+
458
+ if isinstance(func, torch.nn.Module):
459
+
460
+ def make_graphed_forward(func, graph_training_state, graphed, orig_fwd):
461
+ def new_fwd(*user_args):
462
+ # If the module's training-or-eval state matches what we graphed,
463
+ # run the graph, otherwise run the original forward method
464
+ if func.training == graph_training_state:
465
+ return graphed(*user_args)
466
+ else:
467
+ return orig_fwd(*user_args)
468
+
469
+ return new_fwd
470
+
471
+ func.forward = make_graphed_forward(func, func.training, graphed, func.forward) # type: ignore[assignment]
472
+ ret.append(func)
473
+ else:
474
+ ret.append(graphed)
475
+
476
+ if just_one_callable:
477
+ return ret[0]
478
+
479
+ return tuple(ret)
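
Putting the pieces above together, the basic capture-and-replay flow looks as follows (tensor shapes and the
warmup count are illustrative)::

    import torch

    static_in = torch.randn(64, 128, device="cuda")
    weight = torch.randn(128, 128, device="cuda")

    # Warm up on a side stream so lazy-initialization work stays out of the capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        for _ in range(3):
            static_out = static_in @ weight
    torch.cuda.current_stream().wait_stream(s)

    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_out = static_in @ weight

    # Replay with new data: refill the captured input buffer in place, then replay the graph.
    static_in.copy_(torch.randn_like(static_in))
    g.replay()   # static_out now holds the result for the new input
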
llmeval-env/lib/python3.10/site-packages/torch/cuda/jiterator.py ADDED
@@ -0,0 +1,185 @@
1
+ import re
2
+ from typing import Callable, List
3
+
4
+ import torch
5
+ from torch import Tensor
6
+
7
+ __all__: List[str] = []
8
+
9
+
10
+ class _CodeParser:
11
+ def __init__(self, code_string: str):
12
+ optional_ws = r"\s*"
13
+ required_ws = r"\s+"
14
+ template_params = r"(?P<template_params>\<.+\>)"
15
+ return_type = r"(?P<return_type>\w+)"
16
+ function_name = r"(?P<function_name>\w+)"
17
+ function_params = r"(?P<function_params>\(.+\))"
18
+ function_body = r"(?P<function_body>\{.+\})"
19
+
20
+ pattern = (
21
+ optional_ws
22
+ + "template"
23
+ + optional_ws
24
+ + template_params
25
+ + optional_ws
26
+ + return_type
27
+ + required_ws
28
+ + function_name
29
+ + optional_ws
30
+ + function_params
31
+ + optional_ws
32
+ + function_body
33
+ + optional_ws
34
+ )
35
+
36
+ result = re.match(
37
+ pattern, code_string, re.DOTALL
38
+ ) # DOTALL for matching multiline
39
+
40
+ if result is None:
41
+ raise Exception(
42
+ f"Couldn't parse code, please check correctness:\n {code_string}"
43
+ )
44
+
45
+ self.template_params = result["template_params"]
46
+ self.return_type = result["return_type"]
47
+ self.function_name = result["function_name"]
48
+ self.function_params = result["function_params"]
49
+ self.function_body = result["function_body"]
50
+
51
+
52
+ class _JittedFunction:
53
+ def __init__(
54
+ self, code_string: str, return_by_ref: bool, num_outputs: int, **kwargs
55
+ ):
56
+ self.code_string = code_string
57
+
58
+ assert (
59
+ return_by_ref or num_outputs == 1
60
+ ), "Return by value only works for single output. "
61
+ self.return_by_ref = return_by_ref
62
+ self.num_outputs = num_outputs
63
+
64
+ parsed_code = _CodeParser(code_string)
65
+ self.kernel_name = parsed_code.function_name
66
+
67
+ self.kwargs_dict = kwargs
68
+ self.is_cuda_available = torch.cuda.is_available()
69
+
70
+ def __call__(self, *tensors: Tensor, **kwargs):
71
+ # Jiterator follows torch.cuda's lazy initialization behavior,
72
+ # deferring the check of cuda's availability to function invocation time
73
+ assert (
74
+ self.is_cuda_available
75
+ ), "Jiterator is only supported on CUDA and ROCm GPUs, none are available."
76
+
77
+ assert len(tensors) <= 8, "jiterator only supports up to 8 tensor inputs."
78
+
79
+ expanded_kwargs = self.kwargs_dict.copy()
80
+ for key, value in kwargs.items():
81
+ if key in self.kwargs_dict:
82
+ expanded_kwargs[key] = value
83
+ else:
84
+ raise KeyError(f"{key} is not declared in function definition")
85
+
86
+ return torch._C._cuda_jiterator_compile_and_launch_kernel(
87
+ self.code_string,
88
+ self.kernel_name,
89
+ self.return_by_ref,
90
+ self.num_outputs,
91
+ tensors,
92
+ expanded_kwargs,
93
+ )
94
+
95
+
96
+ def _create_jit_fn(code_string: str, **kwargs) -> Callable:
97
+ """
98
+ Create a jiterator-generated cuda kernel for an elementwise op.
99
+
100
+ The code string has to be a valid CUDA function that describes the computation for a single element. The code
101
+ string has to follow the C++ template pattern, as shown in the example below. This function will be inlined
102
+ into the elementwise kernel template and compiled on the fly. The compiled kernel will be cached in memory, as well
103
+ as in a local temp dir.
104
+
105
+ Jiterator-generated kernels accept noncontiguous tensors, and support broadcasting and type promotion.
106
+
107
+ Args:
108
+ code_string (str): CUDA code string to be compiled by jiterator. The entry functor must return by value.
109
+ kwargs (Dict, optional): Keyword arguments for generated function
110
+
111
+ Example::
112
+
113
+ code_string = "template <typename T> T my_kernel(T x, T y, T alpha) { return -x + alpha * y; }"
114
+ jitted_fn = create_jit_fn(code_string, alpha=1.0)
115
+ a = torch.rand(3, device='cuda')
116
+ b = torch.rand(3, device='cuda')
117
+ # invoke jitted function like a regular python function
118
+ result = jitted_fn(a, b, alpha=3.14)
119
+
120
+ code_string also allows multiple function definitions, and the last function will be treated as the entry function.
121
+
122
+ Example::
123
+
124
+ code_string = "template <typename T> T util_fn(T x, T y) { return ::sin(x) + ::cos(y); }"
125
+ code_string += "template <typename T> T my_kernel(T x, T y, T val) { return ::min(val, util_fn(x, y)); }"
126
+ jitted_fn = create_jit_fn(code_string, val=0.0)
127
+ a = torch.rand(3, device='cuda')
128
+ b = torch.rand(3, device='cuda')
129
+ # invoke jitted function like a regular python function
130
+ result = jitted_fn(a, b) # using default val=0.0
131
+
132
+ Jiterator can be used together with python registration to override an operator's cuda kernel.
133
+ Following example is overriding gelu's cuda kernel with relu.
134
+
135
+ Example::
136
+
137
+ code_string = "template <typename T> T my_gelu(T a) { return a > 0 ? a : 0; }"
138
+ my_gelu = create_jit_fn(code_string)
139
+ my_lib = torch.library.Library("aten", "IMPL")
140
+ my_lib.impl('aten::gelu', my_gelu, "CUDA")
141
+ # torch.nn.GELU and torch.nn.function.gelu are now overridden
142
+ a = torch.rand(3, device='cuda')
143
+ torch.allclose(torch.nn.functional.gelu(a), torch.nn.functional.relu(a))
144
+
145
+ .. warning::
146
+ This API is in beta and may change in future releases.
147
+
148
+ .. warning::
149
+ This API only supports up to 8 inputs and 1 output
150
+
151
+ .. warning::
152
+ All input tensors must live in CUDA device
153
+ """
154
+ return _JittedFunction(code_string, return_by_ref=False, num_outputs=1, **kwargs)
155
+
156
+
157
+ def _create_multi_output_jit_fn(
158
+ code_string: str, num_outputs: int, **kwargs
159
+ ) -> Callable:
160
+ """
161
+ Create a jiterator-generated cuda kernel for an elementwise op that supports returning one or more outputs.
162
+
163
+ Args:
164
+ code_string (str): CUDA code string to be compiled by jiterator. The entry functor must return value by reference.
165
+ num_outputs(int): number of outputs returned by the kernel
166
+ kwargs (Dict, optional): Keyword arguments for generated function
167
+
168
+ Example::
169
+
170
+ code_string = "template <typename T> void my_kernel(T x, T y, T alpha, T& out) { out = -x + alpha * y; }"
171
+ jitted_fn = create_multi_output_jit_fn(code_string, num_outputs=1, alpha=1.0)
172
+ a = torch.rand(3, device='cuda')
173
+ b = torch.rand(3, device='cuda')
174
+ # invoke jitted function like a regular python function
175
+ result = jitted_fn(a, b, alpha=3.14)
176
+
177
+ .. warning::
178
+ This API is in beta and may change in future releases.
179
+
180
+ .. warning::
181
+ This API only supports up to 8 inputs and 8 outputs
182
+ """
183
+ return _JittedFunction(
184
+ code_string, return_by_ref=True, num_outputs=num_outputs, **kwargs
185
+ )
llmeval-env/lib/python3.10/site-packages/torch/cuda/memory.py ADDED
@@ -0,0 +1,914 @@
1
+ r"""This package adds support for device memory management implemented in CUDA."""
2
+
3
+ import collections
4
+ import contextlib
5
+ import ctypes
6
+ import pickle
7
+ import sys
8
+ import warnings
9
+ from inspect import signature
10
+
11
+ from typing import Any, Dict, Optional, Tuple, Union
12
+
13
+ import torch
14
+ from torch import _C
15
+
16
+ from torch.types import Device
17
+ from .._utils import _dummy_type
18
+ from . import _get_device_index, _get_nvml_device_index, _lazy_init, is_initialized
19
+
20
+ from ._memory_viz import memory as _memory, segments as _segments
21
+
22
+ __all__ = [
23
+ "caching_allocator_alloc",
24
+ "caching_allocator_delete",
25
+ "set_per_process_memory_fraction",
26
+ "empty_cache",
27
+ "memory_stats",
28
+ "memory_stats_as_nested_dict",
29
+ "reset_accumulated_memory_stats",
30
+ "reset_peak_memory_stats",
31
+ "reset_max_memory_allocated",
32
+ "reset_max_memory_cached",
33
+ "memory_allocated",
34
+ "max_memory_allocated",
35
+ "memory_reserved",
36
+ "max_memory_reserved",
37
+ "memory_cached",
38
+ "max_memory_cached",
39
+ "memory_snapshot",
40
+ "memory_summary",
41
+ "list_gpu_processes",
42
+ "mem_get_info",
43
+ "get_allocator_backend",
44
+ "CUDAPluggableAllocator",
45
+ "change_current_allocator",
46
+ ]
47
+
48
+
49
+ if not hasattr(torch._C, "_cuda_CUDAAllocator"):
50
+ # Define dummy base classes
51
+ torch._C.__dict__["_cuda_CUDAAllocator"] = _dummy_type("_cuda_CUDAAllocator")
52
+
53
+
54
+ def _host_allocator():
55
+ _lazy_init()
56
+ return torch._C._cuda_cudaHostAllocator()
57
+
58
+
59
+ @contextlib.contextmanager
60
+ def _free_mutex():
61
+ torch._C._cuda_lock_mutex()
62
+ try:
63
+ yield
64
+ finally:
65
+ torch._C._cuda_unlock_mutex()
66
+
67
+
68
+ def caching_allocator_alloc(size, device: Union[Device, int] = None, stream=None):
69
+ r"""Perform a memory allocation using the CUDA memory allocator.
70
+
71
+ Memory is allocated for a given device and a stream, this
72
+ function is intended to be used for interoperability with other
73
+ frameworks. Allocated memory is released through
74
+ :func:`~torch.cuda.caching_allocator_delete`.
75
+
76
+ Args:
77
+ size (int): number of bytes to be allocated.
78
+ device (torch.device or int, optional): selected device. If it is
79
+ ``None`` the default CUDA device is used.
80
+ stream (torch.cuda.Stream or int, optional): selected stream. If is ``None`` then
81
+ the default stream for the selected device is used.
82
+
83
+ .. note::
84
+ See :ref:`cuda-memory-management` for more details about GPU memory
85
+ management.
86
+ """
87
+ if device is None:
88
+ device = torch.cuda.current_device()
89
+ device = _get_device_index(device)
90
+ if stream is None:
91
+ stream = torch.cuda.current_stream(device)
92
+ if isinstance(stream, torch.cuda.streams.Stream):
93
+ stream = stream.cuda_stream
94
+ if not isinstance(stream, int):
95
+ raise TypeError(
96
+ "Invalid type for stream argument, must be "
97
+ "`torch.cuda.Stream` or `int` representing a pointer "
98
+ "to a existing stream"
99
+ )
100
+ with torch.cuda.device(device):
101
+ return torch._C._cuda_cudaCachingAllocator_raw_alloc(size, stream)
102
+
103
+
104
+ def caching_allocator_delete(mem_ptr):
105
+ r"""Delete memory allocated using the CUDA memory allocator.
106
+
107
+ Memory allocated with :func:`~torch.cuda.caching_allocator_alloc`.
108
+ is freed here. The associated device and stream are tracked inside
109
+ the allocator.
110
+
111
+ Args:
112
+ mem_ptr (int): memory address to be freed by the allocator.
113
+
114
+ .. note::
115
+ See :ref:`cuda-memory-management` for more details about GPU memory
116
+ management.
117
+ """
118
+ torch._C._cuda_cudaCachingAllocator_raw_delete(mem_ptr)
119
+
120
+
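
A minimal interoperability sketch for the two functions above (the size is arbitrary)::

    ptr = torch.cuda.caching_allocator_alloc(1024)   # 1 KiB on the current device and stream
    # ... hand `ptr` to an external library expecting a raw device pointer ...
    torch.cuda.caching_allocator_delete(ptr)
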
121
+ def set_per_process_memory_fraction(
122
+ fraction, device: Union[Device, int] = None
123
+ ) -> None:
124
+ r"""Set memory fraction for a process.
125
+
126
+ The fraction is used to limit an caching allocator to allocated memory on a CUDA device.
127
+ The allowed value equals the total visible memory multiplied fraction.
128
+ If trying to allocate more than the allowed value in a process, will raise an out of
129
+ memory error in allocator.
130
+
131
+ Args:
132
+ fraction(float): Range: 0~1. Allowed memory equals total_memory * fraction.
133
+ device (torch.device or int, optional): selected device. If it is
134
+ ``None`` the default CUDA device is used.
135
+ .. note::
136
+ In general, the total available free memory is less than the total capacity.
137
+ """
138
+ _lazy_init()
139
+ if device is None:
140
+ device = torch.cuda.current_device()
141
+ device = _get_device_index(device)
142
+ if not isinstance(fraction, float):
143
+ raise TypeError("Invalid type for fraction argument, must be `float`")
144
+ if fraction < 0 or fraction > 1:
145
+ raise ValueError(f"Invalid fraction value: {fraction}. Allowed range: 0~1")
146
+
147
+ torch._C._cuda_setMemoryFraction(fraction, device)
148
+
149
+
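
For example, to cap the current process at roughly half of device 0's total memory::

    torch.cuda.set_per_process_memory_fraction(0.5, device=0)
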
150
+ def empty_cache() -> None:
151
+ r"""Release all unoccupied cached memory currently held by the caching
152
+ allocator so that those can be used in other GPU application and visible in
153
+ `nvidia-smi`.
154
+
155
+ .. note::
156
+ :func:`~torch.cuda.empty_cache` doesn't increase the amount of GPU
157
+ memory available for PyTorch. However, it may help reduce fragmentation
158
+ of GPU memory in certain cases. See :ref:`cuda-memory-management` for
159
+ more details about GPU memory management.
160
+ """
161
+ if is_initialized():
162
+ torch._C._cuda_emptyCache()
163
+
164
+
165
+ def memory_stats(device: Union[Device, int] = None) -> Dict[str, Any]:
166
+ r"""Return a dictionary of CUDA memory allocator statistics for a given device.
167
+
168
+ The return value of this function is a dictionary of statistics, each of
169
+ which is a non-negative integer.
170
+
171
+ Core statistics:
172
+
173
+ - ``"allocated.{all,large_pool,small_pool}.{current,peak,allocated,freed}"``:
174
+ number of allocation requests received by the memory allocator.
175
+ - ``"allocated_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}"``:
176
+ amount of allocated memory.
177
+ - ``"segment.{all,large_pool,small_pool}.{current,peak,allocated,freed}"``:
178
+ number of reserved segments from ``cudaMalloc()``.
179
+ - ``"reserved_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}"``:
180
+ amount of reserved memory.
181
+ - ``"active.{all,large_pool,small_pool}.{current,peak,allocated,freed}"``:
182
+ number of active memory blocks.
183
+ - ``"active_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}"``:
184
+ amount of active memory.
185
+ - ``"inactive_split.{all,large_pool,small_pool}.{current,peak,allocated,freed}"``:
186
+ number of inactive, non-releasable memory blocks.
187
+ - ``"inactive_split_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}"``:
188
+ amount of inactive, non-releasable memory.
189
+
190
+ For these core statistics, values are broken down as follows.
191
+
192
+ Pool type:
193
+
194
+ - ``all``: combined statistics across all memory pools.
195
+ - ``large_pool``: statistics for the large allocation pool
196
+ (as of October 2019, for size >= 1MB allocations).
197
+ - ``small_pool``: statistics for the small allocation pool
198
+ (as of October 2019, for size < 1MB allocations).
199
+
200
+ Metric type:
201
+
202
+ - ``current``: current value of this metric.
203
+ - ``peak``: maximum value of this metric.
204
+ - ``allocated``: historical total increase in this metric.
205
+ - ``freed``: historical total decrease in this metric.
206
+
207
+ In addition to the core statistics, we also provide some simple event
208
+ counters:
209
+
210
+ - ``"num_alloc_retries"``: number of failed ``cudaMalloc`` calls that
211
+ result in a cache flush and retry.
212
+ - ``"num_ooms"``: number of out-of-memory errors thrown.
213
+
214
+ The caching allocator can be configured via ENV to not split blocks larger than a
215
+ defined size (see Memory Management section of the Cuda Semantics documentation).
216
+ This helps avoid memory fragmentation but may have a performance
217
+ penalty. Additional outputs to assist with tuning and evaluating impact:
218
+
219
+ - ``"max_split_size"``: blocks above this size will not be split.
220
+ - ``"oversize_allocations.{current,peak,allocated,freed}"``:
221
+ number of over-size allocation requests received by the memory allocator.
222
+ - ``"oversize_segments.{current,peak,allocated,freed}"``:
223
+ number of over-size reserved segments from ``cudaMalloc()``.
224
+
225
+ The caching allocator can be configured via ENV to round memory allocations in order
226
+ to reduce fragmentation. Sometimes the overhead from rounding can be higher than
227
+ the fragmentation it helps reduce. The following stat can be used to check if
228
+ rounding adds too much overhead:
229
+
230
+ - ``"requested_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}"``:
231
+ memory requested by client code, compare this with allocated_bytes to check if
232
+ allocation rounding adds too much overhead.
233
+
234
+ Args:
235
+ device (torch.device or int, optional): selected device. Returns
236
+ statistics for the current device, given by :func:`~torch.cuda.current_device`,
237
+ if :attr:`device` is ``None`` (default).
238
+
239
+ .. note::
240
+ See :ref:`cuda-memory-management` for more details about GPU memory
241
+ management.
242
+
243
+ .. note::
244
+ With :ref:`backend:cudaMallocAsync<cuda-memory-envvars>`, some stats are not
245
+ meaningful, and are always reported as zero.
246
+ """
247
+ result = []
248
+
249
+ def _recurse_add_to_result(prefix, obj):
250
+ if isinstance(obj, dict):
251
+ if len(prefix) > 0:
252
+ prefix += "."
253
+ for k, v in obj.items():
254
+ _recurse_add_to_result(prefix + k, v)
255
+ else:
256
+ result.append((prefix, obj))
257
+
258
+ stats = memory_stats_as_nested_dict(device=device)
259
+ _recurse_add_to_result("", stats)
260
+ result.sort()
261
+
262
+ return collections.OrderedDict(result)
263
+
264
+
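
A quick way to inspect a few of the flattened keys described above::

    stats = torch.cuda.memory_stats()
    print(stats["allocated_bytes.all.current"], stats["allocated_bytes.all.peak"])
    print(stats["num_alloc_retries"], stats["num_ooms"])
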
265
+ def memory_stats_as_nested_dict(device: Union[Device, int] = None) -> Dict[str, Any]:
266
+ r"""Return the result of :func:`~torch.cuda.memory_stats` as a nested dictionary."""
267
+ if not is_initialized():
268
+ return {}
269
+ device = _get_device_index(device, optional=True)
270
+ return torch._C._cuda_memoryStats(device)
271
+
272
+
273
+ def reset_accumulated_memory_stats(device: Union[Device, int] = None) -> None:
274
+ r"""Reset the "accumulated" (historical) stats tracked by the CUDA memory allocator.
275
+
276
+ See :func:`~torch.cuda.memory_stats` for details. Accumulated stats correspond to
277
+ the `"allocated"` and `"freed"` keys in each individual stat dict, as well as
278
+ `"num_alloc_retries"` and `"num_ooms"`.
279
+
280
+ Args:
281
+ device (torch.device or int, optional): selected device. Returns
282
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
283
+ if :attr:`device` is ``None`` (default).
284
+
285
+ .. note::
286
+ See :ref:`cuda-memory-management` for more details about GPU memory
287
+ management.
288
+ """
289
+ device = _get_device_index(device, optional=True)
290
+ return torch._C._cuda_resetAccumulatedMemoryStats(device)
291
+
292
+
293
+ def reset_peak_memory_stats(device: Union[Device, int] = None) -> None:
294
+ r"""Reset the "peak" stats tracked by the CUDA memory allocator.
295
+
296
+ See :func:`~torch.cuda.memory_stats` for details. Peak stats correspond to the
297
+ `"peak"` key in each individual stat dict.
298
+
299
+ Args:
300
+ device (torch.device or int, optional): selected device. Returns
301
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
302
+ if :attr:`device` is ``None`` (default).
303
+
304
+ .. note::
305
+ See :ref:`cuda-memory-management` for more details about GPU memory
306
+ management.
307
+ """
308
+ device = _get_device_index(device, optional=True)
309
+ return torch._C._cuda_resetPeakMemoryStats(device)
310
+
311
+
312
+ def reset_max_memory_allocated(device: Union[Device, int] = None) -> None:
313
+ r"""Reset the starting point in tracking maximum GPU memory occupied by tensors for a given device.
314
+
315
+ See :func:`~torch.cuda.max_memory_allocated` for details.
316
+
317
+ Args:
318
+ device (torch.device or int, optional): selected device. Returns
319
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
320
+ if :attr:`device` is ``None`` (default).
321
+
322
+ .. warning::
323
+ This function now calls :func:`~torch.cuda.reset_peak_memory_stats`, which resets
324
+ /all/ peak memory stats.
325
+
326
+ .. note::
327
+ See :ref:`cuda-memory-management` for more details about GPU memory
328
+ management.
329
+ """
330
+ warnings.warn(
331
+ "torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, "
332
+ "which resets /all/ peak memory stats.",
333
+ FutureWarning,
334
+ )
335
+ return reset_peak_memory_stats(device=device)
336
+
337
+
338
+ def reset_max_memory_cached(device: Union[Device, int] = None) -> None:
339
+ r"""Reset the starting point in tracking maximum GPU memory managed by the caching allocator for a given device.
340
+
341
+ See :func:`~torch.cuda.max_memory_cached` for details.
342
+
343
+ Args:
344
+ device (torch.device or int, optional): selected device. Returns
345
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
346
+ if :attr:`device` is ``None`` (default).
347
+
348
+ .. warning::
349
+ This function now calls :func:`~torch.cuda.reset_peak_memory_stats`, which resets
350
+ /all/ peak memory stats.
351
+
352
+ .. note::
353
+ See :ref:`cuda-memory-management` for more details about GPU memory
354
+ management.
355
+ """
356
+ warnings.warn(
357
+ "torch.cuda.reset_max_memory_cached now calls torch.cuda.reset_peak_memory_stats, "
358
+ "which resets /all/ peak memory stats.",
359
+ FutureWarning,
360
+ )
361
+ return reset_peak_memory_stats(device=device)
362
+
363
+
364
+ def memory_allocated(device: Union[Device, int] = None) -> int:
365
+ r"""Return the current GPU memory occupied by tensors in bytes for a given device.
366
+
367
+ Args:
368
+ device (torch.device or int, optional): selected device. Returns
369
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
370
+ if :attr:`device` is ``None`` (default).
371
+
372
+ .. note::
373
+ This is likely less than the amount shown in `nvidia-smi` since some
374
+ unused memory can be held by the caching allocator and some context
375
+ needs to be created on GPU. See :ref:`cuda-memory-management` for more
376
+ details about GPU memory management.
377
+ """
378
+ return memory_stats(device=device).get("allocated_bytes.all.current", 0)
379
+
380
+
381
+ def max_memory_allocated(device: Union[Device, int] = None) -> int:
382
+ r"""Return the maximum GPU memory occupied by tensors in bytes for a given device.
383
+
384
+ By default, this returns the peak allocated memory since the beginning of
385
+ this program. :func:`~torch.cuda.reset_peak_memory_stats` can be used to
386
+ reset the starting point in tracking this metric. For example, these two
387
+ functions can measure the peak allocated memory usage of each iteration in a
388
+ training loop.
389
+
390
+ Args:
391
+ device (torch.device or int, optional): selected device. Returns
392
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
393
+ if :attr:`device` is ``None`` (default).
394
+
395
+ .. note::
396
+ See :ref:`cuda-memory-management` for more details about GPU memory
397
+ management.
398
+ """
399
+ return memory_stats(device=device).get("allocated_bytes.all.peak", 0)
400
+
401
+
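
The per-iteration measurement pattern mentioned in the docstring above, with ``model`` and ``batch`` as
hypothetical stand-ins for a real workload::

    torch.cuda.reset_peak_memory_stats()
    out = model(batch)
    torch.cuda.synchronize()
    peak_bytes = torch.cuda.max_memory_allocated()
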
402
+ def memory_reserved(device: Union[Device, int] = None) -> int:
403
+ r"""Return the current GPU memory managed by the caching allocator in bytes for a given device.
404
+
405
+ Args:
406
+ device (torch.device or int, optional): selected device. Returns
407
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
408
+ if :attr:`device` is ``None`` (default).
409
+
410
+ .. note::
411
+ See :ref:`cuda-memory-management` for more details about GPU memory
412
+ management.
413
+ """
414
+ return memory_stats(device=device).get("reserved_bytes.all.current", 0)
415
+
416
+
417
+ def max_memory_reserved(device: Union[Device, int] = None) -> int:
418
+ r"""Return the maximum GPU memory managed by the caching allocator in bytes for a given device.
419
+
420
+ By default, this returns the peak cached memory since the beginning of this
421
+ program. :func:`~torch.cuda.reset_peak_memory_stats` can be used to reset
422
+ the starting point in tracking this metric. For example, these two functions
423
+ can measure the peak cached memory amount of each iteration in a training
424
+ loop.
425
+
426
+ Args:
427
+ device (torch.device or int, optional): selected device. Returns
428
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
429
+ if :attr:`device` is ``None`` (default).
430
+
431
+ .. note::
432
+ See :ref:`cuda-memory-management` for more details about GPU memory
433
+ management.
434
+ """
435
+ return memory_stats(device=device).get("reserved_bytes.all.peak", 0)
436
+
437
+
438
+ def memory_cached(device: Union[Device, int] = None) -> int:
439
+ r"""Deprecated; see :func:`~torch.cuda.memory_reserved`."""
440
+ warnings.warn(
441
+ "torch.cuda.memory_cached has been renamed to torch.cuda.memory_reserved",
442
+ FutureWarning,
443
+ )
444
+ return memory_reserved(device=device)
445
+
446
+
447
+ def max_memory_cached(device: Union[Device, int] = None) -> int:
448
+ r"""Deprecated; see :func:`~torch.cuda.max_memory_reserved`."""
449
+ warnings.warn(
450
+ "torch.cuda.max_memory_cached has been renamed to torch.cuda.max_memory_reserved",
451
+ FutureWarning,
452
+ )
453
+ return max_memory_reserved(device=device)
454
+
455
+
456
+ def memory_snapshot():
457
+ r"""Return a snapshot of the CUDA memory allocator state across all devices.
458
+
459
+ Interpreting the output of this function requires familiarity with the
460
+ memory allocator internals.
461
+
462
+ .. note::
463
+ See :ref:`cuda-memory-management` for more details about GPU memory
464
+ management.
465
+ """
466
+ return torch._C._cuda_memorySnapshot()["segments"]
467
+
468
+
469
+ def memory_summary(device: Union[Device, int] = None, abbreviated: bool = False) -> str:
470
+ r"""Return a human-readable printout of the current memory allocator statistics for a given device.
471
+
472
+ This can be useful to display periodically during training, or when
473
+ handling out-of-memory exceptions.
474
+
475
+ Args:
476
+ device (torch.device or int, optional): selected device. Returns
477
+ printout for the current device, given by :func:`~torch.cuda.current_device`,
478
+ if :attr:`device` is ``None`` (default).
479
+ abbreviated (bool, optional): whether to return an abbreviated summary
480
+ (default: False).
481
+
482
+ .. note::
483
+ See :ref:`cuda-memory-management` for more details about GPU memory
484
+ management.
485
+ """
486
+ device = _get_device_index(device, optional=True)
487
+ stats = memory_stats(device=device)
488
+
489
+ def _format_size(sz, pref_sz):
490
+ prefixes = ["B ", "KiB", "MiB", "GiB", "TiB", "PiB"]
491
+ prefix = prefixes[0]
492
+ for new_prefix in prefixes[1:]:
493
+ if pref_sz < 768 * 1024:
494
+ break
495
+ prefix = new_prefix
496
+ sz //= 1024
497
+ pref_sz /= 1024
498
+ return f"{sz:6d} {prefix}"
499
+
500
+ def _format_count(cnt, pref_cnt):
501
+ prefixes = [" ", "K", "M"]
502
+ prefix = prefixes[0]
503
+ for new_prefix in prefixes[1:]:
504
+ if pref_cnt < 750 * 1000:
505
+ break
506
+ prefix = new_prefix
507
+ cnt //= 1000
508
+ pref_cnt /= 1000
509
+ return f"{cnt:7d} {prefix} "
510
+
511
+ metrics_to_display = [
512
+ ("allocated_bytes", "Allocated memory", _format_size),
513
+ ("active_bytes", "Active memory", _format_size),
514
+ ("requested_bytes", "Requested memory", _format_size),
515
+ ("reserved_bytes", "GPU reserved memory", _format_size),
516
+ ("inactive_split_bytes", "Non-releasable memory", _format_size),
517
+ ("allocation", "Allocations", _format_count),
518
+ ("active", "Active allocs", _format_count),
519
+ ("segment", "GPU reserved segments", _format_count),
520
+ ("inactive_split", "Non-releasable allocs", _format_count),
521
+ ]
522
+
523
+ lines = []
524
+ lines.append("=" * 75)
525
+ lines.append(" {_:16} PyTorch CUDA memory summary, device ID {device:<17d} ")
526
+ lines.append("-" * 75)
527
+ lines.append(
528
+ " {_:9} CUDA OOMs: {num_ooms:<12d} | {_:6} cudaMalloc retries: {num_alloc_retries:<8d} "
529
+ )
530
+ lines.append("=" * 75)
531
+ lines.append(
532
+ " Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed "
533
+ )
534
+
535
+ for metric_key, metric_name, formatter in metrics_to_display:
536
+ lines.append("-" * 75)
537
+ submetrics = [("all", metric_name)]
538
+ if not abbreviated:
539
+ submetrics.append(("large_pool", " from large pool"))
540
+ submetrics.append(("small_pool", " from small pool"))
541
+
542
+ current_prefval, peak_prefval, allocated_prefval, freed_prefval = (
543
+ None,
544
+ None,
545
+ None,
546
+ None,
547
+ )
548
+
549
+ for submetric_key, submetric_name in submetrics:
550
+ prefix = metric_key + "." + submetric_key + "."
551
+
552
+ current = stats[prefix + "current"]
553
+ peak = stats[prefix + "peak"]
554
+ allocated = stats[prefix + "allocated"]
555
+ freed = stats[prefix + "freed"]
556
+
557
+ if current_prefval is None:
558
+ current_prefval = current
559
+ peak_prefval = peak
560
+ allocated_prefval = allocated
561
+ freed_prefval = freed
562
+
563
+ lines.append(
564
+ " {:<21} | {} | {} | {} | {} ".format(
565
+ submetric_name,
566
+ formatter(current, current_prefval),
567
+ formatter(peak, peak_prefval),
568
+ formatter(allocated, allocated_prefval),
569
+ formatter(freed, freed_prefval),
570
+ ),
571
+ )
572
+
573
+ metrics_to_display = [
574
+ ("oversize_allocations", "Oversize allocations", _format_count),
575
+ ("oversize_segments", "Oversize GPU segments", _format_count),
576
+ ]
577
+
578
+ for metric_key, metric_name, formatter in metrics_to_display:
579
+ lines.append("-" * 75)
580
+
581
+ prefix = metric_key + "."
582
+
583
+ current = stats[prefix + "current"]
584
+ peak = stats[prefix + "peak"]
585
+ allocated = stats[prefix + "allocated"]
586
+ freed = stats[prefix + "freed"]
587
+
588
+ lines.append(
589
+ " {:<21} | {} | {} | {} | {} ".format(
590
+ metric_name,
591
+ formatter(current, current),
592
+ formatter(peak, peak),
593
+ formatter(allocated, allocated),
594
+ formatter(freed, freed),
595
+ ),
596
+ )
597
+
598
+ lines.append("=" * 75)
599
+
600
+ fmt_dict = {"_": "", "device": device}
601
+ for k, v in stats.items():
602
+ fmt_dict[k.replace(".", "-")] = v
603
+ return "|" + "|\n|".join(lines).format(**fmt_dict) + "|\n"
604
+
605
+
606
+ def list_gpu_processes(device: Union[Device, int] = None) -> str:
607
+ r"""Return a human-readable printout of the running processes and their GPU memory use for a given device.
608
+
609
+ This can be useful to display periodically during training, or when
610
+ handling out-of-memory exceptions.
611
+
612
+ Args:
613
+ device (torch.device or int, optional): selected device. Returns
614
+ printout for the current device, given by :func:`~torch.cuda.current_device`,
615
+ if :attr:`device` is ``None`` (default).
616
+ """
617
+ try:
618
+ import pynvml # type: ignore[import]
619
+ except ModuleNotFoundError:
620
+ return "pynvml module not found, please install pynvml"
621
+ from pynvml import NVMLError_DriverNotLoaded
622
+
623
+ try:
624
+ pynvml.nvmlInit()
625
+ except NVMLError_DriverNotLoaded:
626
+ return "cuda driver can't be loaded, is cuda enabled?"
627
+ device = _get_nvml_device_index(device)
628
+ handle = pynvml.nvmlDeviceGetHandleByIndex(device)
629
+ procs = pynvml.nvmlDeviceGetComputeRunningProcesses(handle)
630
+ lines = []
631
+ lines.append(f"GPU:{device}")
632
+ if len(procs) == 0:
633
+ lines.append("no processes are running")
634
+ for p in procs:
635
+ mem = p.usedGpuMemory / (1024 * 1024)
636
+ lines.append(f"process {p.pid:>10d} uses {mem:>12.3f} MB GPU memory")
637
+ return "\n".join(lines)
638
+
639
+
640
+ def mem_get_info(device: Union[Device, int] = None) -> Tuple[int, int]:
641
+ r"""Return the global free and total GPU memory for a given device using cudaMemGetInfo.
642
+
643
+ Args:
644
+ device (torch.device or int, optional): selected device. Returns
645
+ statistic for the current device, given by :func:`~torch.cuda.current_device`,
646
+ if :attr:`device` is ``None`` (default).
647
+
648
+ .. note::
649
+ See :ref:`cuda-memory-management` for more
650
+ details about GPU memory management.
651
+ """
652
+ if device is None:
653
+ device = torch.cuda.current_device()
654
+ device = _get_device_index(device)
655
+ return torch.cuda.cudart().cudaMemGetInfo(device)
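As a quick illustration of the helper above, here is a minimal sketch (assuming a CUDA-capable machine) that queries the current device and reports free/total memory in GiB:

.. code-block:: python

    import torch

    if torch.cuda.is_available():
        # cudaMemGetInfo for the current device, returned as (free_bytes, total_bytes)
        free_bytes, total_bytes = torch.cuda.mem_get_info()
        print(f"free {free_bytes / 2**30:.2f} GiB of {total_bytes / 2**30:.2f} GiB total")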
656
+
657
+
658
+ def _record_memory_history_legacy(
659
+ enabled: bool,
660
+ record_context=True,
661
+ trace_alloc_max_entries=1,
662
+ trace_alloc_record_context=False,
663
+ device: Union[Device, int] = None,
664
+ record_context_cpp=False,
665
+ ):
666
+ _C._cuda_record_memory_history_legacy(
667
+ enabled,
668
+ record_context,
669
+ trace_alloc_max_entries,
670
+ trace_alloc_record_context,
671
+ record_context_cpp,
672
+ )
673
+
674
+
675
+ def _record_memory_history(enabled="all", *args, **kwargs):
676
+ """Enable recording of stack traces associated with memory
677
+ allocations, so you can tell what allocated any piece of memory in
678
+ :func:`torch.cuda.memory._snapshot()`.
679
+
680
+ In addition to keeping stack traces with each current allocation and free,
681
+ this will also enable recording of a history of all alloc/free events.
682
+
683
+ Use :func:`torch.cuda.memory._snapshot()` to retrieve this information,
684
+ and the tools in `_memory_viz.py` to visualize snapshots.
685
+
686
+ The Python trace collection is fast (2us per trace), so you may consider
687
+ enabling this on production jobs if you anticipate ever having to debug
688
+ memory issues.
689
+
690
+ C++ trace collection is also fast (~50ns/frame), which for many typical programs
691
+ works out to ~2us per trace, but can vary depending on stack depth.
692
+
693
+ Args:
694
+ enabled (Literal[None, "state", "all"], optional):
695
+ `None`, disable recording memory history.
696
+ `"state"`, keep information for currenly allocated memory.
697
+ `"all"`, additionally keep a history of all alloc/free calls.
698
+ Defaults to "all".
699
+ context (Literal[None, "state", "alloc", "all"], optional):
700
+ `None`, Do not record any tracebacks.
701
+ `"state"`, Record tracebacks for currently allocated memory.
702
+ `"alloc"`, additionally keep tracebacks for alloc calls.
703
+ `"all"`, additionally keep tracebacks for free calls.
704
+ Defaults to "all".
705
+ stacks (Literal["python", "all"], optional):
706
+ `"python"`, include Python, TorchScript, and inductor frames in tracebacks
707
+ `"all"`, additionally include C++ frames
708
+ Defaults to "all".
709
+ max_entries (int, optional): Keep a maximum of `max_entries`
710
+ alloc/free events in the recorded history.
711
+ """
712
+ if isinstance(enabled, bool):
713
+ return _record_memory_history_legacy(enabled, *args, **kwargs)
714
+ else:
715
+ return _record_memory_history_impl(enabled, *args, **kwargs)
716
+
717
+
718
+ def _record_memory_history_impl(
719
+ enabled: Optional[str] = "all",
720
+ context: Optional[str] = "all",
721
+ stacks: str = "all",
722
+ max_entries: int = sys.maxsize,
723
+ device: Union[Device, int] = None,
724
+ ):
725
+ _C._cuda_record_memory_history(enabled, context, stacks, max_entries)
726
+
727
+
728
+ _record_memory_history.__signature__ = signature(_record_memory_history_impl) # type: ignore[attr-defined]
729
+
730
+
731
+ def _snapshot(device: Union[Device, int] = None):
732
+ """Save a snapshot of CUDA memory state at the time it was called.
733
+
734
+ The state is represented as a dictionary with the following structure.
735
+
736
+ .. code-block:: python
737
+
738
+ class Snapshot(TypedDict):
739
+ segments : List[Segment]
740
+ device_traces: List[List[TraceEntry]]
741
+
742
+ class Segment(TypedDict):
743
+ # Segments are memory returned from a cudaMalloc call.
744
+ # The size of reserved memory is the sum of all Segments.
745
+ # Segments are cached and reused for future allocations.
746
+ # If the reuse is smaller than the segment, the segment
747
+ # is split into more than one Block.
748
+ # empty_cache() frees Segments that are entirely inactive.
749
+ address: int
750
+ total_size: int # cudaMalloc'd size of segment
751
+ stream: int
752
+ segment_type: Literal['small', 'large'] # 'large' (>1MB)
753
+ allocated_size: int # size of memory in use
754
+ active_size: int # size of memory in use or in active_awaiting_free state
755
+ blocks : List[Block]
756
+
757
+ class Block(TypedDict):
758
+ # A piece of memory returned from the allocator, or
759
+ # currently cached but inactive.
760
+ size: int
761
+ requested_size: int # size requested during malloc, may be smaller than
762
+ # size due to rounding
763
+ address: int
764
+ state: Literal['active_allocated', # used by a tensor
765
+ 'active_awaiting_free', # waiting for another stream to finish using
766
+ # this, then it will become free
767
+ 'inactive',] # free for reuse
768
+ frames: List[Frame] # stack trace from where the allocation occurred
769
+
770
+ class Frame(TypedDict):
771
+ filename: str
772
+ line: int
773
+ name: str
774
+
775
+ class TraceEntry(TypedDict):
776
+ # When `torch.cuda.memory._record_memory_history()` is enabled,
777
+ # the snapshot will contain TraceEntry objects that record each
778
+ # action the allocator took.
779
+ action: Literal[
780
+ 'alloc' # memory allocated
781
+ 'free_requested', # the allocator received a call to free memory
782
+ 'free_completed', # the memory that was requested to be freed is now
783
+ # able to be used in future allocation calls
784
+ 'segment_alloc', # the caching allocator asked cudaMalloc for more memory
785
+ # and added it as a segment in its cache
786
+ 'segment_free', # the caching allocator called cudaFree to return memory
787
+ # to cuda, possibly trying to free up memory to
788
+ # allocate more segments or because empty_cache was called
789
+ 'oom', # the allocator threw an OOM exception. 'size' is
790
+ # the requested number of bytes that did not succeed
791
+ 'snapshot' # the allocator generated a memory snapshot
792
+ # useful to correlate a previously taken
793
+ # snapshot with this trace
794
+ ]
795
+ addr: int # not present for OOM
796
+ frames: List[Frame]
797
+ size: int
798
+ stream: int
799
+ device_free: int # only present for OOM, the amount of
800
+ # memory cuda still reports to be free
801
+
802
+ Returns:
803
+ The Snapshot dictionary object
804
+ """
805
+ return _C._cuda_memorySnapshot()
806
+
807
+
808
+ def _dump_snapshot(filename="dump_snapshot.pickle"):
809
+ """
810
+ Save a pickled version of the `torch.memory._snapshot()` dictionary to a file.
811
+
812
+ This file can be opened by the interactive snapshot viewer at pytorch.org/memory_viz
813
+
814
+ Args:
815
+ filename (str, optional): Name of the file to create. Defaults to "dump_snapshot.pickle".
816
+ """
817
+ s = _snapshot()
818
+ with open(filename, "wb") as f:
819
+ pickle.dump(s, f)
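A minimal sketch of how the history/snapshot helpers above fit together, assuming a CUDA device is available; the workload and the output filename are illustrative only:

.. code-block:: python

    import torch

    # Start recording alloc/free events and their stack traces.
    torch.cuda.memory._record_memory_history("all", max_entries=100_000)

    x = torch.randn(1024, 1024, device="cuda")  # placeholder workload
    y = x @ x
    torch.cuda.synchronize()

    # Write a pickle that can be loaded in the viewer at pytorch.org/memory_viz.
    torch.cuda.memory._dump_snapshot("workload_snapshot.pickle")

    # Stop recording.
    torch.cuda.memory._record_memory_history(None)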
820
+
821
+
822
+ def _save_segment_usage(filename="output.svg", snapshot=None):
823
+ if snapshot is None:
824
+ snapshot = _snapshot()
825
+ with open(filename, "w") as f:
826
+ f.write(_segments(snapshot))
827
+
828
+
829
+ def _save_memory_usage(filename="output.svg", snapshot=None):
830
+ if snapshot is None:
831
+ snapshot = _snapshot()
832
+ with open(filename, "w") as f:
833
+ f.write(_memory(snapshot))
834
+
835
+
836
+ def _set_allocator_settings(env: str):
837
+ return torch._C._cuda_cudaCachingAllocator_set_allocator_settings(env)
838
+
839
+
840
+ def get_allocator_backend() -> str:
841
+ r"""Return a string describing the active allocator backend as set by
842
+ ``PYTORCH_CUDA_ALLOC_CONF``. Currently available backends are
843
+ ``native`` (PyTorch's native caching allocator) and ``cudaMallocAsync``
844
+ (CUDA's built-in asynchronous allocator).
845
+
846
+ .. note::
847
+ See :ref:`cuda-memory-management` for details on choosing the allocator backend.
848
+ """
849
+ return torch._C._cuda_getAllocatorBackend()
850
+
851
+
852
+ class _CUDAAllocator:
853
+ r"""Wrapper over internal CUDA memory allocators."""
854
+
855
+ def __init__(self, allocator: torch._C._cuda_CUDAAllocator):
856
+ self._allocator = allocator
857
+
858
+ def allocator(self):
859
+ return self._allocator
860
+
861
+
862
+ class CUDAPluggableAllocator(_CUDAAllocator):
863
+ r"""CUDA memory allocator loaded from a so file."""
864
+
865
+ def __init__(self, path_to_so_file: str, alloc_fn_name: str, free_fn_name: str):
866
+ r"""Memory allocators are compiled in .so files and loaded dynamically using ctypes.
867
+
868
+ To change the active allocator use the :func:`torch.cuda.memory.change_current_allocator` function.
869
+
870
+ Args:
871
+ path_to_so_file(str): Path in the filesystem to the `.so` file containing
872
+ the allocator functions
873
+ alloc_fn_name(str): Name of the function to perform the memory allocation
874
+ in the so file. The signature must be:
875
+ void* alloc_fn_name(ssize_t size, int device, cudaStream_t stream);
876
+ free_fn_name(str): Name of the function to perform the memory release
877
+ in the so file. The signature must be:
878
+ void free_fn_name(void* ptr, size_t size, cudaStream_t stream);
879
+
880
+ .. warning::
881
+ This is currently supported only on Unix OSes
882
+
883
+ .. note::
884
+ See :ref:`cuda-memory-management` for details on creating and using a custom allocator
885
+ """
886
+ allocator = ctypes.CDLL(path_to_so_file)
887
+ alloc_fn = ctypes.cast(getattr(allocator, alloc_fn_name), ctypes.c_void_p).value
888
+ free_fn = ctypes.cast(getattr(allocator, free_fn_name), ctypes.c_void_p).value
889
+ assert alloc_fn is not None
890
+ assert free_fn is not None
891
+ self._allocator = torch._C._cuda_customAllocator(alloc_fn, free_fn)
892
+
893
+
894
+ def change_current_allocator(allocator: _CUDAAllocator) -> None:
895
+ r"""Change the currently used memory allocator to be the one provided.
896
+
897
+ If the current allocator has already been used/initialized, this function will error.
898
+
899
+
900
+ Args:
901
+ allocator (torch.cuda.memory._CUDAAllocator): allocator to be set as the active one.
902
+ .. note::
903
+ See :ref:`cuda-memory-management` for details on creating and using a custom allocator
904
+ """
905
+ torch._C._cuda_changeCurrentAllocator(allocator.allocator())
906
+
907
+
908
+ def _get_current_allocator() -> _CUDAAllocator:
909
+ r"""Return the allocator being currently used.
910
+
911
+ .. note::
912
+ See :ref:`cuda-memory-management` for details on creating and using a custom allocator
913
+ """
914
+ return _CUDAAllocator(torch._C._cuda_getAllocator())
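To round out the allocator plumbing above, a hedged sketch of swapping in a pluggable allocator; ``alloc.so``, ``my_malloc`` and ``my_free`` are hypothetical names for a user-built shared library, and the swap must happen before the process allocates any CUDA memory (Unix only):

.. code-block:: python

    import torch

    # Hypothetical shared library exposing the two required C symbols.
    new_alloc = torch.cuda.memory.CUDAPluggableAllocator(
        "alloc.so", "my_malloc", "my_free"
    )

    # Fails if the default allocator has already been initialized.
    torch.cuda.memory.change_current_allocator(new_alloc)

    x = torch.zeros(10, device="cuda")  # now served by my_malloc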
llmeval-env/lib/python3.10/site-packages/torch/cuda/nccl.py ADDED
@@ -0,0 +1,137 @@
1
+ import collections
2
+ import warnings
3
+ from typing import Optional, Sequence, Union
4
+
5
+ import torch.cuda
6
+
7
+
8
+ __all__ = ["all_reduce", "reduce", "broadcast", "all_gather", "reduce_scatter"]
9
+
10
+ SUM = 0 # ncclRedOp_t
11
+
12
+
13
+ def is_available(tensors):
14
+ if not hasattr(torch._C, "_nccl_all_reduce"):
15
+ warnings.warn("PyTorch is not compiled with NCCL support")
16
+ return False
17
+
18
+ devices = set()
19
+ for tensor in tensors:
20
+ if tensor.is_sparse:
21
+ return False
22
+ if not tensor.is_contiguous():
23
+ return False
24
+ if not tensor.is_cuda:
25
+ return False
26
+ device = tensor.get_device()
27
+ if device in devices:
28
+ return False
29
+ devices.add(device)
30
+
31
+ return True
32
+
33
+
34
+ def version():
35
+ ver = torch._C._nccl_version()
36
+ major = ver >> 32
37
+ minor = (ver >> 16) & 65535
38
+ patch = ver & 65535
39
+ suffix = torch._C._nccl_version_suffix().decode("utf-8")
40
+ if suffix == "":
41
+ return (major, minor, patch)
42
+ else:
43
+ return (major, minor, patch, suffix)
44
+
45
+
46
+ def unique_id():
47
+ return torch._C._nccl_unique_id()
48
+
49
+
50
+ def init_rank(num_ranks, uid, rank):
51
+ return torch._C._nccl_init_rank(num_ranks, uid, rank)
52
+
53
+
54
+ def _check_sequence_type(inputs: Union[torch.Tensor, Sequence[torch.Tensor]]) -> None:
55
+ if not isinstance(inputs, collections.abc.Container) or isinstance(
56
+ inputs, torch.Tensor
57
+ ):
58
+ raise TypeError("Inputs should be a collection of tensors")
59
+
60
+
61
+ def all_reduce(inputs, outputs=None, op=SUM, streams=None, comms=None):
62
+ _check_sequence_type(inputs)
63
+ if outputs is None:
64
+ outputs = inputs
65
+ _check_sequence_type(outputs)
66
+ torch._C._nccl_all_reduce(inputs, outputs, op, streams, comms)
67
+
68
+
69
+ # `output` used to be `outputs`, taking in a list of tensors. So we have two
70
+ # arguments for BC reasons.
71
+ def reduce(
72
+ inputs: Sequence[torch.Tensor],
73
+ output: Optional[Union[torch.Tensor, Sequence[torch.Tensor]]] = None,
74
+ root: int = 0,
75
+ op: int = SUM,
76
+ streams: Optional[Sequence[torch.cuda.Stream]] = None,
77
+ comms=None,
78
+ *,
79
+ outputs: Optional[Sequence[torch.Tensor]] = None,
80
+ ) -> None:
81
+ _check_sequence_type(inputs)
82
+ _output: torch.Tensor
83
+ if outputs is not None:
84
+ if output is not None:
85
+ raise ValueError(
86
+ "'output' and 'outputs' can not be both specified. 'outputs' is deprecated in "
87
+ "favor of 'output', taking in a single output tensor. The signature of reduce is: "
88
+ "reduce(inputs, output=None, root=0, op=SUM, streams=None, comms=None)."
89
+ )
90
+ else:
91
+ warnings.warn(
92
+ "nccl.reduce with an output tensor list is deprecated. "
93
+ "Please specify a single output tensor with argument 'output' instead instead."
94
+ )
95
+ _output = outputs[root]
96
+ elif not isinstance(output, torch.Tensor) and isinstance(
97
+ output, collections.abc.Sequence
98
+ ):
99
+ # User called old API with positional arguments of list of output tensors.
100
+ warnings.warn(
101
+ "nccl.reduce with an output tensor list is deprecated. "
102
+ "Please specify a single output tensor."
103
+ )
104
+ _output = output[root]
105
+ else:
106
+ _output = inputs[root] if output is None else output
107
+ torch._C._nccl_reduce(inputs, _output, root, op, streams, comms)
108
+
109
+
110
+ def broadcast(
111
+ inputs: Sequence[torch.Tensor], root: int = 0, streams=None, comms=None
112
+ ) -> None:
113
+ _check_sequence_type(inputs)
114
+ torch._C._nccl_broadcast(inputs, root, streams, comms)
115
+
116
+
117
+ def all_gather(
118
+ inputs: Sequence[torch.Tensor],
119
+ outputs: Sequence[torch.Tensor],
120
+ streams=None,
121
+ comms=None,
122
+ ) -> None:
123
+ _check_sequence_type(inputs)
124
+ _check_sequence_type(outputs)
125
+ torch._C._nccl_all_gather(inputs, outputs, streams, comms)
126
+
127
+
128
+ def reduce_scatter(
129
+ inputs: Sequence[torch.Tensor],
130
+ outputs: Sequence[torch.Tensor],
131
+ op: int = SUM,
132
+ streams=None,
133
+ comms=None,
134
+ ) -> None:
135
+ _check_sequence_type(inputs)
136
+ _check_sequence_type(outputs)
137
+ torch._C._nccl_reduce_scatter(inputs, outputs, op, streams, comms)
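A minimal sketch of the low-level wrappers above, assuming an NCCL-enabled build and at least two CUDA devices; it sums one tensor per GPU in place:

.. code-block:: python

    import torch
    import torch.cuda.nccl as nccl

    # One tensor per device; all_reduce with outputs=None reduces in place.
    tensors = [torch.ones(4, device=f"cuda:{i}") for i in range(2)]
    if nccl.is_available(tensors):
        nccl.all_reduce(tensors)
        torch.cuda.synchronize()
        print(tensors[0])  # tensor([2., 2., 2., 2.], device='cuda:0')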
llmeval-env/lib/python3.10/site-packages/torch/cuda/nvtx.py ADDED
@@ -0,0 +1,91 @@
1
+ r"""This package adds support for NVIDIA Tools Extension (NVTX) used in profiling."""
2
+
3
+ from contextlib import contextmanager
4
+
5
+ try:
6
+ from torch._C import _nvtx
7
+ except ImportError:
8
+
9
+ class _NVTXStub:
10
+ @staticmethod
11
+ def _fail(*args, **kwargs):
12
+ raise RuntimeError(
13
+ "NVTX functions not installed. Are you sure you have a CUDA build?"
14
+ )
15
+
16
+ rangePushA = _fail
17
+ rangePop = _fail
18
+ markA = _fail
19
+
20
+ _nvtx = _NVTXStub() # type: ignore[assignment]
21
+
22
+ __all__ = ["range_push", "range_pop", "range_start", "range_end", "mark", "range"]
23
+
24
+
25
+ def range_push(msg):
26
+ """
27
+ Push a range onto a stack of nested range spans. Returns the zero-based depth of the range that is started.
28
+
29
+ Args:
30
+ msg (str): ASCII message to associate with range
31
+ """
32
+ return _nvtx.rangePushA(msg)
33
+
34
+
35
+ def range_pop():
36
+ """Pop a range off of a stack of nested range spans. Returns the zero-based depth of the range that is ended."""
37
+ return _nvtx.rangePop()
38
+
39
+
40
+ def range_start(msg) -> int:
41
+ """
42
+ Mark the start of a range with a string message. It returns a unique handle
43
+ for this range to pass to the corresponding call to rangeEnd().
44
+
45
+ A key difference between this and range_push/range_pop is that the
46
+ range_start/range_end version supports ranges across threads (start on one
47
+ thread and end on another thread).
48
+
49
+ Returns: A range handle (uint64_t) that can be passed to range_end().
50
+
51
+ Args:
52
+ msg (str): ASCII message to associate with the range.
53
+ """
54
+ return _nvtx.rangeStartA(msg)
55
+
56
+
57
+ def range_end(range_id) -> None:
58
+ """
59
+ Mark the end of a range for a given range_id.
60
+
61
+ Args:
62
+ range_id (int): a unique handle for the start range.
63
+ """
64
+ _nvtx.rangeEnd(range_id)
65
+
66
+
67
+ def mark(msg):
68
+ """
69
+ Describe an instantaneous event that occurred at some point.
70
+
71
+ Args:
72
+ msg (str): ASCII message to associate with the event.
73
+ """
74
+ return _nvtx.markA(msg)
75
+
76
+
77
+ @contextmanager
78
+ def range(msg, *args, **kwargs):
79
+ """
80
+ Context manager / decorator that pushes an NVTX range at the beginning
81
+ of its scope, and pops it at the end. If extra arguments are given,
82
+ they are passed as arguments to msg.format().
83
+
84
+ Args:
85
+ msg (str): message to associate with the range
86
+ """
87
+ range_push(msg.format(*args, **kwargs))
88
+ try:
89
+ yield
90
+ finally:
91
+ range_pop()
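A small usage sketch for the NVTX helpers above, assuming a CUDA build of PyTorch; the range names are arbitrary and only show up in an external profiler such as Nsight Systems:

.. code-block:: python

    import torch
    import torch.cuda.nvtx as nvtx

    x = torch.randn(128, 128, device="cuda")

    with nvtx.range("step {}", 0):  # pushed as "step 0", popped on exit
        y = x @ x

    handle = nvtx.range_start("cross-thread range")  # can be ended elsewhere
    torch.cuda.synchronize()
    nvtx.range_end(handle)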
llmeval-env/lib/python3.10/site-packages/torch/cuda/profiler.py ADDED
@@ -0,0 +1,61 @@
1
+ import contextlib
2
+ import tempfile
3
+
4
+ import torch
5
+ from . import check_error, cudart
6
+
7
+ __all__ = ["init", "start", "stop", "profile"]
8
+
9
+ DEFAULT_FLAGS = [
10
+ "gpustarttimestamp",
11
+ "gpuendtimestamp",
12
+ "gridsize3d",
13
+ "threadblocksize",
14
+ "streamid",
15
+ "enableonstart 0",
16
+ "conckerneltrace",
17
+ ]
18
+
19
+
20
+ def init(output_file, flags=None, output_mode="key_value"):
21
+ rt = cudart()
22
+ if not hasattr(rt, "cudaOutputMode"):
23
+ raise AssertionError("HIP does not support profiler initialization!")
24
+ if (
25
+ hasattr(torch.version, "cuda")
26
+ and torch.version.cuda is not None
27
+ and int(torch.version.cuda.split(".")[0]) >= 12
28
+ ):
29
+ # Check https://github.com/pytorch/pytorch/pull/91118
30
+ # cudaProfilerInitialize is no longer needed after CUDA 12
31
+ raise AssertionError("CUDA12+ does not need profiler initialization!")
32
+ flags = DEFAULT_FLAGS if flags is None else flags
33
+ if output_mode == "key_value":
34
+ output_mode_enum = rt.cudaOutputMode.KeyValuePair
35
+ elif output_mode == "csv":
36
+ output_mode_enum = rt.cudaOutputMode.CSV
37
+ else:
38
+ raise RuntimeError(
39
+ "supported CUDA profiler output modes are: key_value and csv"
40
+ )
41
+ with tempfile.NamedTemporaryFile(delete=True) as f:
42
+ f.write(b"\n".join(f.encode("ascii") for f in flags))
43
+ f.flush()
44
+ check_error(rt.cudaProfilerInitialize(f.name, output_file, output_mode_enum))
45
+
46
+
47
+ def start():
48
+ check_error(cudart().cudaProfilerStart())
49
+
50
+
51
+ def stop():
52
+ check_error(cudart().cudaProfilerStop())
53
+
54
+
55
+ @contextlib.contextmanager
56
+ def profile():
57
+ try:
58
+ start()
59
+ yield
60
+ finally:
61
+ stop()
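A short sketch of the context manager above; the idea is to bracket just the region of interest so an externally launched profiler (for example Nsight Systems using its CUDA-profiler-API capture mode) records only that section. On CUDA 12+ no ``init()`` call is needed:

.. code-block:: python

    import torch
    import torch.cuda.profiler as profiler

    x = torch.randn(1024, 1024, device="cuda")

    with profiler.profile():  # cudaProfilerStart ... cudaProfilerStop
        y = x @ x
        torch.cuda.synchronize()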
llmeval-env/lib/python3.10/site-packages/torch/cuda/random.py ADDED
@@ -0,0 +1,179 @@
1
+ from typing import Iterable, List, Union
2
+
3
+ import torch
4
+ from .. import Tensor
5
+ from . import _lazy_call, _lazy_init, current_device, device_count
6
+
7
+ __all__ = [
8
+ "get_rng_state",
9
+ "get_rng_state_all",
10
+ "set_rng_state",
11
+ "set_rng_state_all",
12
+ "manual_seed",
13
+ "manual_seed_all",
14
+ "seed",
15
+ "seed_all",
16
+ "initial_seed",
17
+ ]
18
+
19
+
20
+ def get_rng_state(device: Union[int, str, torch.device] = "cuda") -> Tensor:
21
+ r"""Return the random number generator state of the specified GPU as a ByteTensor.
22
+
23
+ Args:
24
+ device (torch.device or int, optional): The device to return the RNG state of.
25
+ Default: ``'cuda'`` (i.e., ``torch.device('cuda')``, the current CUDA device).
26
+
27
+ .. warning::
28
+ This function eagerly initializes CUDA.
29
+ """
30
+ _lazy_init()
31
+ if isinstance(device, str):
32
+ device = torch.device(device)
33
+ elif isinstance(device, int):
34
+ device = torch.device("cuda", device)
35
+ idx = device.index
36
+ if idx is None:
37
+ idx = current_device()
38
+ default_generator = torch.cuda.default_generators[idx]
39
+ return default_generator.get_state()
40
+
41
+
42
+ def get_rng_state_all() -> List[Tensor]:
43
+ r"""Return a list of ByteTensor representing the random number states of all devices."""
44
+ results = []
45
+ for i in range(device_count()):
46
+ results.append(get_rng_state(i))
47
+ return results
48
+
49
+
50
+ def set_rng_state(
51
+ new_state: Tensor, device: Union[int, str, torch.device] = "cuda"
52
+ ) -> None:
53
+ r"""Set the random number generator state of the specified GPU.
54
+
55
+ Args:
56
+ new_state (torch.ByteTensor): The desired state
57
+ device (torch.device or int, optional): The device to set the RNG state.
58
+ Default: ``'cuda'`` (i.e., ``torch.device('cuda')``, the current CUDA device).
59
+ """
60
+ with torch._C._DisableFuncTorch():
61
+ new_state_copy = new_state.clone(memory_format=torch.contiguous_format)
62
+ if isinstance(device, str):
63
+ device = torch.device(device)
64
+ elif isinstance(device, int):
65
+ device = torch.device("cuda", device)
66
+
67
+ def cb():
68
+ idx = device.index
69
+ if idx is None:
70
+ idx = current_device()
71
+ default_generator = torch.cuda.default_generators[idx]
72
+ default_generator.set_state(new_state_copy)
73
+
74
+ _lazy_call(cb)
75
+
76
+
77
+ def set_rng_state_all(new_states: Iterable[Tensor]) -> None:
78
+ r"""Set the random number generator state of all devices.
79
+
80
+ Args:
81
+ new_states (Iterable of torch.ByteTensor): The desired state for each device.
82
+ """
83
+ for i, state in enumerate(new_states):
84
+ set_rng_state(state, i)
85
+
86
+
87
+ def manual_seed(seed: int) -> None:
88
+ r"""Set the seed for generating random numbers for the current GPU.
89
+
90
+ It's safe to call this function if CUDA is not available; in that
91
+ case, it is silently ignored.
92
+
93
+ Args:
94
+ seed (int): The desired seed.
95
+
96
+ .. warning::
97
+ If you are working with a multi-GPU model, this function is insufficient
98
+ to get determinism. To seed all GPUs, use :func:`manual_seed_all`.
99
+ """
100
+ seed = int(seed)
101
+
102
+ def cb():
103
+ idx = current_device()
104
+ default_generator = torch.cuda.default_generators[idx]
105
+ default_generator.manual_seed(seed)
106
+
107
+ _lazy_call(cb, seed=True)
108
+
109
+
110
+ def manual_seed_all(seed: int) -> None:
111
+ r"""Set the seed for generating random numbers on all GPUs.
112
+
113
+ It's safe to call this function if CUDA is not available; in that
114
+ case, it is silently ignored.
115
+
116
+ Args:
117
+ seed (int): The desired seed.
118
+ """
119
+ seed = int(seed)
120
+
121
+ def cb():
122
+ for i in range(device_count()):
123
+ default_generator = torch.cuda.default_generators[i]
124
+ default_generator.manual_seed(seed)
125
+
126
+ _lazy_call(cb, seed_all=True)
127
+
128
+
129
+ def seed() -> None:
130
+ r"""Set the seed for generating random numbers to a random number for the current GPU.
131
+
132
+ It's safe to call this function if CUDA is not available; in that
133
+ case, it is silently ignored.
134
+
135
+ .. warning::
136
+ If you are working with a multi-GPU model, this function will only initialize
137
+ the seed on one GPU. To initialize all GPUs, use :func:`seed_all`.
138
+ """
139
+
140
+ def cb():
141
+ idx = current_device()
142
+ default_generator = torch.cuda.default_generators[idx]
143
+ default_generator.seed()
144
+
145
+ _lazy_call(cb)
146
+
147
+
148
+ def seed_all() -> None:
149
+ r"""Set the seed for generating random numbers to a random number on all GPUs.
150
+
151
+ It's safe to call this function if CUDA is not available; in that
152
+ case, it is silently ignored.
153
+ """
154
+
155
+ def cb():
156
+ random_seed = 0
157
+ seeded = False
158
+ for i in range(device_count()):
159
+ default_generator = torch.cuda.default_generators[i]
160
+ if not seeded:
161
+ default_generator.seed()
162
+ random_seed = default_generator.initial_seed()
163
+ seeded = True
164
+ else:
165
+ default_generator.manual_seed(random_seed)
166
+
167
+ _lazy_call(cb)
168
+
169
+
170
+ def initial_seed() -> int:
171
+ r"""Return the current random seed of the current GPU.
172
+
173
+ .. warning::
174
+ This function eagerly initializes CUDA.
175
+ """
176
+ _lazy_init()
177
+ idx = current_device()
178
+ default_generator = torch.cuda.default_generators[idx]
179
+ return default_generator.initial_seed()
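A minimal sketch combining the seeding and RNG-state helpers above; it assumes at least one CUDA device and shows that restoring a saved state reproduces the same draws:

.. code-block:: python

    import torch

    torch.cuda.manual_seed_all(1234)
    saved = torch.cuda.get_rng_state_all()

    a = torch.rand(3, device="cuda")
    torch.cuda.set_rng_state_all(saved)  # rewind every device's generator
    b = torch.rand(3, device="cuda")
    assert torch.equal(a, b)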
llmeval-env/lib/python3.10/site-packages/torch/cuda/sparse.py ADDED
@@ -0,0 +1 @@
1
+ # The Tensor classes are added to this module by python_tensor.cpp
llmeval-env/lib/python3.10/site-packages/torch/cuda/streams.py ADDED
@@ -0,0 +1,241 @@
1
+ import ctypes
2
+
3
+ import torch
4
+ from torch._streambase import _EventBase, _StreamBase
5
+ from .._utils import _dummy_type
6
+
7
+
8
+ if not hasattr(torch._C, "_CudaStreamBase"):
9
+ # Define dummy base classes
10
+ torch._C.__dict__["_CudaStreamBase"] = _dummy_type("_CudaStreamBase")
11
+ torch._C.__dict__["_CudaEventBase"] = _dummy_type("_CudaEventBase")
12
+
13
+
14
+ class Stream(torch._C._CudaStreamBase, _StreamBase):
15
+ r"""Wrapper around a CUDA stream.
16
+
17
+ A CUDA stream is a linear sequence of execution that belongs to a specific
18
+ device, independent from other streams. See :ref:`cuda-semantics` for
19
+ details.
20
+
21
+ Args:
22
+ device(torch.device or int, optional): a device on which to allocate
23
+ the stream. If :attr:`device` is ``None`` (default) or a negative
24
+ integer, this will use the current device.
25
+ priority(int, optional): priority of the stream, should be 0 or
26
+ negative, where negative numbers indicate higher priority. By default,
27
+ streams have priority 0.
28
+
29
+ """
30
+
31
+ def __new__(cls, device=None, priority=0, **kwargs):
32
+ # setting device manager is expensive, so we avoid it unless necessary
33
+ if device is None or ("stream_id" in kwargs and "device_index" in kwargs):
34
+ return super().__new__(cls, priority=priority, **kwargs)
35
+ else:
36
+ with torch.cuda.device(device):
37
+ return super().__new__(cls, priority=priority, **kwargs)
38
+
39
+ def wait_event(self, event):
40
+ r"""Make all future work submitted to the stream wait for an event.
41
+
42
+ Args:
43
+ event (torch.cuda.Event): an event to wait for.
44
+
45
+ .. note:: This is a wrapper around ``cudaStreamWaitEvent()``: see
46
+ `CUDA Stream documentation`_ for more info.
47
+
48
+ This function returns without waiting for :attr:`event`: only future
49
+ operations are affected.
50
+
51
+ .. _CUDA Stream documentation:
52
+ https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html
53
+ """
54
+ event.wait(self)
55
+
56
+ def wait_stream(self, stream):
57
+ r"""Synchronize with another stream.
58
+
59
+ All future work submitted to this stream will wait until all kernels
60
+ submitted to a given stream at the time of call complete.
61
+
62
+ Args:
63
+ stream (Stream): a stream to synchronize.
64
+
65
+ .. note:: This function returns without waiting for currently enqueued
66
+ kernels in :attr:`stream`: only future operations are affected.
67
+ """
68
+ self.wait_event(stream.record_event())
69
+
70
+ def record_event(self, event=None):
71
+ r"""Record an event.
72
+
73
+ Args:
74
+ event (torch.cuda.Event, optional): event to record. If not given, a new one
75
+ will be allocated.
76
+
77
+ Returns:
78
+ Recorded event.
79
+ """
80
+ if event is None:
81
+ event = Event()
82
+ event.record(self)
83
+ return event
84
+
85
+ def query(self):
86
+ r"""Check if all the work submitted has been completed.
87
+
88
+ Returns:
89
+ A boolean indicating if all kernels in this stream are completed.
90
+ """
91
+ return super().query()
92
+
93
+ def synchronize(self):
94
+ r"""Wait for all the kernels in this stream to complete.
95
+
96
+ .. note:: This is a wrapper around ``cudaStreamSynchronize()``: see
97
+ `CUDA Stream documentation`_ for more info.
98
+ """
99
+ super().synchronize()
100
+
101
+ @property
102
+ def _as_parameter_(self):
103
+ return ctypes.c_void_p(self.cuda_stream)
104
+
105
+ def __eq__(self, o):
106
+ if isinstance(o, Stream):
107
+ return super().__eq__(o)
108
+ return False
109
+
110
+ def __hash__(self):
111
+ return hash((self.cuda_stream, self.device))
112
+
113
+ def __repr__(self):
114
+ return f"<torch.cuda.Stream device={self.device} cuda_stream={self.cuda_stream:#x}>"
115
+
116
+
117
+ class ExternalStream(Stream):
118
+ r"""Wrapper around an externally allocated CUDA stream.
119
+
120
+ This class is used to wrap streams allocated in other libraries in order
121
+ to facilitate data exchange and multi-library interactions.
122
+
123
+ .. note:: This class doesn't manage the stream life-cycle, it is the user
124
+ responsibility to keep the referenced stream alive while this class is
125
+ being used.
126
+
127
+ Args:
128
+ stream_ptr(int): Integer representation of the `cudaStream_t` value
129
+ allocated externally.
130
+ device(torch.device or int, optional): the device where the stream
131
+ was originally allocated. if device is specified incorrectly,
132
+ subsequent launches using this stream may fail.
133
+ """
134
+
135
+ def __new__(cls, stream_ptr, device=None, **kwargs):
136
+ with torch.cuda.device(device):
137
+ return super().__new__(cls, stream_ptr=stream_ptr, **kwargs)
138
+
139
+
140
+ class Event(torch._C._CudaEventBase, _EventBase):
141
+ r"""Wrapper around a CUDA event.
142
+
143
+ CUDA events are synchronization markers that can be used to monitor the
144
+ device's progress, to accurately measure timing, and to synchronize CUDA
145
+ streams.
146
+
147
+ The underlying CUDA events are lazily initialized when the event is first
148
+ recorded or exported to another process. After creation, only streams on the
149
+ same device may record the event. However, streams on any device can wait on
150
+ the event.
151
+
152
+ Args:
153
+ enable_timing (bool, optional): indicates if the event should measure time
154
+ (default: ``False``)
155
+ blocking (bool, optional): if ``True``, :meth:`wait` will be blocking (default: ``False``)
156
+ interprocess (bool): if ``True``, the event can be shared between processes
157
+ (default: ``False``)
158
+
159
+ .. _CUDA Event Documentation:
160
+ https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__EVENT.html
161
+ """
162
+
163
+ def __new__(cls, enable_timing=False, blocking=False, interprocess=False):
164
+ return super().__new__(
165
+ cls,
166
+ enable_timing=enable_timing,
167
+ blocking=blocking,
168
+ interprocess=interprocess,
169
+ )
170
+
171
+ @classmethod
172
+ def from_ipc_handle(cls, device, handle):
173
+ r"""Reconstruct an event from an IPC handle on the given device."""
174
+ return super().from_ipc_handle(device, handle)
175
+
176
+ def record(self, stream=None):
177
+ r"""Record the event in a given stream.
178
+
179
+ Uses ``torch.cuda.current_stream()`` if no stream is specified. The
180
+ stream's device must match the event's device.
181
+ """
182
+ if stream is None:
183
+ stream = torch.cuda.current_stream()
184
+ super().record(stream)
185
+
186
+ def wait(self, stream=None):
187
+ r"""Make all future work submitted to the given stream wait for this event.
188
+
189
+ Uses ``torch.cuda.current_stream()`` if no stream is specified.
190
+
191
+ .. note:: This is a wrapper around ``cudaStreamWaitEvent()``: see
192
+ `CUDA Event documentation`_ for more info.
193
+ """
194
+ if stream is None:
195
+ stream = torch.cuda.current_stream()
196
+ super().wait(stream)
197
+
198
+ def query(self):
199
+ r"""Check if all work currently captured by event has completed.
200
+
201
+ Returns:
202
+ A boolean indicating if all work currently captured by event has
203
+ completed.
204
+ """
205
+ return super().query()
206
+
207
+ def elapsed_time(self, end_event):
208
+ r"""Return the time elapsed.
209
+
210
+ Time reported in milliseconds after the event was recorded and
211
+ before the end_event was recorded.
212
+ """
213
+ return super().elapsed_time(end_event)
214
+
215
+ def synchronize(self):
216
+ r"""Wait for the event to complete.
217
+
218
+ Waits until the completion of all work currently captured in this event.
219
+ This prevents the CPU thread from proceeding until the event completes.
220
+
221
+ .. note:: This is a wrapper around ``cudaEventSynchronize()``: see
222
+ `CUDA Event documentation`_ for more info.
223
+ """
224
+ super().synchronize()
225
+
226
+ def ipc_handle(self):
227
+ r"""Return an IPC handle of this event.
228
+
229
+ If not recorded yet, the event will use the current device.
230
+ """
231
+ return super().ipc_handle()
232
+
233
+ @property
234
+ def _as_parameter_(self):
235
+ return ctypes.c_void_p(self.cuda_event)
236
+
237
+ def __repr__(self):
238
+ if self.cuda_event:
239
+ return f"<torch.cuda.Event {self._as_parameter_.value:#x}>"
240
+ else:
241
+ return "<torch.cuda.Event uninitialized>"
llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (4.11 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/_compatibility.cpython-310.pyc ADDED
Binary file (1.21 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/_lazy_graph_module.cpython-310.pyc ADDED
Binary file (6.52 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/_pytree.cpython-310.pyc ADDED
Binary file (3.63 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/_symbolic_trace.cpython-310.pyc ADDED
Binary file (34.3 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/annotate.cpython-310.pyc ADDED
Binary file (831 Bytes).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/config.cpython-310.pyc ADDED
Binary file (224 Bytes).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/graph.cpython-310.pyc ADDED
Binary file (55.3 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/graph_module.cpython-310.pyc ADDED
Binary file (24.9 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/immutable_collections.cpython-310.pyc ADDED
Binary file (2.98 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/interpreter.cpython-310.pyc ADDED
Binary file (20.3 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/node.cpython-310.pyc ADDED
Binary file (26.3 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/operator_schemas.cpython-310.pyc ADDED
Binary file (14.2 kB).

llmeval-env/lib/python3.10/site-packages/torch/fx/__pycache__/proxy.cpython-310.pyc ADDED
Binary file (19.9 kB).