applied-ai-018 committed
Commit 1232cda · verified · 1 Parent(s): ca985a7

Add files using upload-large-folder tool
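
The commit message refers to an upload-large-folder tool; as a minimal sketch, assuming this means huggingface_hub's HfApi.upload_large_folder (the repo id, local path, and repo type below are placeholders, not taken from this commit):

from huggingface_hub import HfApi

# Resumable, multi-commit upload of a large local tree; files matched by the
# LFS rules in .gitattributes (e.g. the .so below) are stored as LFS objects.
api = HfApi()
api.upload_large_folder(
    repo_id="applied-ai-018/example-repo",  # placeholder repo id
    folder_path="llmeval-env",              # local folder being mirrored
    repo_type="model",                      # assumed; could also be "dataset"
)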

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. .gitattributes +1 -0
  2. llmeval-env/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8 +3 -0
  3. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/__init__.cpython-310.pyc +0 -0
  4. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/autotune_process.cpython-310.pyc +0 -0
  5. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/comm_analysis.cpython-310.pyc +0 -0
  6. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/comms.cpython-310.pyc +0 -0
  7. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/config.cpython-310.pyc +0 -0
  8. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/coordinate_descent_tuner.cpython-310.pyc +0 -0
  9. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/decomposition.cpython-310.pyc +0 -0
  10. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/dependencies.cpython-310.pyc +0 -0
  11. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/freezing.cpython-310.pyc +0 -0
  12. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/fx_utils.cpython-310.pyc +0 -0
  13. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/graph.cpython-310.pyc +0 -0
  14. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/hooks.cpython-310.pyc +0 -0
  15. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/index_propagation.cpython-310.pyc +0 -0
  16. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/ir.cpython-310.pyc +0 -0
  17. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/lowering.cpython-310.pyc +0 -0
  18. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/ops_handler.cpython-310.pyc +0 -0
  19. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/optimize_indexing.cpython-310.pyc +0 -0
  20. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/pattern_matcher.cpython-310.pyc +0 -0
  21. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/scheduler.cpython-310.pyc +0 -0
  22. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/select_algorithm.cpython-310.pyc +0 -0
  23. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/sizevars.cpython-310.pyc +0 -0
  24. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/test_case.cpython-310.pyc +0 -0
  25. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/test_operators.cpython-310.pyc +0 -0
  26. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/triton_helpers.cpython-310.pyc +0 -0
  27. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/utils.cpython-310.pyc +0 -0
  28. llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/virtualized.cpython-310.pyc +0 -0
  29. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/__init__.py +0 -0
  30. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/__pycache__/cpp.cpython-310.pyc +0 -0
  31. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/__pycache__/triton_foreach.cpython-310.pyc +0 -0
  32. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/__pycache__/triton_utils.cpython-310.pyc +0 -0
  33. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/aoti_runtime/implementation.cpp +87 -0
  34. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/aoti_runtime/interface.cpp +354 -0
  35. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/common.py +1755 -0
  36. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cpp.py +0 -0
  37. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_prefix.h +595 -0
  38. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_wrapper_cpu.py +1851 -0
  39. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_wrapper_cuda.py +328 -0
  40. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/__init__.cpython-310.pyc +0 -0
  41. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cuda_cpp_scheduling.cpython-310.pyc +0 -0
  42. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cuda_env.cpython-310.pyc +0 -0
  43. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cuda_kernel.cpython-310.pyc +0 -0
  44. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cuda_template.cpython-310.pyc +0 -0
  45. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cutlass_epilogue_gen.cpython-310.pyc +0 -0
  46. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cutlass_utils.cpython-310.pyc +0 -0
  47. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/device_op_overrides.cpython-310.pyc +0 -0
  48. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/gemm_template.cpython-310.pyc +0 -0
  49. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/cutlass_lib_extensions/__pycache__/__init__.cpython-310.pyc +0 -0
  50. llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/cutlass_lib_extensions/__pycache__/gemm_operation_extensions.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -199,3 +199,4 @@ llmeval-env/lib/python3.10/site-packages/pandas/_libs/tslibs/offsets.cpython-310
  llmeval-env/lib/python3.10/site-packages/pandas/_libs/hashtable.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text
  llmeval-env/lib/python3.10/site-packages/nvidia/cublas/lib/libcublasLt.so.12 filter=lfs diff=lfs merge=lfs -text
  llmeval-env/lib/python3.10/site-packages/safetensors/_safetensors_rust.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text
+ llmeval-env/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8 filter=lfs diff=lfs merge=lfs -text
llmeval-env/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7db8a17d2c21a6f4684d99a073ca791c6600dbbbfc45e7b786ad8e42d0cf118b
+ size 647553136
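
The three added lines above are a standard Git LFS pointer: the ~650 MB cuDNN library itself lives in LFS storage and is addressed by its sha256 oid. A minimal sketch of reading such a pointer (read_lfs_pointer is a hypothetical helper, and the file on disk is only this small pointer when the LFS smudge filter has not run):

def read_lfs_pointer(path):
    # Each pointer line has the form "<key> <value>", e.g. "size 647553136".
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

pointer = read_lfs_pointer(
    "llmeval-env/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8"
)
assert pointer["oid"].startswith("sha256:")
assert int(pointer["size"]) == 647553136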
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (3.76 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/autotune_process.cpython-310.pyc ADDED
Binary file (18 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/comm_analysis.cpython-310.pyc ADDED
Binary file (5.16 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/comms.cpython-310.pyc ADDED
Binary file (9.99 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/config.cpython-310.pyc ADDED
Binary file (10.5 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/coordinate_descent_tuner.cpython-310.pyc ADDED
Binary file (7.26 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/decomposition.cpython-310.pyc ADDED
Binary file (16.7 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/dependencies.cpython-310.pyc ADDED
Binary file (18.2 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/freezing.cpython-310.pyc ADDED
Binary file (9.31 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/fx_utils.cpython-310.pyc ADDED
Binary file (7.11 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/graph.cpython-310.pyc ADDED
Binary file (34.8 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/hooks.cpython-310.pyc ADDED
Binary file (797 Bytes). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/index_propagation.cpython-310.pyc ADDED
Binary file (10.7 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/ir.cpython-310.pyc ADDED
Binary file (223 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/lowering.cpython-310.pyc ADDED
Binary file (145 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/ops_handler.cpython-310.pyc ADDED
Binary file (26 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/optimize_indexing.cpython-310.pyc ADDED
Binary file (2.66 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/pattern_matcher.cpython-310.pyc ADDED
Binary file (48.4 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/scheduler.cpython-310.pyc ADDED
Binary file (77 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/select_algorithm.cpython-310.pyc ADDED
Binary file (33.3 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/sizevars.cpython-310.pyc ADDED
Binary file (21.5 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/test_case.cpython-310.pyc ADDED
Binary file (1.91 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/test_operators.cpython-310.pyc ADDED
Binary file (1.28 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/triton_helpers.cpython-310.pyc ADDED
Binary file (7.94 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/utils.cpython-310.pyc ADDED
Binary file (45.3 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/__pycache__/virtualized.cpython-310.pyc ADDED
Binary file (15 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/__init__.py ADDED
File without changes
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/__pycache__/cpp.cpython-310.pyc ADDED
Binary file (119 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/__pycache__/triton_foreach.cpython-310.pyc ADDED
Binary file (7.97 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/__pycache__/triton_utils.cpython-310.pyc ADDED
Binary file (3.57 kB). View file
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/aoti_runtime/implementation.cpp ADDED
@@ -0,0 +1,87 @@
+ // NOTE: Like interface.cpp, this file will be copied into AOTInductor
+ // generated output. This file is intended to keep implementation
+ // details separate from the implementation of the AOTI public
+ // interface. Note also that #includes should go into interface.cpp
+ // for simplicity of maintenance.
+
+ namespace torch {
+ namespace aot_inductor {
+ template <typename T>
+ void convert_output_to_handle(
+     const ArrayRefTensor<T>& output,
+     AtenTensorHandle& handle) {
+   handle = output.expensiveCopyToTensor();
+ }
+
+ template <typename... Ts, std::size_t... Is>
+ void convert_outputs_to_handles_helper(
+     const std::tuple<ArrayRefTensor<Ts>...>& outputs,
+     AtenTensorHandle* output_handles,
+     std::index_sequence<Is...>) {
+   (convert_output_to_handle(std::get<Is>(outputs), output_handles[Is]), ...);
+ }
+ template <typename... Ts>
+ void convert_outputs_to_handles(
+     const std::tuple<ArrayRefTensor<Ts>...>& outputs,
+     AtenTensorHandle* output_handles) {
+   convert_outputs_to_handles_helper(
+       outputs, output_handles, std::make_index_sequence<sizeof...(Ts)>());
+ }
+
+ template <typename T>
+ void convert_handle_to_arrayref_tensor(
+     AtenTensorHandle handle,
+     ArrayRefTensor<T>& input) {
+   void* data_ptr;
+   AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_data_ptr(handle, &data_ptr));
+   int64_t dim;
+   AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_dim(handle, &dim));
+   int64_t numel;
+   AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_numel(handle, &numel));
+   int64_t* sizes;
+   AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_sizes(handle, &sizes));
+   int64_t* strides;
+   AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_strides(handle, &strides));
+   int32_t dtype;
+   AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_dtype(handle, &dtype));
+   int32_t device_type;
+   AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_device_type(handle, &device_type));
+   int32_t device_index;
+   AOTI_TORCH_ERROR_CODE_CHECK(
+       aoti_torch_get_device_index(handle, &device_index));
+
+   input = ArrayRefTensor<T>(
+       MiniArrayRef<T>(reinterpret_cast<T*>(data_ptr), numel),
+       MiniArrayRef<const int64_t>(sizes, dim),
+       MiniArrayRef<const int64_t>(strides, dim),
+       device_type,
+       device_index);
+ }
+
+ template <typename... Ts, std::size_t... Is>
+ void convert_handles_to_inputs_helper(
+     AtenTensorHandle* input_handles,
+     std::tuple<ArrayRefTensor<Ts>...>& inputs,
+     std::index_sequence<Is...>) {
+   (convert_handle_to_arrayref_tensor(input_handles[Is], std::get<Is>(inputs)),
+    ...);
+ }
+
+ template <typename... Ts>
+ void convert_handles_to_inputs(
+     AtenTensorHandle* input_handles,
+     std::tuple<ArrayRefTensor<Ts>...>& inputs) {
+   convert_handles_to_inputs_helper(
+       input_handles, inputs, std::make_index_sequence<sizeof...(Ts)>());
+ }
+
+ template <typename T>
+ void assert_numel(const ArrayRefTensor<T>& tensor, int64_t numel) {
+   if (tensor.numel() != numel) {
+     std::stringstream err;
+     err << "incorrect numel for input tensor. expected " << numel << ", got " << tensor.numel();
+     throw std::runtime_error(err.str());
+   }
+ }
+ } // namespace aot_inductor
+ } // namespace torch
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/aoti_runtime/interface.cpp ADDED
@@ -0,0 +1,354 @@
1
+ #include <torch/csrc/inductor/aoti_runtime/arrayref_tensor.h>
2
+ #include <torch/csrc/inductor/aoti_runtime/interface.h>
3
+ #include <torch/csrc/inductor/aoti_runtime/model_container.h>
4
+ #include <torch/csrc/inductor/aoti_runtime/scalar_to_tensor.h>
5
+ #include <torch/csrc/inductor/aoti_runtime/thread_local.h>
6
+
7
+ #include <iostream>
8
+ #include <sstream>
9
+ #include <stdexcept>
10
+ #include <vector>
11
+
12
+ #define CONVERT_EXCEPTION_TO_ERROR_CODE(...) \
13
+ try { \
14
+ __VA_ARGS__ \
15
+ } catch (const std::exception& e) { \
16
+ std::cerr << "Error: " << e.what() << std::endl; \
17
+ return AOTI_RUNTIME_FAILURE; \
18
+ } catch (...) { \
19
+ std::cerr << "Unknown exception occurred." << std::endl; \
20
+ return AOTI_RUNTIME_FAILURE; \
21
+ } \
22
+ return AOTI_RUNTIME_SUCCESS;
23
+
24
+ #define AOTI_VECTOR_SIZE_CHECK(actual_size, expected_size, name) \
25
+ do { \
26
+ AOTI_RUNTIME_CHECK( \
27
+ actual_size == expected_size, \
28
+ "expected " + std::string(name) + " vector size to be " + \
29
+ std::to_string(expected_size) + ", but got " + \
30
+ std::to_string(actual_size)); \
31
+ } while (0)
32
+
33
+ // AOTInductor uses at::addmm_out, which doesn't support
34
+ // arguments that require gradients. For this reason, we
35
+ // enforce no_grad context for run APIs.
36
+ //
37
+ // A RAII, thread local (!) guard that enables or disables grad mode upon
38
+ // construction, and sets it back to the original value upon destruction.
39
+ struct AOTINoGradGuard {
40
+ AOTINoGradGuard() : prev_mode(aoti_torch_grad_mode_is_enabled()) {
41
+ aoti_torch_grad_mode_set_enabled(false);
42
+ }
43
+ ~AOTINoGradGuard() {
44
+ aoti_torch_grad_mode_set_enabled(prev_mode);
45
+ }
46
+ bool prev_mode;
47
+ };
48
+
49
+ extern "C" {
50
+
51
+ AOTIRuntimeError AOTInductorModelContainerCreate(
52
+ AOTInductorModelContainerHandle* container_handle,
53
+ size_t num_models,
54
+ bool is_cpu,
55
+ const char* cubin_dir) {
56
+ return AOTInductorModelContainerCreateWithDevice(
57
+ container_handle,
58
+ num_models,
59
+ is_cpu ? "cpu" : "cuda",
60
+ cubin_dir);
61
+ }
62
+
63
+ AOTIRuntimeError AOTInductorModelContainerCreateWithDevice(
64
+ AOTInductorModelContainerHandle* container_handle,
65
+ size_t num_models,
66
+ const char* device_str,
67
+ const char* cubin_dir) {
68
+ if (num_models == 0) {
69
+ std::cerr << "Error: num_models must be positive, but got 0" << std::endl;
70
+ return AOTI_RUNTIME_FAILURE;
71
+ }
72
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
73
+ std::optional<std::string> cubin_dir_opt;
74
+ if (cubin_dir != nullptr) {
75
+ cubin_dir_opt.emplace(cubin_dir);
76
+ }
77
+ auto* container = new torch::aot_inductor::AOTInductorModelContainer(
78
+ num_models, std::string(device_str), cubin_dir_opt);
79
+ *container_handle =
80
+ reinterpret_cast<AOTInductorModelContainerHandle>(container);
81
+ })
82
+ }
83
+
84
+ AOTIRuntimeError AOTInductorModelContainerDelete(
85
+ AOTInductorModelContainerHandle container_handle) {
86
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
87
+ auto* container =
88
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
89
+ container_handle);
90
+ delete container;
91
+ });
92
+ }
93
+
94
+ AOTIRuntimeError AOTInductorModelContainerRun(
95
+ AOTInductorModelContainerHandle container_handle,
96
+ AtenTensorHandle* input_handles, // array of input AtenTensorHandle; handles
97
+ // are stolen; the array itself is borrowed
98
+ size_t num_inputs,
99
+ AtenTensorHandle*
100
+ output_handles, // array for writing output AtenTensorHandle; handles
101
+ // will be stolen by the caller; the array itself is
102
+ // borrowed
103
+ size_t num_outputs,
104
+ AOTInductorStreamHandle stream_handle,
105
+ AOTIProxyExecutorHandle proxy_executor_handle) {
106
+ auto* container =
107
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
108
+ container_handle);
109
+ AOTI_VECTOR_SIZE_CHECK(num_inputs, container->num_inputs(), "inputs");
110
+ AOTI_VECTOR_SIZE_CHECK(num_outputs, container->num_outputs(), "outputs");
111
+
112
+ auto stream =
113
+ reinterpret_cast<torch::aot_inductor::DeviceStreamType>(stream_handle);
114
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
115
+ AOTINoGradGuard guard;
116
+ container->run(
117
+ input_handles, output_handles, stream, proxy_executor_handle);
118
+ })
119
+ }
120
+
121
+ AOTIRuntimeError AOTInductorModelContainerGetNumConstants(
122
+ AOTInductorModelContainerHandle container_handle,
123
+ size_t* num_constants) {
124
+ auto* container =
125
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
126
+ container_handle);
127
+ CONVERT_EXCEPTION_TO_ERROR_CODE(
128
+ { *num_constants = container->num_constants(); })
129
+ }
130
+
131
+ AOTIRuntimeError AOTInductorModelContainerGetConstantName(
132
+ AOTInductorModelContainerHandle container_handle,
133
+ size_t idx,
134
+ const char** name) {
135
+ auto* container =
136
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
137
+ container_handle);
138
+ CONVERT_EXCEPTION_TO_ERROR_CODE(
139
+ { *name = container->constant_name(idx); })
140
+ }
141
+
142
+ AOTIRuntimeError AOTInductorModelContainerGetConstantOriginalFQN(
143
+ AOTInductorModelContainerHandle container_handle,
144
+ size_t idx,
145
+ const char** original_fqn) {
146
+ auto* container =
147
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
148
+ container_handle);
149
+ CONVERT_EXCEPTION_TO_ERROR_CODE(
150
+ { *original_fqn = container->constant_original_fqn(idx); })
151
+ }
152
+
153
+ AOTIRuntimeError AOTInductorModelContainerGetConstantFromFolded(
154
+ AOTInductorModelContainerHandle container_handle,
155
+ size_t idx,
156
+ bool* from_folded) {
157
+ auto* container =
158
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(container_handle);
159
+ CONVERT_EXCEPTION_TO_ERROR_CODE({ *from_folded = container->constant_from_folded(idx); })
160
+ }
161
+
162
+ AOTIRuntimeError AOTInductorModelContainerGetConstantDtype(
163
+ AOTInductorModelContainerHandle container_handle,
164
+ size_t idx,
165
+ int32_t* dtype) {
166
+ auto* container =
167
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
168
+ container_handle);
169
+ CONVERT_EXCEPTION_TO_ERROR_CODE(
170
+ { *dtype = container->constant_dtype(idx); })
171
+ }
172
+
173
+ AOTIRuntimeError AOTInductorModelContainerUpdateConstantBuffer(
174
+ AOTInductorModelContainerHandle container_handle,
175
+ AOTInductorConstantMapHandle constant_map_handle,
176
+ bool use_inactive,
177
+ bool validate_full_update) {
178
+ auto* container =
179
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
180
+ container_handle);
181
+ auto input_map = reinterpret_cast<std::unordered_map<std::string, AtenTensorHandle>*>(constant_map_handle);
182
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
183
+ container->update_constant_buffer(
184
+ *input_map, use_inactive, validate_full_update);
185
+ })
186
+ }
187
+
188
+ AOTIRuntimeError AOTInductorModelContainerUpdateInactiveConstantBuffer(
189
+ AOTInductorModelContainerHandle container_handle,
190
+ AOTInductorConstantMapHandle constant_map_handle) {
191
+ return AOTInductorModelContainerUpdateConstantBuffer(container_handle,
192
+ constant_map_handle,
193
+ /*use_inactive*/ true,
194
+ /*validate_full_update*/ true);
195
+ }
196
+
197
+ AOTIRuntimeError AOTInductorModelContainerRunConstantFolding(
198
+ AOTInductorModelContainerHandle container_handle,
199
+ bool use_inactive,
200
+ AOTInductorStreamHandle stream_handle,
201
+ AOTIProxyExecutorHandle proxy_executor_handle) {
202
+ auto* container =
203
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
204
+ container_handle);
205
+ auto stream =
206
+ reinterpret_cast<torch::aot_inductor::DeviceStreamType>(stream_handle);
207
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
208
+ AOTINoGradGuard guard;
209
+ container->run_const_fold(use_inactive, stream, proxy_executor_handle);
210
+ })
211
+ }
212
+
213
+ AOTIRuntimeError AOTInductorModelContainerSwapConstantBuffer(
214
+ AOTInductorModelContainerHandle container_handle) {
215
+ auto* container =
216
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
217
+ container_handle);
218
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
219
+ container->swap_constant_buffer();
220
+ })
221
+ }
222
+
223
+ AOTIRuntimeError AOTInductorModelContainerGetNumInputs(
224
+ AOTInductorModelContainerHandle container_handle,
225
+ size_t* ret_num_inputs) {
226
+ auto* container =
227
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
228
+ container_handle);
229
+ CONVERT_EXCEPTION_TO_ERROR_CODE(
230
+ { *ret_num_inputs = container->num_inputs(); })
231
+ }
232
+
233
+ AOTIRuntimeError AOTInductorModelContainerGetInputName(
234
+ AOTInductorModelContainerHandle container_handle,
235
+ size_t input_idx,
236
+ const char** ret_input_names) {
237
+ auto* container =
238
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
239
+ container_handle);
240
+ CONVERT_EXCEPTION_TO_ERROR_CODE(
241
+ { *ret_input_names = container->input_name(input_idx); })
242
+ }
243
+
244
+ AOTIRuntimeError AOTInductorModelContainerGetNumOutputs(
245
+ AOTInductorModelContainerHandle container_handle,
246
+ size_t* ret_num_outputs) {
247
+ auto* container =
248
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
249
+ container_handle);
250
+ CONVERT_EXCEPTION_TO_ERROR_CODE(
251
+ { *ret_num_outputs = container->num_outputs(); })
252
+ }
253
+
254
+ AOTIRuntimeError AOTInductorModelContainerGetOutputName(
255
+ AOTInductorModelContainerHandle container_handle,
256
+ size_t output_idx,
257
+ const char** ret_output_names) {
258
+ auto* container =
259
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
260
+ container_handle);
261
+ CONVERT_EXCEPTION_TO_ERROR_CODE(
262
+ { *ret_output_names = container->output_name(output_idx); })
263
+ }
264
+
265
+ AOTIRuntimeError AOTInductorModelContainerGetCallSpec(
266
+ AOTInductorModelContainerHandle container_handle,
267
+ const char** in_spec,
268
+ const char** out_spec) {
269
+ auto* container =
270
+ reinterpret_cast<torch::aot_inductor::AOTInductorModelContainer*>(
271
+ container_handle);
272
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
273
+ *in_spec = container->get_in_spec();
274
+ *out_spec = container->get_out_spec();
275
+ })
276
+ }
277
+
278
+ AOTIRuntimeError AOTInductorModelCreate(
279
+ AOTInductorModelHandle* model_handle,
280
+ AOTInductorConstantMapHandle constant_map_handle){
281
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
282
+ auto constant_map = std::make_shared<torch::aot_inductor::ConstantMap>();
283
+ auto constant_array = std::make_shared<std::vector<torch::aot_inductor::ConstantHandle>>();
284
+ auto input_map = reinterpret_cast<std::unordered_map<std::string, AtenTensorHandle>*>(constant_map_handle);
285
+
286
+ auto model = new torch::aot_inductor::AOTInductorModel(
287
+ constant_map,
288
+ constant_array,
289
+ "cpu", // device_str is hardcoded, as AOTInductorModelCreate is only use for CPU models
290
+ ""
291
+ );
292
+
293
+ if (input_map) {
294
+ for (auto const& kv : *input_map) {
295
+ constant_map->emplace(kv.first, kv.second);
296
+ }
297
+ } else {
298
+ model->load_constants();
299
+ }
300
+
301
+ *model_handle = reinterpret_cast<AOTInductorModelHandle>(model);
302
+ })}
303
+
304
+ AOTIRuntimeError AOTInductorModelRun(
305
+ AOTInductorModelHandle model_handle,
306
+ AtenTensorHandle* input_handles,
307
+ AtenTensorHandle* output_handles) {
308
+ auto model =
309
+ reinterpret_cast<torch::aot_inductor::AOTInductorModel*>(model_handle);
310
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
311
+ AOTINoGradGuard guard;
312
+ model->run_impl(
313
+ input_handles,
314
+ output_handles,
315
+ (torch::aot_inductor::DeviceStreamType) nullptr,
316
+ nullptr);
317
+ })
318
+ }
319
+
320
+ AOTIRuntimeError AOTInductorModelDelete(AOTInductorModelHandle model_handle){
321
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
322
+ auto model = reinterpret_cast<torch::aot_inductor::AOTInductorModel*>(
323
+ model_handle);
324
+ delete model;
325
+ })}
326
+
327
+ AOTIRuntimeError AOTInductorModelGetNumOutputs(
328
+ AOTInductorModelHandle model_handle,
329
+ size_t* ret_num_outputs) {
330
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
331
+ auto model = reinterpret_cast<torch::aot_inductor::AOTInductorModel*>(model_handle);
332
+ *ret_num_outputs = model->num_outputs();
333
+ })
334
+ }
335
+
336
+ AOTIRuntimeError AOTInductorModelUpdateConstantsMap(
337
+ AOTInductorModelHandle model_handle,
338
+ AOTInductorConstantMapHandle constant_map_handle) {
339
+ auto model =
340
+ reinterpret_cast<torch::aot_inductor::AOTInductorModel*>(model_handle);
341
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
342
+ auto constant_map = std::make_shared<torch::aot_inductor::ConstantMap>();
343
+ auto input_map =
344
+ reinterpret_cast<std::unordered_map<std::string, AtenTensorHandle>*>(
345
+ constant_map_handle);
346
+
347
+ for (auto const& kv : *input_map) {
348
+ constant_map->emplace(kv.first, kv.second);
349
+ }
350
+ model->update_constants_map(std::move(constant_map));
351
+ })
352
+ }
353
+
354
+ } // extern "C"
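
interface.cpp above is the extern "C" surface (container create/run/delete, constant-buffer updates, and name/spec queries) that an AOTInductor-compiled shared library exports. A hedged sketch of exercising such a library from Python, assuming this torch build still exposes torch._export.aot_compile and torch._export.aot_load (these helpers have moved between releases):

import torch
from torch._export import aot_compile, aot_load

class Small(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

example_args = (torch.randn(4, 8),)
# Compile to a shared library whose entry points are the functions above.
so_path = aot_compile(Small(), example_args)
# aot_load drives the container create/run/delete functions behind a callable.
compiled = aot_load(so_path, device="cpu")
print(compiled(*example_args).shape)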
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/common.py ADDED
@@ -0,0 +1,1755 @@
1
+ import contextlib
2
+ import dataclasses
3
+ import functools
4
+ import itertools
5
+ import logging
6
+ import operator
7
+ import re
8
+ from itertools import chain
9
+ from typing import (
10
+ Any,
11
+ Callable,
12
+ ClassVar,
13
+ Dict,
14
+ List,
15
+ NamedTuple,
16
+ Optional,
17
+ Set,
18
+ Tuple,
19
+ TYPE_CHECKING,
20
+ Union,
21
+ )
22
+
23
+ import sympy
24
+ from sympy.printing.printer import Printer
25
+
26
+ import torch
27
+ import torch.fx
28
+ from torch._prims_common import ELEMENTWISE_TYPE_PROMOTION_KIND
29
+ from torch.utils import _pytree as pytree
30
+ from torch.utils._sympy.value_ranges import ValueRanges
31
+
32
+ from .. import config, metrics
33
+ from ..utils import (
34
+ DeferredLineBase,
35
+ do_bench,
36
+ free_symbol_startswith,
37
+ IndentedBuffer,
38
+ sympy_dot,
39
+ sympy_index_symbol,
40
+ sympy_subs,
41
+ unique,
42
+ )
43
+ from ..virtualized import ops, OpsHandler, OpsValue, ReductionType, StoreMode, V
44
+
45
+ if TYPE_CHECKING:
46
+ from ..ir import TensorBox
47
+
48
+ schedule_log = torch._logging.getArtifactLogger(__name__, "schedule")
49
+
50
+
51
+ def data_type_logger(msg):
52
+ if schedule_log.isEnabledFor(logging.DEBUG):
53
+ schedule_log.debug("Data type propagation: %s", msg)
54
+
55
+
56
+ @dataclasses.dataclass
57
+ class WorkspaceArg:
58
+ """A temporary buffer used for a single kernel, then discarded.
59
+
60
+ Not registered as a traditional buffer since there are no users,
61
+ so it would be dead code eliminated.
62
+ """
63
+
64
+ nbytes: sympy.Expr
65
+ zero_fill: bool
66
+
67
+
68
+ @dataclasses.dataclass
69
+ class TensorArg:
70
+ name: str
71
+ buffer: str
72
+ dtype: torch.dtype
73
+ offset: sympy.Expr = sympy.Integer(0)
74
+
75
+
76
+ @dataclasses.dataclass
77
+ class SizeArg:
78
+ name: str
79
+ expr: sympy.Expr
80
+
81
+
82
+ @dataclasses.dataclass
83
+ class DeviceCodegen:
84
+ scheduling: type
85
+ wrapper_codegen: type
86
+
87
+
88
+ KernelArgType = Union[WorkspaceArg, TensorArg, SizeArg]
89
+
90
+ device_codegens: Dict[str, DeviceCodegen] = {}
91
+
92
+
93
+ class DeviceOpOverrides:
94
+ def import_get_raw_stream_as(self, name):
95
+ raise NotImplementedError()
96
+
97
+ def set_device(self, device_idx):
98
+ raise NotImplementedError()
99
+
100
+ def synchronize(self):
101
+ raise NotImplementedError()
102
+
103
+ def device_guard(self, device_idx):
104
+ raise NotImplementedError()
105
+
106
+
107
+ device_op_overrides_dict: Dict[str, DeviceOpOverrides] = {}
108
+
109
+
110
+ # The code generated by Inductor consists of two main parts: kernel code and wrapper code.
111
+ # For any new backend looking to integrate with Inductor, customization of these two main
112
+ # parts are necessary to generate its specific code.
113
+ #
114
+ # Kernel code generation is determined by different Scheduling. Consequently, a new
115
+ # backend needs to provide a custom Scheduling for its unique kernel code generation. Currently,
116
+ # CppScheduling and TritonScheduling serve the C++/OpenMP and Triton backends, respectively.
117
+ #
118
+ # For the Wrapper, Inductor provides a WrapperCodeGen class to generate the Python wrapper code
119
+ # that bridges kernels. This allows out-of-tree backends to inherit from WrapperCodeGen,
120
+ # and override specific member functions to create backend-specific Python wrapper code.
121
+ #
122
+ # Other classes, such as CppKernel and TritonKernel, used for code generation, typically form part
123
+ # of the logic for either Scheduling or WrapperCodeGen. So the Scheduling and WrapperCodeGen interfaces
124
+ # provide flexibility to the backend. A backend can choose to implement these classes from scratch,
125
+ # or reuse them by extending and overriding as necessary. And Inductor provides the registration API,
126
+ # register_backend_for_device, to equip a new backend at runtime.
127
+ #
128
+ # Intel has developed a new backend on top of Triton to support Intel GPUs, leveraging these interfaces.
129
+ # This backend can be used as a reference:
130
+ # https://github.com/intel/intel-extension-for-pytorch/blob/5dcc9d57e5422cf295e1a1ee97896d6b6a554a85/intel_extension_for_pytorch/_inductor/__init__.py#L9
131
+ def register_backend_for_device(
132
+ device: str, device_scheduling: type, device_wrapper_codegen: type
133
+ ):
134
+ device_codegens[device] = DeviceCodegen(device_scheduling, device_wrapper_codegen)
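
The comment block above describes Inductor's out-of-tree backend registration API; a minimal sketch of how a hypothetical backend could call it (the device name and both classes below are invented placeholders, not part of this file):

from torch._inductor.codegen.common import register_backend_for_device
from torch._inductor.codegen.wrapper import WrapperCodeGen

class MyAccelScheduling:
    # Plays the role CppScheduling/TritonScheduling play for their devices.
    def __init__(self, scheduler):
        self.scheduler = scheduler

class MyAccelWrapperCodeGen(WrapperCodeGen):
    # Override wrapper-codegen hooks here as needed.
    pass

register_backend_for_device("my_accel", MyAccelScheduling, MyAccelWrapperCodeGen)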
135
+
136
+
137
+ def get_scheduling_for_device(device: str):
138
+ return device_codegens[device].scheduling if device in device_codegens else None
139
+
140
+
141
+ def get_wrapper_codegen_for_device(device: str):
142
+ return (
143
+ device_codegens[device].wrapper_codegen if device in device_codegens else None
144
+ )
145
+
146
+
147
+ def index_prevent_reordering(index: List[sympy.Expr], index_vars, sizes):
148
+ from ..ir import FlexibleLayout
149
+
150
+ # added contiguous index prevents reordering
151
+ return [*index, sympy_dot(index_vars, FlexibleLayout.contiguous_strides(sizes))]
152
+
153
+
154
+ def register_device_op_overrides(device: str, device_op_overrides: DeviceOpOverrides):
155
+ device_op_overrides_dict[device] = device_op_overrides
156
+
157
+
158
+ def get_device_op_overrides(device: str):
159
+ assert isinstance(device, str)
160
+
161
+ if not device_op_overrides_dict.keys():
162
+ from .cuda import device_op_overrides # noqa: F401
163
+
164
+ if device in device_op_overrides_dict.keys():
165
+ return device_op_overrides_dict[device]
166
+
167
+ return DeviceOpOverrides()
168
+
169
+
170
+ @functools.lru_cache(None)
171
+ def boolean_ops():
172
+ return (
173
+ "is_inf",
174
+ "is_nan",
175
+ "bitwise_xor",
176
+ "logical_not",
177
+ "signbit",
178
+ "le",
179
+ "lt",
180
+ "ge",
181
+ "gt",
182
+ "eq",
183
+ "ne",
184
+ )
185
+
186
+
187
+ DTYPE_TO_COMPUTATION_DTYPE = {
188
+ torch.bfloat16: torch.float,
189
+ torch.float16: torch.float,
190
+ **{
191
+ dtype: dtype
192
+ for dtype in [
193
+ torch.bool,
194
+ torch.float32,
195
+ torch.float64,
196
+ torch.int8,
197
+ torch.int16,
198
+ torch.int32,
199
+ torch.int64,
200
+ torch.uint8,
201
+ torch.uint16,
202
+ torch.uint32,
203
+ torch.uint64,
204
+ ]
205
+ },
206
+ }
207
+
208
+
209
+ class DataTypePropagation:
210
+ def __init__(self, body) -> None:
211
+ self.body = body
212
+ self.graphs: Dict[Union[Callable[..., Any], str], Any] = {
213
+ "root": body.root_block.graph
214
+ }
215
+ for k, v in body.subblocks.items():
216
+ self.graphs[k] = v.graph
217
+
218
+ def deduce_node_dtype_by_inputs(self, node: torch.fx.Node):
219
+ inputs = node.all_input_nodes
220
+ input_nodes = [
221
+ n for n in inputs if isinstance(n, torch.fx.Node) and n.op != "placeholder"
222
+ ]
223
+ if len(input_nodes) == 0:
224
+ return None
225
+
226
+ all_input_nodes_propagated = all(
227
+ OptimizationContext.key in n.meta
228
+ and n.meta[OptimizationContext.key].dtype is not None
229
+ for n in input_nodes
230
+ )
231
+ if not all_input_nodes_propagated:
232
+ return None
233
+
234
+ return functools.reduce(
235
+ torch.promote_types,
236
+ [n.meta[OptimizationContext.key].dtype for n in input_nodes],
237
+ )
238
+
239
+ def deduce_node_dtype_by_subgraph(self, node: torch.fx.Node):
240
+ sub_graph = self.graphs[node.target]
241
+ dtype = self.propagate_graph(sub_graph)
242
+ assert dtype
243
+ return dtype
244
+
245
+ def deduce_node_dtype(self, node: torch.fx.Node):
246
+ if node.target in boolean_ops():
247
+ return torch.bool
248
+
249
+ if node.op == "placeholder":
250
+ return None
251
+
252
+ if node.target == "output":
253
+ # we can only infer the output node's dtype if it has exactly 1 arg
254
+ if len(node.args) != 1:
255
+ return None
256
+
257
+ if node.target in (
258
+ "to_dtype",
259
+ "index_expr",
260
+ ):
261
+ return node.args[-1]
262
+
263
+ if node.target in (
264
+ "rand",
265
+ "randn",
266
+ ):
267
+ return torch.float
268
+
269
+ if node.target in (
270
+ "get_index",
271
+ "index_expr",
272
+ ):
273
+ return torch.int64
274
+
275
+ if node.target in (
276
+ "load",
277
+ "store",
278
+ "store_reduction",
279
+ ):
280
+ buf_name = node.args[1]
281
+ return V.graph.get_dtype(buf_name) # type: ignore[arg-type]
282
+
283
+ if node.target == operator.getitem:
284
+ return self.deduce_node_dtype(node.args[0]) # type: ignore[arg-type]
285
+
286
+ assert isinstance(node.target, str)
287
+
288
+ if node.target == "reduction":
289
+ return node.args[1]
290
+
291
+ if node.target == "constant":
292
+ return DTYPE_TO_COMPUTATION_DTYPE[node.args[-1]] # type: ignore[index]
293
+
294
+ if node.target.startswith("masked_subblock"):
295
+ return self.deduce_node_dtype_by_subgraph(node)
296
+
297
+ return self.deduce_node_dtype_by_inputs(node)
298
+
299
+ def propagate_graph(self, graph: torch.fx.Graph):
300
+ assert graph.nodes
301
+ graph_dtype = None
302
+ # For masked_subblock, we use output's dtype to represent
303
+ # the dtype of this subgraph. For other cases, graph_dtype
304
+ # might be None
305
+ for node in graph.nodes:
306
+ if OptimizationContext.key in node.meta:
307
+ opt_ctx = node.meta[OptimizationContext.key]
308
+ else:
309
+ opt_ctx = OptimizationContext()
310
+
311
+ opt_ctx.dtype = self.deduce_node_dtype(node)
312
+ node.meta[OptimizationContext.key] = opt_ctx
313
+ if node.target == "output":
314
+ graph_dtype = opt_ctx.dtype
315
+ return graph_dtype
316
+
317
+ def propagate(self):
318
+ self.propagate_graph(self.graphs["root"])
319
+
320
+ @classmethod
321
+ def propagate_loopbody(cls, body):
322
+ return cls(body).propagate()
323
+
324
+ @classmethod
325
+ def propagate_scheduler_node(cls, node):
326
+ from ..ir import LoopBody
327
+ from ..scheduler import SchedulerNode
328
+
329
+ assert isinstance(node, SchedulerNode)
330
+ assert isinstance(node._body, LoopBody)
331
+ DataTypePropagation.propagate_loopbody(node._body)
332
+
333
+
334
+ class ExprPrinter(Printer):
335
+ @staticmethod
336
+ def paren(string):
337
+ def all_in_parens(string):
338
+ if string[0] != "(" or len(string) < 2:
339
+ return False
340
+ count = 1
341
+ for i, char in enumerate(string[1:]):
342
+ if char == "(":
343
+ count += 1
344
+ elif char == ")":
345
+ count -= 1
346
+ if count == 0 and i != len(string) - 2:
347
+ return False
348
+ assert count == 0
349
+ return True
350
+
351
+ if (
352
+ isinstance(string, CSEVariable)
353
+ or re.match(r"^[a-z0-9_.]+$", string, re.I)
354
+ or re.match(r"^\([^)]*\)$", string, re.I)
355
+ or string == ""
356
+ ):
357
+ return string
358
+ # don't put extra parens for strings that are already wrapped in parens
359
+ if all_in_parens(string):
360
+ return string
361
+ return f"({string})"
362
+
363
+ def _print_Infinity(self, expr):
364
+ return "math.inf"
365
+
366
+ def _print_NegativeInfinity(self, expr):
367
+ return "-math.inf"
368
+
369
+ def _print_Relational(self, expr):
370
+ return f" {expr.rel_op} ".join(map(self.paren, map(self._print, expr.args)))
371
+
372
+ def _print_Mul(self, expr):
373
+ return "*".join(map(self.paren, map(self._print, expr.args)))
374
+
375
+ def _print_Add(self, expr):
376
+ return " + ".join(map(self.paren, map(self._print, expr.args)))
377
+
378
+ def _print_Mod(self, expr):
379
+ return " % ".join(map(self.paren, map(self._print, expr.args)))
380
+
381
+ def _print_FloorDiv(self, expr):
382
+ raise NotImplementedError(f"_print_FloorDiv not implemented for {type(self)}")
383
+
384
+ def _print_CleanDiv(self, expr):
385
+ return self._print_FloorDiv(expr)
386
+
387
+ def _print_GreaterThan(self, expr):
388
+ # GreaterThan: >=
389
+ # StrictlyGreaterThan: >
390
+ # Go figure...
391
+ return " >= ".join(map(self.paren, map(self._print, expr.args)))
392
+
393
+ def _print_align(self, expr):
394
+ assert len(expr.args) == 1
395
+ return f"align({self._print(expr.args[0])})"
396
+
397
+
398
+ class PythonPrinter(ExprPrinter):
399
+ def _print_ModularIndexing(self, expr):
400
+ x, div, mod = expr.args
401
+ x = self.paren(self.doprint(x))
402
+ div = self.paren(self.doprint(div))
403
+ mod = self.paren(self.doprint(mod))
404
+ if div != "1":
405
+ x = f"({x} // {div})"
406
+ return f"{x} % {mod}"
407
+
408
+ def _print_FloorDiv(self, expr):
409
+ x, div = expr.args
410
+ x = self.paren(self.doprint(x))
411
+ div = self.paren(self.doprint(div))
412
+ return f"({x} // {div})"
413
+
414
+ def _helper_sqrt(self, expr):
415
+ return f"math.sqrt({self._print(expr)})"
416
+
417
+ def _print_Pow(self, expr):
418
+ # Pow() confuses triton
419
+ base, exp = expr.args
420
+ # NB: Remember this is sizevar computation! You don't typically
421
+ # expect to have to do floating point computation including exponents
422
+ # in sizevar compute. Instead of adding support for floating
423
+ # point pow, you should make upstream retranslate the Sympy expression
424
+ # into Tensor expressions earlier and do that instead.
425
+ if exp == 0.5:
426
+ return self._helper_sqrt(base)
427
+ elif exp == -0.5:
428
+ return "1/" + self._helper_sqrt(base)
429
+ base = self._print(base)
430
+ assert exp == int(exp), exp
431
+ exp = int(exp)
432
+ if exp > 0:
433
+ return "*".join([self.paren(base)] * exp)
434
+ elif exp < 0:
435
+ return "1/" + self.paren("*".join([self.paren(base)] * abs(exp)))
436
+ else: # exp == 0
437
+ return "1"
438
+
439
+ def _print_floor(self, expr):
440
+ assert len(expr.args) == 1
441
+ return f"math.floor({self._print(expr.args[0])})"
442
+
443
+ def _print_ceiling(self, expr):
444
+ assert len(expr.args) == 1
445
+ return f"math.ceil({self._print(expr.args[0])})"
446
+
447
+ def _print_Abs(self, expr):
448
+ assert len(expr.args) == 1
449
+ return f"abs({self._print(expr.args[0])})"
450
+
451
+ def _print_Max(self, expr):
452
+ assert len(expr.args) >= 2
453
+ return f"max({', '.join(map(self._print, expr.args))})"
454
+
455
+ def _print_Min(self, expr):
456
+ assert len(expr.args) >= 2
457
+ return f"min({', '.join(map(self._print, expr.args))})"
458
+
459
+ def _print_cos(self, expr):
460
+ assert len(expr.args) == 1
461
+ return f"math.cos({self._print(expr.args[0])})"
462
+
463
+ def _print_cosh(self, expr):
464
+ assert len(expr.args) == 1
465
+ return f"math.cosh({self._print(expr.args[0])})"
466
+
467
+ def _print_acos(self, expr):
468
+ assert len(expr.args) == 1
469
+ return f"math.acos({self._print(expr.args[0])})"
470
+
471
+ def _print_sin(self, expr):
472
+ assert len(expr.args) == 1
473
+ return f"math.sin({self._print(expr.args[0])})"
474
+
475
+ def _print_sinh(self, expr):
476
+ assert len(expr.args) == 1
477
+ return f"math.sinh({self._print(expr.args[0])})"
478
+
479
+ def _print_asin(self, expr):
480
+ assert len(expr.args) == 1
481
+ return f"math.asin({self._print(expr.args[0])})"
482
+
483
+ def _print_tan(self, expr):
484
+ assert len(expr.args) == 1
485
+ return f"math.tan({self._print(expr.args[0])})"
486
+
487
+ def _print_tanh(self, expr):
488
+ assert len(expr.args) == 1
489
+ return f"math.tanh({self._print(expr.args[0])})"
490
+
491
+ def _print_atan(self, expr):
492
+ assert len(expr.args) == 1
493
+ return f"math.atan({self._print(expr.args[0])})"
494
+
495
+ def _print_Round(self, expr):
496
+ assert len(expr.args) == 1
497
+ return f"round({self._print(expr.args[0])})"
498
+
499
+ def _print_RoundDecimal(self, expr):
500
+ assert len(expr.args) == 2
501
+ number, ndigits = expr.args
502
+ assert isinstance(ndigits, sympy.Integer)
503
+ return f"round({self._print(number)}, {ndigits})"
504
+
505
+
506
+ class OpOverrides:
507
+ def __init__(self, parent):
508
+ super().__init__()
509
+ self._parent = parent
510
+
511
+ def __getattr__(self, item):
512
+ return getattr(self._parent, item)
513
+
514
+ @staticmethod
515
+ def identity(value):
516
+ # used to trigger cse
517
+ return value
518
+
519
+ @staticmethod
520
+ def constant(value, dtype):
521
+ return repr(value)
522
+
523
+ @staticmethod
524
+ def reciprocal(x):
525
+ return ops.truediv("1", x)
526
+
527
+ @staticmethod
528
+ def square(x):
529
+ return ops.mul(x, x)
530
+
531
+ @staticmethod
532
+ def bitwise_not(x):
533
+ return f"~{ExprPrinter.paren(x)}"
534
+
535
+ @staticmethod
536
+ def logical_not(a):
537
+ return f"{ExprPrinter.paren(a)} == 0"
538
+
539
+ @staticmethod
540
+ def bitwise_and(x, y):
541
+ return f"{ExprPrinter.paren(x)} & {ExprPrinter.paren(y)}"
542
+
543
+ @staticmethod
544
+ def bitwise_or(x, y):
545
+ return f"{ExprPrinter.paren(x)} | {ExprPrinter.paren(y)}"
546
+
547
+ @staticmethod
548
+ def bitwise_xor(x, y):
549
+ return f"{ExprPrinter.paren(x)} ^ {ExprPrinter.paren(y)}"
550
+
551
+ @staticmethod
552
+ def bitwise_left_shift(x, y):
553
+ return f"{ExprPrinter.paren(x)} << {ExprPrinter.paren(y)}"
554
+
555
+ @staticmethod
556
+ def bitwise_right_shift(x, y):
557
+ return f"{ExprPrinter.paren(x)} >> {ExprPrinter.paren(y)}"
558
+
559
+ @staticmethod
560
+ def remainder(a, b):
561
+ r = ops.mod(a, b)
562
+ return ops.where(f"(({r} != 0) & (({r} < 0) != ({b} < 0)))", ops.add(r, b), r)
563
+
564
+ @staticmethod
565
+ def load_seed(name, offset):
566
+ return ops.load(name, sympy.Integer(offset))
567
+
568
+ @classmethod
569
+ def _initialize_pointwise_overrides(cls, target):
570
+ assert target in {"triton", "cpp", "cppvec"}, target
571
+
572
+ def pointwise_factory_1(impl):
573
+ def func(x):
574
+ return impl.format(x=x)
575
+
576
+ return func
577
+
578
+ def pointwise_factory_2(impl):
579
+ def func(x, y):
580
+ return impl.format(x=x, y=y)
581
+
582
+ return func
583
+
584
+ for funcname, data in pointwise_overrides_data.items():
585
+ impl = getattr(data, target)
586
+ if isinstance(impl, str):
587
+ nof_args = 2 if "{y}" in impl else 1
588
+ # extend the following dictionary with factory
589
+ # functions for a specific number of arguments as
590
+ # needed:
591
+ factory = {1: pointwise_factory_1, 2: pointwise_factory_2}[nof_args]
592
+ setattr(cls, funcname, staticmethod(factory(impl)))
593
+
594
+
595
+ @dataclasses.dataclass
596
+ class OverridesData:
597
+ name: str
598
+ cpp: str
599
+ triton: Optional[str] = None # None when not impl in libdevice/triton
600
+ cppvec: Optional[str] = None # None when not impl in aten/.../vec
601
+ type_promotion_kind: ELEMENTWISE_TYPE_PROMOTION_KIND = (
602
+ ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT
603
+ )
604
+
605
+
606
+ pointwise_overrides_data: Dict[str, OverridesData] = dict(
607
+ airy_ai=OverridesData(
608
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
609
+ cpp="airy_ai_forward({x})",
610
+ name="special_airy_ai",
611
+ ),
612
+ bessel_j0=OverridesData(
613
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
614
+ cpp="bessel_j0_forward({x})",
615
+ triton="libdevice.j0({x})",
616
+ name="special_bessel_j0",
617
+ ),
618
+ bessel_j1=OverridesData(
619
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
620
+ cpp="bessel_j1_forward({x})",
621
+ triton="libdevice.j1({x})",
622
+ name="special_bessel_j1",
623
+ ),
624
+ bessel_y0=OverridesData(
625
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
626
+ cpp="bessel_y0_forward({x})",
627
+ triton="libdevice.y0({x})",
628
+ name="special_bessel_y0",
629
+ ),
630
+ bessel_y1=OverridesData(
631
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
632
+ cpp="bessel_y1_forward({x})",
633
+ triton="libdevice.y1({x})",
634
+ name="special_bessel_y1",
635
+ ),
636
+ digamma=OverridesData(
637
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
638
+ cpp="calc_digamma({x})",
639
+ cppvec="{x}.digamma()",
640
+ name="digamma",
641
+ ),
642
+ # no cpp nor triton implementation for entr, it is defined as decomposition
643
+ # erf, erfc
644
+ erfcx=OverridesData(
645
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
646
+ cpp="calc_erfcx({x})",
647
+ triton="libdevice.erfcx({x})",
648
+ name="special_erfcx",
649
+ ),
650
+ # erfinv, exp2, expit, gammaln
651
+ igamma=OverridesData(
652
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
653
+ cpp="calc_igamma({x}, {y})",
654
+ name="igamma",
655
+ ),
656
+ igammac=OverridesData(
657
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
658
+ cpp="calc_igammac({x}, {y})",
659
+ name="igammac",
660
+ ),
661
+ gammainc=OverridesData(
662
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
663
+ cpp="calc_igamma({x}, {y})",
664
+ name="special_gammainc",
665
+ ),
666
+ gammaincc=OverridesData(
667
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
668
+ cpp="calc_igammac({x}, {y})",
669
+ name="special_gammaincc",
670
+ ),
671
+ i0=OverridesData(
672
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
673
+ cpp="calc_i0({x})",
674
+ triton="libdevice.cyl_bessel_i0({x})",
675
+ cppvec="{x}.i0()",
676
+ name="i0",
677
+ ),
678
+ i0e=OverridesData(
679
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
680
+ cpp="calc_i0e({x})",
681
+ cppvec="{x}.i0e()",
682
+ name="special_i0e",
683
+ ),
684
+ i1=OverridesData(
685
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
686
+ cpp="calc_i1({x})",
687
+ triton="libdevice.cyl_bessel_i1({x})",
688
+ name="special_i1",
689
+ ),
690
+ i1e=OverridesData(
691
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
692
+ cpp="calc_i1e({x})",
693
+ name="special_i1e",
694
+ ),
695
+ log_ndtr=OverridesData(
696
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
697
+ cpp="calc_log_ndtr({x})",
698
+ name="special_log_ndtr",
699
+ ),
700
+ # logit
701
+ modified_bessel_i0=OverridesData(
702
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
703
+ cpp="modified_bessel_i0_forward({x})",
704
+ triton="libdevice.cyl_bessel_i0({x})",
705
+ name="special_modified_bessel_i0",
706
+ ),
707
+ modified_bessel_i1=OverridesData(
708
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
709
+ cpp="modified_bessel_i1_forward({x})",
710
+ triton="libdevice.cyl_bessel_i1({x})",
711
+ name="special_modified_bessel_i1",
712
+ ),
713
+ modified_bessel_k0=OverridesData(
714
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
715
+ cpp="modified_bessel_k0_forward({x})",
716
+ name="special_modified_bessel_k0",
717
+ ),
718
+ modified_bessel_k1=OverridesData(
719
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
720
+ cpp="modified_bessel_k1_forward({x})",
721
+ name="special_modified_bessel_k1",
722
+ ),
723
+ # multigamma
724
+ ndtr=OverridesData(
725
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
726
+ cpp="calc_ndtr({x})",
727
+ name="special_ndtr",
728
+ ),
729
+ ndtri=OverridesData(
730
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
731
+ cpp="calc_ndtri({x})",
732
+ name="special_ndtri",
733
+ ),
734
+ polygamma=OverridesData(
735
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
736
+ cpp="calc_polygamma({y}, {x})",
737
+ name="polygamma",
738
+ ),
739
+ # psi - alias to digamma
740
+ # round
741
+ scaled_modified_bessel_k0=OverridesData(
742
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
743
+ cpp="scaled_modified_bessel_k0_forward({x})",
744
+ name="special_scaled_modified_bessel_k0",
745
+ ),
746
+ scaled_modified_bessel_k1=OverridesData(
747
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
748
+ cpp="scaled_modified_bessel_k1_forward({x})",
749
+ name="special_scaled_modified_bessel_k1",
750
+ ),
751
+ # sinc
752
+ spherical_bessel_j0=OverridesData(
753
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
754
+ cpp="spherical_bessel_j0_forward({x})",
755
+ name="special_spherical_bessel_j0",
756
+ ),
757
+ zeta=OverridesData(
758
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
759
+ cpp="zeta({x}, {y})",
760
+ name="special_zeta",
761
+ ),
762
+ chebyshev_polynomial_t=OverridesData(
763
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
764
+ cpp="chebyshev_polynomial_t_forward({x}, {y})",
765
+ name="special_chebyshev_polynomial_t",
766
+ ),
767
+ chebyshev_polynomial_u=OverridesData(
768
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
769
+ cpp="chebyshev_polynomial_u_forward({x}, {y})",
770
+ name="special_chebyshev_polynomial_u",
771
+ ),
772
+ chebyshev_polynomial_v=OverridesData(
773
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
774
+ cpp="chebyshev_polynomial_v_forward({x}, {y})",
775
+ name="special_chebyshev_polynomial_v",
776
+ ),
777
+ chebyshev_polynomial_w=OverridesData(
778
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
779
+ cpp="chebyshev_polynomial_w_forward({x}, {y})",
780
+ name="special_chebyshev_polynomial_w",
781
+ ),
782
+ legendre_polynomial_p=OverridesData(
783
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
784
+ cpp="legendre_polynomial_p_forward({x}, {y})",
785
+ name="special_legendre_polynomial_p",
786
+ ),
787
+ shifted_chebyshev_polynomial_t=OverridesData(
788
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
789
+ cpp="shifted_chebyshev_polynomial_t_forward({x}, {y})",
790
+ name="special_shifted_chebyshev_polynomial_t",
791
+ ),
792
+ shifted_chebyshev_polynomial_u=OverridesData(
793
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
794
+ cpp="shifted_chebyshev_polynomial_u_forward({x}, {y})",
795
+ name="special_shifted_chebyshev_polynomial_u",
796
+ ),
797
+ shifted_chebyshev_polynomial_v=OverridesData(
798
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
799
+ cpp="shifted_chebyshev_polynomial_v_forward({x}, {y})",
800
+ name="special_shifted_chebyshev_polynomial_v",
801
+ ),
802
+ shifted_chebyshev_polynomial_w=OverridesData(
803
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
804
+ cpp="shifted_chebyshev_polynomial_w_forward({x}, {y})",
805
+ name="special_shifted_chebyshev_polynomial_w",
806
+ ),
807
+ hermite_polynomial_h=OverridesData(
808
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
809
+ cpp="hermite_polynomial_h_forward({x}, {y})",
810
+ name="special_hermite_polynomial_h",
811
+ ),
812
+ hermite_polynomial_he=OverridesData(
813
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
814
+ cpp="hermite_polynomial_he_forward({x}, {y})",
815
+ name="special_hermite_polynomial_he",
816
+ ),
817
+ laguerre_polynomial_l=OverridesData(
818
+ type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT,
819
+ cpp="laguerre_polynomial_l_forward({x}, {y})",
820
+ name="special_laguerre_polynomial_l",
821
+ ),
822
+ )
823
+
824
+
825
+ # Use mypy to check protocol implemented correctly
826
+ def _typecheck_OpOverrides(h: OpOverrides) -> OpsHandler[str]:
827
+ return h
828
+
829
+
830
+ class DeferredLine(DeferredLineBase):
831
+ """A line that can be 'unwritten' by adding name to V.graph.removed_buffers"""
832
+
833
+ def __init__(self, name, line):
834
+ super().__init__(line)
835
+ self.name = name
836
+ assert not isinstance(line, DeferredLineBase)
837
+
838
+ def __call__(self):
839
+ if all(
840
+ self.name not in x
841
+ for x in (
842
+ V.graph.removed_buffers,
843
+ V.kernel.removed_buffers,
844
+ V.graph.inplaced_to_remove,
845
+ V.kernel.inplaced_to_remove,
846
+ )
847
+ ):
848
+ return self.line
849
+ return None
850
+
851
+ def _new_line(self, line):
852
+ return DeferredLine(self.name, line)
853
+
854
+
855
+ class BracesBuffer(IndentedBuffer):
856
+ def indent(self, offset=1):
857
+ @contextlib.contextmanager
858
+ def ctx():
859
+ for _ in range(offset):
860
+ self.writeline("{")
861
+ self._indent += 1
862
+ for _ in range(-offset):
863
+ self._indent -= 1
864
+ self.writeline("}")
865
+ yield
866
+ for _ in range(-offset):
867
+ self.writeline("{")
868
+ self._indent += 1
869
+ for _ in range(offset):
870
+ self._indent -= 1
871
+ self.writeline("}")
872
+
873
+ return ctx()
874
+
875
+
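
The indent() context manager above brackets whatever is written inside the `with` block between `{` and `}` at the right depth (negative offsets emit the closing braces first). A minimal, hypothetical stand-in, sketching only the positive-offset path:

# Hypothetical, self-contained illustration of the BracesBuffer.indent() idea:
# entering the context emits "{" and increases indentation; exiting restores
# the indent and emits "}".
import contextlib

class TinyBracesBuffer:
    def __init__(self):
        self._indent = 0
        self.lines = []

    def writeline(self, line):
        self.lines.append("    " * self._indent + line)

    def indent(self, offset=1):
        @contextlib.contextmanager
        def ctx():
            for _ in range(offset):
                self.writeline("{")
                self._indent += 1
            yield
            for _ in range(offset):
                self._indent -= 1
                self.writeline("}")
        return ctx()

buf = TinyBracesBuffer()
buf.writeline("void kernel()")
with buf.indent():
    buf.writeline("int x = 0;")
print("\n".join(buf.lines))
# void kernel()
# {
#     int x = 0;
# }
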
876
+ class InplacedBuffer(NamedTuple):
877
+ inner_name: str
878
+ other_names: List[str]
879
+
880
+
881
+ class KernelArgs:
882
+ @staticmethod
883
+ def _lookup(prefix, odict, name):
884
+ assert isinstance(name, (str, sympy.Symbol))
885
+ if name not in odict:
886
+ odict[name] = f"{prefix}{len(odict)}"
887
+ return odict[name]
888
+
889
+ def __init__(self, sizevars=None):
890
+ self.input_buffers = dict()
891
+ self.output_buffers = dict()
892
+ self.inplace_buffers = dict()
893
+ self.sizevars = sizevars or dict()
894
+ self.workspace_arg = None
895
+
896
+ def __repr__(self):
897
+ return "KernelArgs({})".format(
898
+ ", ".join(
899
+ map(
900
+ repr,
901
+ [
902
+ self.input_buffers,
903
+ self.output_buffers,
904
+ self.inplace_buffers,
905
+ self.sizevars,
906
+ ],
907
+ )
908
+ )
909
+ )
910
+
911
+ def _buffer_is_marked_removed(self, name):
912
+ return isinstance(name, str) and name.startswith("REMOVED")
913
+
914
+ def input(self, name):
915
+ if V.graph.scheduler:
916
+ name = V.graph.scheduler.mutation_real_name.get(name, name)
917
+ assert name not in V.graph.removed_buffers, name
918
+ if name in self.output_buffers:
919
+ return self.output_buffers[name]
920
+ if name in self.inplace_buffers:
921
+ return self.inplace_buffers[name].inner_name
922
+ if name.startswith("seed"):
923
+ return self._lookup("seed", self.input_buffers, name)
924
+ return self._lookup("in_ptr", self.input_buffers, name)
925
+
926
+ def output(self, name):
927
+ if V.graph.scheduler:
928
+ name = V.graph.scheduler.mutation_real_name.get(name, name)
929
+ assert name not in V.graph.removed_buffers, name
930
+ if name in self.inplace_buffers:
931
+ return self.inplace_buffers[name].inner_name
932
+ return self._lookup("out_ptr", self.output_buffers, name)
933
+
934
+ def make_inplace(self, input_name, output_name):
935
+ assert output_name not in self.inplace_buffers
936
+ if input_name in self.inplace_buffers:
937
+ buf = self.inplace_buffers[input_name]
938
+ buf.other_names.append(output_name)
939
+ self.inplace_buffers[output_name] = buf
940
+ else:
941
+ buf = InplacedBuffer(
942
+ f"in_out_ptr{len(unique(self.inplace_buffers.values()))}",
943
+ [input_name, output_name],
944
+ )
945
+ self.inplace_buffers[input_name] = buf
946
+ self.inplace_buffers[output_name] = buf
947
+
948
+ def workspace(self, nbytes: sympy.Expr, zero_fill: bool):
949
+ if self.workspace_arg is None:
950
+ self.workspace_arg = WorkspaceArg(nbytes, zero_fill)
951
+ return "ws_ptr", 0
952
+
953
+ offset = self.workspace_arg.nbytes
954
+ zero_fill = zero_fill or self.workspace_arg.zero_fill
955
+ self.workspace_arg = WorkspaceArg(offset + nbytes, zero_fill)
956
+ return "ws_ptr", offset
957
+
958
+ def seed_offset(self, name, value):
959
+ if value in self.sizevars:
960
+ return self.sizevars[value]
961
+ if name in self.sizevars.values():
962
+ name = (
963
+ f"{name}{sum(1 for v in self.sizevars.values() if v.startswith(name))}"
964
+ )
965
+ self.sizevars[value] = name
966
+ return name
967
+
968
+ def size(self, name):
969
+ if str(name) == "seed":
970
+ self.sizevars["seed"] = "seed"
971
+ return "seed"
972
+ return self._lookup("ks", self.sizevars, name)
973
+
974
+ def call_names(self):
975
+ return chain(
976
+ self.input_buffers.keys(), self.output_buffers.keys(), self.sizevars.keys()
977
+ )
978
+
979
+ def wrap_ptr_arg(self, buf, dtype):
980
+ return buf
981
+
982
+ def wrap_size_arg(self, size):
983
+ return str(size)
984
+
985
+ def cpp_argdefs(self):
986
+ from .cpp import DTYPE_TO_CPP, INDEX_TYPE
987
+
988
+ call_args = []
989
+ arg_defs = []
990
+ arg_types = []
991
+ for inplaced in unique(self.inplace_buffers.values()):
992
+ if self._buffer_is_marked_removed(inplaced):
993
+ continue
994
+ outer = inplaced.other_names[-1]
995
+ inner = inplaced.inner_name
996
+ dtype = V.graph.get_dtype(outer)
997
+ cpp_dtype = DTYPE_TO_CPP[dtype]
998
+ arg_defs.append(f"{cpp_dtype}* {inner}")
999
+ call_args.append(self.wrap_ptr_arg(outer, dtype))
1000
+ arg_types.append(f"{cpp_dtype}*")
1001
+ for outer, inner in self.input_buffers.items():
1002
+ if outer in self.inplace_buffers:
1003
+ continue
1004
+ dtype = V.graph.get_dtype(outer)
1005
+ cpp_dtype = DTYPE_TO_CPP[dtype]
1006
+ arg_defs.append(f"const {cpp_dtype}* {inner}")
1007
+ call_args.append(self.wrap_ptr_arg(outer, dtype))
1008
+ arg_types.append(f"const {cpp_dtype}*")
1009
+ for outer, inner in self.output_buffers.items():
1010
+ if outer in self.inplace_buffers or self._buffer_is_marked_removed(inner):
1011
+ continue
1012
+ dtype = V.graph.get_dtype(outer)
1013
+ cpp_dtype = DTYPE_TO_CPP[dtype]
1014
+ arg_defs.append(f"{cpp_dtype}* {inner}")
1015
+ call_args.append(self.wrap_ptr_arg(outer, dtype))
1016
+ arg_types.append(f"{cpp_dtype}*")
1017
+ for outer, inner in self.sizevars.items():
1018
+ arg_defs.append(f"const {INDEX_TYPE} {inner}")
1019
+ call_args.append(self.wrap_size_arg(outer))
1020
+ arg_types.append(f"const {INDEX_TYPE}")
1021
+ if V.graph.wrapper_code:
1022
+ V.graph.wrapper_code.ensure_size_computed(outer)
1023
+ assert self.workspace_arg is None, "Workspace not supported on CPU "
1024
+ return arg_defs, call_args, arg_types
1025
+
1026
+ def python_argdefs(self):
1027
+ arg_defs = []
1028
+ call_args = []
1029
+ precompile_args: List[Union[TensorArg, SizeArg, WorkspaceArg]] = []
1030
+ for inplaced in unique(self.inplace_buffers.values()):
1031
+ if self._buffer_is_marked_removed(inplaced):
1032
+ continue
1033
+ arg_defs.append(inplaced.inner_name)
1034
+ call_args.append(inplaced.other_names[-1])
1035
+ precompile_args.append(
1036
+ TensorArg(
1037
+ name=inplaced.inner_name,
1038
+ buffer=inplaced.other_names[-1],
1039
+ dtype=V.graph.get_dtype(inplaced.other_names[-1]),
1040
+ )
1041
+ )
1042
+ for outer, inner in chain(
1043
+ self.input_buffers.items(), self.output_buffers.items()
1044
+ ):
1045
+ if outer in self.inplace_buffers or self._buffer_is_marked_removed(inner):
1046
+ continue
1047
+ arg_defs.append(inner)
1048
+ call_args.append(outer)
1049
+ precompile_args.append(
1050
+ TensorArg(
1051
+ name=inner,
1052
+ buffer=outer,
1053
+ dtype=V.graph.get_dtype(outer),
1054
+ )
1055
+ )
1056
+ for outer, inner in self.sizevars.items():
1057
+ arg_defs.append(inner)
1058
+ call_args.append(outer)
1059
+ precompile_args.append(SizeArg(inner, outer))
1060
+ if V.graph.wrapper_code:
1061
+ V.graph.wrapper_code.ensure_size_computed(outer)
1062
+ if self.workspace_arg is not None:
1063
+ arg_defs.append("ws_ptr")
1064
+ call_args.append("workspace")
1065
+ precompile_args.append(self.workspace_arg)
1066
+
1067
+ return arg_defs, call_args, precompile_args
1068
+
1069
+ def aliases(self):
1070
+ for inplaced in unique(self.inplace_buffers.values()):
1071
+ if self._buffer_is_marked_removed(inplaced):
1072
+ continue
1073
+ for other in inplaced.other_names:
1074
+ if (
1075
+ other in V.graph.inplaced_to_remove
1076
+ or other in V.kernel.inplaced_to_remove
1077
+ ):
1078
+ continue
1079
+ if other in self.input_buffers:
1080
+ yield self.input_buffers[other], inplaced.inner_name
1081
+ if other in self.output_buffers:
1082
+ yield self.output_buffers[other], inplaced.inner_name
1083
+
1084
+ def is_removed(self, name):
1085
+ def _is_removed(name, buffers):
1086
+ return name not in buffers or self._buffer_is_marked_removed(buffers[name])
1087
+
1088
+ return _is_removed(name, self.output_buffers) and _is_removed(
1089
+ name, self.inplace_buffers
1090
+ )
1091
+
1092
+ # Includes inplace buffers, excludes removed buffers. Essentially,
1093
+ # after you do a call into this kernel, which buffers actually contain
1094
+ # updated data? Modeled off of python_argdefs.
1095
+ def live_output_buffers(self):
1096
+ live_outs = set()
1097
+ for inplaced in unique(self.inplace_buffers.values()):
1098
+ if self._buffer_is_marked_removed(inplaced):
1099
+ continue
1100
+ live_outs.add(inplaced.other_names[-1])
1101
+ for outer, inner in self.output_buffers.items():
1102
+ if outer in self.inplace_buffers or self._buffer_is_marked_removed(inner):
1103
+ continue
1104
+ live_outs.add(outer)
1105
+ return live_outs
1106
+
1107
+
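
KernelArgs assigns each distinct outer buffer a stable positional inner name (in_ptr0, out_ptr0, ks0, ...) through the `_lookup` helper above. A small sketch of just that naming scheme, with hypothetical buffer names and no Inductor imports:

# Hypothetical mini version of the KernelArgs._lookup naming scheme:
# the first time a buffer is seen it gets the next positional name,
# and repeat lookups return the same name.
def lookup(prefix, table, name):
    if name not in table:
        table[name] = f"{prefix}{len(table)}"
    return table[name]

inputs, outputs, sizes = {}, {}, {}
print(lookup("in_ptr", inputs, "buf0"))    # in_ptr0
print(lookup("in_ptr", inputs, "arg1_1"))  # in_ptr1
print(lookup("in_ptr", inputs, "buf0"))    # in_ptr0 (stable on repeat lookups)
print(lookup("out_ptr", outputs, "buf2"))  # out_ptr0
print(lookup("ks", sizes, "s0"))           # ks0
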
1108
+ class CSEVariable:
1109
+ """A CSEVariable is just a name for an expression but it is useful to be able to annotate them on a backend dependent basis.
1110
+ To do so, the backends can simply overload `Kernel.create_cse_var`
1111
+ The "CSEVariable.update_on_args" method gives you a hook for annotations
1112
+ See example of TritonCSEVariable in triton.py
1113
+ """
1114
+
1115
+ def __init__(self, name, bounds: ValueRanges[Any]):
1116
+ assert isinstance(bounds, ValueRanges)
1117
+ self.name = name
1118
+ self.bounds = bounds
1119
+
1120
+ def __str__(self):
1121
+ return self.name
1122
+
1123
+ def __hash__(self) -> int:
1124
+ return hash(self.name)
1125
+
1126
+ def __eq__(self, other) -> bool:
1127
+ return type(other) == type(self) and other.name == self.name
1128
+
1129
+ def update_on_args(self, name, args, kwargs):
1130
+ pass
1131
+
1132
+
1133
+ class CppWrapperKernelArgs(KernelArgs):
1134
+ def wrap_ptr_arg(self, buf, dtype):
1135
+ from .cpp import DTYPE_TO_CPP
1136
+
1137
+ if config.abi_compatible:
1138
+ # In the abi_compatible model, we just return the buf here.
1139
+ # We will form correct call args later in wrapper.generate_kernel_all.
1140
+ return buf
1141
+ else:
1142
+ return f"({DTYPE_TO_CPP[dtype]}*)({buf}.data_ptr())"
1143
+
1144
+ def wrap_size_arg(self, size):
1145
+ return f"{size}"
1146
+
1147
+
1148
+ class CSE:
1149
+ """Common subexpression elimination"""
1150
+
1151
+ def __init__(
1152
+ self,
1153
+ prefix="",
1154
+ suffix="",
1155
+ name_prefix="tmp",
1156
+ iter_buffers=None,
1157
+ store_cache=None,
1158
+ reduction_cache=None,
1159
+ varname_map=None,
1160
+ ):
1161
+ self.prefix = prefix
1162
+ self.suffix = suffix
1163
+ self.cache = {}
1164
+ self.name_prefix = name_prefix
1165
+ self.store_cache = store_cache or {}
1166
+ self.reduction_cache = reduction_cache or {}
1167
+ self.iter_buffer_ids = iter_buffers or itertools.count()
1168
+ self.invalidated_stores = set()
1169
+ self.varname_map = varname_map or {}
1170
+
1171
+ def invalidate(self, keep_vars: Set[str]):
1172
+ for name, tmp in list(self.store_cache.items()):
1173
+ if tmp not in keep_vars:
1174
+ del self.store_cache[name]
1175
+ self.invalidated_stores.add(name)
1176
+ self.cache = {k: v for k, v in self.cache.items() if v in keep_vars}
1177
+
1178
+ def clone(self):
1179
+ # Note(fdrocha): reduction_cache is not being cloned, not sure if this is intentional
1180
+ return CSE(
1181
+ prefix=self.prefix,
1182
+ suffix=self.suffix,
1183
+ name_prefix=self.name_prefix,
1184
+ iter_buffers=self.iter_buffer_ids,
1185
+ store_cache=self.store_cache,
1186
+ varname_map=self.varname_map,
1187
+ )
1188
+
1189
+ def generate(
1190
+ self,
1191
+ buffer: IndentedBuffer,
1192
+ expr: Union[str, CSEVariable, OpsValue, IndentedBuffer],
1193
+ *,
1194
+ bounds: ValueRanges[Any] = ValueRanges.unknown(),
1195
+ write=True,
1196
+ assignment=True,
1197
+ ) -> CSEVariable:
1198
+ if isinstance(expr, OpsValue):
1199
+ expr = expr.value
1200
+
1201
+ assert isinstance(expr, (str, CSEVariable, IndentedBuffer)), type(expr)
1202
+ assert write or assignment
1203
+ if isinstance(expr, CSEVariable):
1204
+ # If the expressions were always created with all the information, we could
1205
+ # assert expr.bounds == bounds, but sometimes the expression is created
1206
+ # with the loose ValueRanges.unknown(), so we need to tighten the bounds
1207
+ expr.bounds = expr.bounds.tighten(bounds)
1208
+ return expr
1209
+ cache_key = expr.getvalue() if isinstance(expr, IndentedBuffer) else expr
1210
+ var = self.cache.get(cache_key, None)
1211
+ if not var:
1212
+ var = self.newvar(bounds) if assignment else None
1213
+ self.cache[cache_key] = var
1214
+ if write:
1215
+ if V.kernel.current_node:
1216
+ V.kernel.current_node.codegen_originating_info(
1217
+ buffer, only_once=True
1218
+ )
1219
+ if isinstance(expr, IndentedBuffer):
1220
+ if assignment:
1221
+ buffer.writeline(f"{self.prefix}{var} =")
1222
+ buffer.splice(expr)
1223
+ buffer.writeline(self.suffix)
1224
+ else:
1225
+ if assignment:
1226
+ line = f"{self.prefix}{var} = {expr}{self.suffix}"
1227
+ else:
1228
+ line = f"{expr}{self.suffix}"
1229
+ buffer.writeline(line)
1230
+ else:
1231
+ var.bounds = var.bounds.tighten(bounds)
1232
+
1233
+ return var
1234
+
1235
+ def newvar(self, bounds: ValueRanges[Any] = ValueRanges.unknown()) -> CSEVariable:
1236
+ var_name = f"{self.name_prefix}{next(self.iter_buffer_ids)}"
1237
+ var = V.kernel.create_cse_var(var_name, bounds)
1238
+ self.varname_map[var_name] = var
1239
+ return var
1240
+
1241
+
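
CSE.generate keys its cache on the textual expression, so an identical expression requested twice yields the same temporary and is emitted only once. A stripped-down sketch of that caching idea (the buffer, bounds and Inductor plumbing are omitted; names here are hypothetical):

# Hypothetical illustration of common-subexpression caching:
# identical expression strings reuse the same temporary name.
import itertools

class TinyCSE:
    def __init__(self):
        self.cache = {}
        self.counter = itertools.count()
        self.lines = []

    def generate(self, expr: str) -> str:
        if expr not in self.cache:
            var = f"tmp{next(self.counter)}"
            self.cache[expr] = var
            self.lines.append(f"{var} = {expr}")
        return self.cache[expr]

cse = TinyCSE()
a = cse.generate("tl.load(in_ptr0 + x0)")
b = cse.generate("tl.load(in_ptr0 + x0)")  # cache hit, no new line emitted
assert a == b == "tmp0"
print(cse.lines)  # ['tmp0 = tl.load(in_ptr0 + x0)']
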
1242
+ class IndirectAssertLine(DeferredLineBase):
1243
+ def __init__(self, line, assert_fn, var, mask, size_map):
1244
+ self.var = var
1245
+ self.mask = mask
1246
+ self.line = line
1247
+ self.assert_fn = assert_fn
1248
+ self.size_map = size_map
1249
+
1250
+ def __call__(self):
1251
+ size, size_str = self.size_map[(self.var, self.mask)]
1252
+
1253
+ # We assert if we've not been able to prove the bound
1254
+ assert_min = (self.var.bounds.lower >= 0) != sympy.true
1255
+ assert_max = (self.var.bounds.upper < size) != sympy.true
1256
+
1257
+ # FooBar interview question
1258
+ if not (assert_min or assert_max):
1259
+ return None
1260
+ elif assert_min and assert_max:
1261
+ # The conditions need to be in parens because of Python's operator precedence.
1262
+ # It'd be less error-prone to use and/or/not, which is supported by triton
1263
+ cond = f"(0 <= {self.var}) & ({self.var} < {size_str})"
1264
+ cond_print = f"0 <= {self.var} < {size_str}"
1265
+ elif assert_min:
1266
+ cond = f"0 <= {self.var}"
1267
+ cond_print = cond
1268
+ else:
1269
+ assert assert_max
1270
+ cond = f"{self.var} < {size_str}"
1271
+ cond_print = cond
1272
+
1273
+ if self.mask:
1274
+ cond = f"({cond}) | ~{self.mask}"
1275
+ return self.line.format(
1276
+ assert_fn=self.assert_fn, cond=cond, cond_print=cond_print
1277
+ )
1278
+
1279
+ def _new_line(self, line):
1280
+ return IndirectAssertLine(
1281
+ line, self.assert_fn, self.var, self.mask, self.size_map
1282
+ )
1283
+
1284
+
1285
+ class CodeGen:
1286
+ def __init__(self):
1287
+ super().__init__()
1288
+ self.exit_stack = contextlib.ExitStack()
1289
+
1290
+ def __enter__(self):
1291
+ self.exit_stack.__enter__()
1292
+ return self
1293
+
1294
+ def __exit__(self, exc_type, exc_val, exc_tb):
1295
+ self.exit_stack.__exit__(exc_type, exc_val, exc_tb)
1296
+
1297
+
1298
+ class Kernel(CodeGen):
1299
+ newvar_prefix = ""
1300
+ suffix = ""
1301
+ overrides: Optional[Callable[[OpsHandler[Any]], OpsHandler[Any]]] = None
1302
+ # TODO: these look dead, but with all the getattr it's hard to tell...
1303
+ load_format: None = None
1304
+ store_format: None = None
1305
+
1306
+ def __init__(self, args=None, increase_kernel_count=True):
1307
+ super().__init__()
1308
+ if increase_kernel_count:
1309
+ metrics.generated_kernel_count += 1
1310
+ self.args = args or KernelArgs()
1311
+ self.loads = IndentedBuffer()
1312
+ self.compute = IndentedBuffer()
1313
+ self.stores = IndentedBuffer()
1314
+ self.cse: CSE = CSE(self.newvar_prefix, self.suffix)
1315
+ self.must_keep_buffers = set()
1316
+ self.store_buffer_names = set()
1317
+ self._load_mask = None
1318
+ # set in set_current_node
1319
+ self.current_node = None
1320
+ self.node_to_bounds: Optional[Dict[torch.fx.Node, ValueRanges[Any]]] = None
1321
+ # Upper bounds for indirect_indexing and their str representation
1322
+ # NB: None, None is never stored in map, but it is the assumed
1323
+ # "not set" value for the dict
1324
+ self.indirect_max_sizes: Dict[
1325
+ Tuple[CSEVariable, str], Union[Tuple[sympy.Expr, str], Tuple[None, None]]
1326
+ ] = {}
1327
+
1328
+ self.removed_buffers = set()
1329
+ self.inplaced_to_remove = set()
1330
+
1331
+ # key: the buffer to write
1332
+ # value: the buffer to read and whose memory can be reused for
1333
+ # the buffer specified by key
1334
+ self.inplace_update_buffers = dict()
1335
+ # Set minimum number of elements processed per thread.
1336
+ self.min_elem_per_thread = 1
1337
+ self.kernel_name = None
1338
+
1339
+ @contextlib.contextmanager
1340
+ def set_current_node(self, node):
1341
+ prior = self.current_node
1342
+ self.current_node = node
1343
+ self.node_to_bounds = node._body.bounds().get_bounds()
1344
+ try:
1345
+ yield
1346
+ finally:
1347
+ self.current_node = prior
1348
+
1349
+ @contextlib.contextmanager
1350
+ def swap_buffers(self, lb, cb=None, sb=None):
1351
+ if cb is None:
1352
+ cb = lb
1353
+ loads = self.loads
1354
+ compute = self.compute
1355
+ stores = self.stores
1356
+ cse = self.cse
1357
+ self.loads = lb
1358
+ self.compute = cb
1359
+ self.stores = sb
1360
+ self.cse = cse.clone()
1361
+ try:
1362
+ yield
1363
+ finally:
1364
+ self.loads = loads
1365
+ self.compute = compute
1366
+ self.stores = stores
1367
+ self.cse = cse
1368
+
1369
+ def load(self, name: str, index: sympy.Expr) -> CSEVariable:
1370
+ raise NotImplementedError()
1371
+
1372
+ def indirect_load(self, name: str, index: sympy.Expr):
1373
+ """A load the depends on an index we have read"""
1374
+ prior = self.loads
1375
+ try:
1376
+ # put the load in the compute section as it might have deps
1377
+ self.loads = self.compute
1378
+ return self.load(name, index)
1379
+ finally:
1380
+ self.loads = prior
1381
+
1382
+ def store_reduction(self, name: str, index: sympy.Expr, value: CSEVariable):
1383
+ raise NotImplementedError()
1384
+
1385
+ def store(
1386
+ self, name: str, index: sympy.Expr, value: CSEVariable, mode: StoreMode = None
1387
+ ) -> None:
1388
+ raise NotImplementedError()
1389
+
1390
+ def reduction(
1391
+ self,
1392
+ dtype: torch.dtype,
1393
+ src_dtype: torch.dtype,
1394
+ reduction_type: ReductionType,
1395
+ value: Union[CSEVariable, Tuple[CSEVariable, ...]],
1396
+ ) -> Union[CSEVariable, Tuple[CSEVariable, ...]]:
1397
+ raise NotImplementedError()
1398
+
1399
+ def scan(
1400
+ self,
1401
+ dtype: torch.dtype,
1402
+ combine_fn: Callable[[CSEVariable, CSEVariable], CSEVariable],
1403
+ value: CSEVariable,
1404
+ init: int,
1405
+ ) -> CSEVariable:
1406
+ raise NotImplementedError()
1407
+
1408
+ def bucketize(
1409
+ self,
1410
+ values: CSEVariable,
1411
+ offsets_name: str,
1412
+ offsets_size: sympy.Expr,
1413
+ indexing_dtype: torch.dtype,
1414
+ right: bool,
1415
+ ) -> CSEVariable:
1416
+ """
1417
+ See [Note: Inductor bucketize op]
1418
+ """
1419
+ raise NotImplementedError()
1420
+
1421
+ @property
1422
+ def assert_function(self) -> str:
1423
+ raise NotImplementedError()
1424
+
1425
+ def index_to_str(self, index: sympy.Expr) -> str:
1426
+ raise NotImplementedError()
1427
+
1428
+ def __enter__(self):
1429
+ # TODO: hoist this to top level
1430
+ class CSEProxy:
1431
+ self.name = "CSEProxy"
1432
+
1433
+ @staticmethod
1434
+ def __getattr__(name: str) -> Callable[..., CSEVariable]: # type: ignore[misc]
1435
+ def inner(*args, **kwargs):
1436
+ # TritonTemplateKernel has no current_node
1437
+ buf_bounds = ValueRanges.unknown()
1438
+ if hasattr(V.interpreter, "current_node"):
1439
+ fx_node = V.interpreter.current_node
1440
+ assert isinstance(self.node_to_bounds, dict)
1441
+ buf_bounds = self.node_to_bounds.get(
1442
+ fx_node, ValueRanges.unknown()
1443
+ )
1444
+
1445
+ value = getattr(parent_handler, name)(*args, **kwargs) # type: ignore[has-type]
1446
+
1447
+ def do_cse(v):
1448
+ csevar = self.cse.generate(self.compute, v, bounds=buf_bounds)
1449
+ csevar.update_on_args(name, args, kwargs)
1450
+ return csevar
1451
+
1452
+ return pytree.tree_map(do_cse, value)
1453
+
1454
+ return inner
1455
+
1456
+ @staticmethod
1457
+ def indirect_indexing(
1458
+ var: CSEVariable, size: sympy.Expr, check: bool = True
1459
+ ):
1460
+ # Skip CSE since this doesn't return an expression
1461
+
1462
+ if var.bounds.lower < 0: # type: ignore[operator]
1463
+ new_bounds = ValueRanges.unknown()
1464
+ if var.bounds != ValueRanges.unknown() and isinstance(
1465
+ size, sympy.Number
1466
+ ):
1467
+ # Take the negative part of the bound and add size to it
1468
+ # Then take union of that and the positive part
1469
+ # This is a tighter bound than that of a generic ops.where, as we have info on the cond
1470
+ neg = var.bounds & ValueRanges(-sympy.oo, -1)
1471
+ new_bounds = ValueRanges(neg.lower + size, neg.upper + size)
1472
+ # We don't have a good way of representing the empty range
1473
+ if var.bounds.upper >= 0: # type: ignore[operator]
1474
+ pos = var.bounds & ValueRanges(0, sympy.oo)
1475
+ new_bounds = new_bounds | pos
1476
+
1477
+ stm = ops.add(var, self.rename_indexing(size))
1478
+ # Mixed negative and non-negative
1479
+ if var.bounds.upper >= 0: # type: ignore[operator]
1480
+ lt = ops.lt(var, "0")
1481
+ stm = ops.where(lt, stm, var)
1482
+ new_var = self.cse.generate(self.compute, stm, bounds=new_bounds)
1483
+
1484
+ new_var.update_on_args("index_wrap", (var,), {})
1485
+ var = new_var
1486
+
1487
+ if self.generate_assert(check):
1488
+ mask = self.load_mask(var)
1489
+
1490
+ # An assertion line may have been written already, if so just
1491
+ # update the max size.
1492
+ map_key = (var, mask)
1493
+ existing_size, _ = self.indirect_max_sizes.get(
1494
+ map_key, (None, None)
1495
+ )
1496
+ if existing_size is not None:
1497
+ size = sympy.Min(size, existing_size)
1498
+ else:
1499
+ line = (
1500
+ '{assert_fn}({cond}, "index out of bounds: {cond_print}")'
1501
+ )
1502
+ self.compute.writeline(
1503
+ IndirectAssertLine(
1504
+ line,
1505
+ self.assert_function,
1506
+ var,
1507
+ mask,
1508
+ self.indirect_max_sizes,
1509
+ )
1510
+ )
1511
+
1512
+ self.indirect_max_sizes[map_key] = (size, self.index_to_str(size))
1513
+ return sympy_index_symbol(str(var))
1514
+
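
Stripped of sympy and the deferred assert machinery, the wrapping emitted by `indirect_indexing` above amounts to shifting negative indices by `size` and then (optionally) checking that the result lies in `[0, size)`. A scalar Python sketch, purely illustrative:

# Hypothetical scalar version of the index wrapping done by indirect_indexing:
# negative indices are shifted by `size`, non-negative ones pass through,
# and the (optional) assert then checks the wrapped index is in [0, size).
def wrap_index(idx: int, size: int, check: bool = True) -> int:
    wrapped = idx + size if idx < 0 else idx
    if check:
        assert 0 <= wrapped < size, f"index out of bounds: 0 <= {wrapped} < {size}"
    return wrapped

print(wrap_index(-1, 8))  # 7
print(wrap_index(3, 8))   # 3
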
1515
+ @staticmethod
1516
+ def load(name: str, index: sympy.Expr) -> CSEVariable:
1517
+ if name in self.cse.invalidated_stores:
1518
+ # A load from an invalidated store requires us to
1519
+ # keep the actual buffer around
1520
+ V.kernel.must_keep_buffers.add(name)
1521
+ if free_symbol_startswith(index, "tmp"):
1522
+ return self.indirect_load(name, index)
1523
+ store_cache = self.cse.store_cache
1524
+ if name in store_cache:
1525
+ return store_cache[name]
1526
+ return self.load(name, index)
1527
+
1528
+ @staticmethod
1529
+ def store(
1530
+ name: str, index: sympy.Expr, value: CSEVariable, mode: StoreMode = None
1531
+ ) -> None:
1532
+ self.store_buffer_names.add(name)
1533
+ if mode is None:
1534
+ self.cse.store_cache[name] = value
1535
+ if self.current_node:
1536
+ for other_name in self.current_node.get_mutations():
1537
+ self.cse.store_cache[other_name] = value
1538
+ if name not in V.graph.removed_buffers:
1539
+ return self.store(name, index, value, mode=mode)
1540
+ else:
1541
+ return None # type: ignore[return-value]
1542
+
1543
+ @staticmethod
1544
+ def store_reduction(name: str, index: sympy.Expr, value: CSEVariable):
1545
+ self.store_buffer_names.add(name)
1546
+ self.cse.store_cache[name] = value
1547
+ if self.current_node:
1548
+ for other_name in self.current_node.get_mutations():
1549
+ self.cse.store_cache[other_name] = value
1550
+
1551
+ if name not in V.graph.removed_buffers:
1552
+ return self.store_reduction(name, index, value)
1553
+
1554
+ @staticmethod
1555
+ def reduction(
1556
+ dtype: torch.dtype,
1557
+ src_dtype: torch.dtype,
1558
+ reduction_type: ReductionType,
1559
+ value: Union[CSEVariable, Tuple[CSEVariable, ...]],
1560
+ ) -> Union[CSEVariable, Tuple[CSEVariable, ...]]:
1561
+ return self.reduction(dtype, src_dtype, reduction_type, value)
1562
+
1563
+ @staticmethod
1564
+ def scan(
1565
+ dtype: torch.dtype,
1566
+ combine_fn: Callable[[CSEVariable, CSEVariable], CSEVariable],
1567
+ value: CSEVariable,
1568
+ init: int,
1569
+ ) -> CSEVariable:
1570
+ return self.scan(dtype, combine_fn, value, init)
1571
+
1572
+ @staticmethod
1573
+ def bucketize(
1574
+ values: CSEVariable,
1575
+ offsets_name: str,
1576
+ offsets_size: sympy.Expr,
1577
+ indexing_dtype: torch.dtype,
1578
+ right: bool,
1579
+ ) -> CSEVariable:
1580
+ """
1581
+ [Note: Inductor bucketize op]
1582
+
1583
+ Given values (tensor) and offsets_name (reference to the name of a 1D
1584
+ tensor), calculate the bucket that each value belongs to.
1585
+
1586
+ e.g. for values [-1, 0, 1, 2, 3, 4, 5, 9], offsets [0, 4, 4, 8], right=True
1587
+ return = [ 0, 1, 1, 1, 1, 3, 3, 4].
1588
+
1589
+ When right == False, bucket i refers to range (offsets[i], offsets[i+1]].
1590
+ When right == True, bucket i refers to range [offsets[i], offsets[i+1]).
1591
+
1592
+ Offsets must be non-decreasing or the result is undefined.
1593
+ """
1594
+ return self.bucketize(
1595
+ values, offsets_name, offsets_size, indexing_dtype, right
1596
+ )
1597
+
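
As a sanity check of the semantics described in the note, a pure-Python linear scan (a hedged reference only, not the generated kernel) reproduces the worked example for `right=True`:

# Hypothetical reference implementation of the bucketize semantics in the
# note above (linear scan; offsets must be non-decreasing).
def bucketize_ref(values, offsets, right):
    out = []
    for v in values:
        i = 0
        while i < len(offsets) and (offsets[i] <= v if right else offsets[i] < v):
            i += 1
        out.append(i)
    return out

print(bucketize_ref([-1, 0, 1, 2, 3, 4, 5, 9], [0, 4, 4, 8], right=True))
# [0, 1, 1, 1, 1, 3, 3, 4]
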
1598
+ # Use mypy to check protocol implemented correctly
1599
+ def _typecheck_CSEProxy(h: CSEProxy) -> OpsHandler[CSEVariable]:
1600
+ return h
1601
+
1602
+ super().__enter__()
1603
+ assert self.overrides
1604
+ parent_handler = self.overrides(V.get_ops_handler())
1605
+ self.exit_stack.enter_context(V.set_ops_handler(CSEProxy()))
1606
+ self.exit_stack.enter_context(V.set_kernel_handler(self))
1607
+ return self
1608
+
1609
+ def __exit__(self, exc_type, exc_val, exc_tb):
1610
+ """
1611
+ Note that V.graph.scheduler can be None when codegening triton template
1612
+ kernels.
1613
+ """
1614
+ if V.graph.scheduler:
1615
+ V.graph.scheduler.remove_kernel_local_buffers()
1616
+ super().__exit__(exc_type, exc_val, exc_tb)
1617
+
1618
+ def generate_assert(self, check):
1619
+ return (check or config.debug_index_asserts) and config.assert_indirect_indexing
1620
+
1621
+ def load_mask(self, var) -> str:
1622
+ # only the triton kernel requires mask
1623
+ return ""
1624
+
1625
+ def rename_indexing(self, index) -> sympy.Expr:
1626
+ # adds the necessary kernel args for index expressions
1627
+ # and renames variables in index expressions to kernel arg names
1628
+ if isinstance(index, (list, tuple)):
1629
+ return [self.rename_indexing(x) for x in index] # type: ignore[return-value]
1630
+ index = V.graph.sizevars.simplify(index)
1631
+ sorted_symbols = sorted(index.free_symbols, key=lambda s: s.name)
1632
+ replacements = {
1633
+ x: self.args.size(x)
1634
+ for x in sorted_symbols
1635
+ if x.name.startswith(("s", "u", "ps"))
1636
+ or (x.name.startswith("i") and not x.name.startswith("idx"))
1637
+ }
1638
+ return sympy_subs(index, replacements)
1639
+
1640
+ def create_cse_var(self, *args, **kwargs):
1641
+ return CSEVariable(*args, **kwargs)
1642
+
1643
+
1644
+ @dataclasses.dataclass
1645
+ class OptimizationContext:
1646
+ key: ClassVar[str] = "opt_ctx"
1647
+
1648
+ # Load value as mask
1649
+ is_load_as_mask: bool = False
1650
+
1651
+ dtype: Optional[torch.dtype] = None
1652
+ ops_name: str = ""
1653
+
1654
+ # Load uint8/int8 value as float32
1655
+ is_load_int8_as_float: bool = False
1656
+
1657
+
1658
+ @functools.lru_cache(None)
1659
+ def jinja2_env():
1660
+ try:
1661
+ import jinja2
1662
+
1663
+ return jinja2.Environment(
1664
+ undefined=jinja2.StrictUndefined,
1665
+ )
1666
+ except ImportError:
1667
+ return None
1668
+
1669
+
1670
+ PrimitiveInfoType = Union[int, float, bool, str, List[Union[int, str, float, bool]]]
1671
+
1672
+
1673
+ class ChoiceCaller:
1674
+ """
1675
+ Represents a possible choice used in autotune_process.py.
1676
+ During autotuning, self.benchmark() is first called to get benchmark result,
1677
+ and if this choice is selected, self.output_node() is called to get the output_node.
1678
+
1679
+ Children classes: TritonTemplateCaller, CUDATemplateCaller.
1680
+ """
1681
+
1682
+ def __init__(self, name, input_nodes, layout):
1683
+ super().__init__()
1684
+ self.name = name
1685
+ self.layout = layout
1686
+ self.input_nodes = input_nodes
1687
+
1688
+ def benchmark(self, *args, out) -> float:
1689
+ algo = self.to_callable()
1690
+ return do_bench(lambda: algo(*args, out=out))
1691
+
1692
+ def call_name(self) -> str:
1693
+ raise NotImplementedError()
1694
+
1695
+ def to_callable(self):
1696
+ raise NotImplementedError()
1697
+
1698
+ def hash_key(self) -> str:
1699
+ raise NotImplementedError()
1700
+
1701
+ def output_node(self) -> "TensorBox":
1702
+ raise NotImplementedError()
1703
+
1704
+ def info_dict(self) -> Dict[str, Union[PrimitiveInfoType, List[PrimitiveInfoType]]]:
1705
+ """Information returned here is logged to the autotune log file when that is enabled."""
1706
+ return {}
1707
+
1708
+
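
The contract sketched in the docstring is: autotuning calls `benchmark()` on each candidate, and only the winner's `output_node()` is materialized. A hypothetical, self-contained mini-subclass showing the `to_callable()`/`benchmark()` shape, with a crude timer standing in for `do_bench`:

# Hypothetical minimal ChoiceCaller-style candidate: the callable is produced
# by to_callable() and timed by benchmark(). Not the Inductor implementation.
import time

class AddChoice:
    def __init__(self, name):
        self.name = name

    def to_callable(self):
        def add_kernel(a, b, out):
            for i, (x, y) in enumerate(zip(a, b)):
                out[i] = x + y
        return add_kernel

    def benchmark(self, *args, out) -> float:
        algo = self.to_callable()
        start = time.perf_counter()
        algo(*args, out=out)
        return time.perf_counter() - start

out = [0] * 4
choice = AddChoice("python_add")
elapsed = choice.benchmark([1, 2, 3, 4], [10, 20, 30, 40], out=out)
print(elapsed, out)  # <small float>, [11, 22, 33, 44]
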
1709
+ class KernelTemplate:
1710
+ """
1711
+ Base class for defining kernel templates.
1712
+
1713
+ Children classes: TritonTemplate, CUDATemplate
1714
+ """
1715
+
1716
+ @staticmethod
1717
+ def _template_from_string(source):
1718
+ env = jinja2_env()
1719
+ if env is not None:
1720
+ return env.from_string(source)
1721
+ return None
1722
+
1723
+ @staticmethod
1724
+ def _fake_get_dtype(fake_out):
1725
+ _get_dtype_real = V.graph.get_dtype
1726
+
1727
+ def get_dtype(name):
1728
+ if name == fake_out.get_name():
1729
+ return fake_out.get_dtype()
1730
+ return _get_dtype_real(name)
1731
+
1732
+ return get_dtype
1733
+
1734
+ def __init__(self, name: str):
1735
+ self.name = name
1736
+
1737
+ def maybe_append_choice(self, choices, **kwargs):
1738
+ """
1739
+ Maybe generates a new ChoiceCaller and appends it into existing choices.
1740
+
1741
+ choices: A list of ChoiceCallers.
1742
+ kwargs: Additional kwargs to be passed to self.generate() to generate a new ChoiceCaller.
1743
+ """
1744
+
1745
+ try:
1746
+ choices.append(self.generate(**kwargs))
1747
+ except NotImplementedError:
1748
+ pass
1749
+
1750
+ def generate(self, **kwargs) -> ChoiceCaller:
1751
+ """
1752
+ Generates a ChoiceCaller instance from the given arguments.
1753
+ """
1754
+
1755
+ raise NotImplementedError()
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cpp.py ADDED
The diff for this file is too large to render. See raw diff
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_prefix.h ADDED
@@ -0,0 +1,595 @@
1
+ #pragma once
2
+
3
+ #include <algorithm>
4
+ #include <atomic>
5
+ #include <cmath>
6
+ #include <cstdlib>
7
+ #include <limits>
8
+ #include <omp.h>
9
+
10
+ #include <ATen/NumericUtils.h>
11
+ #include <ATen/core/PhiloxRNGEngine.h>
12
+ #include <ATen/native/Math.h>
13
+
14
+ #include <c10/util/Float8_e4m3fn.h>
15
+ #include <c10/util/Float8_e5m2.h>
16
+ #include <c10/util/BFloat16.h>
17
+ #include <c10/util/BFloat16-math.h>
18
+ #include <c10/util/generic_math.h>
19
+ #include <c10/util/Half.h>
20
+ #include <c10/util/TypeCast.h>
21
+
22
+ #if defined(CPU_CAPABILITY_AVX512) || defined(CPU_CAPABILITY_AVX2) || defined(CPU_CAPABILITY_ZVECTOR)
23
+ #define INDUCTOR_USE_VECTOR_TYPES() 1
24
+ #else
25
+ #define INDUCTOR_USE_VECTOR_TYPES() 0
26
+ #endif
27
+
28
+ #if INDUCTOR_USE_VECTOR_TYPES()
29
+ #include <ATen/cpu/vec/functional.h>
30
+ #include <ATen/cpu/vec/vec.h>
31
+ #include <ATen/cpu/vec/vec_n.h>
32
+ #endif
33
+
34
+ typedef at::Half half;
35
+ typedef at::BFloat16 bfloat16;
36
+
37
+ typedef at::Float8_e4m3fn float8_e4m3fn;
38
+ typedef at::Float8_e5m2 float8_e5m2;
39
+
40
+ template <typename T>
41
+ struct Welford {
42
+ T mean = T(0);
43
+ T m2 = T(0);
44
+ T weight = T(0);
45
+ };
46
+
47
+
48
+ template <typename T>
49
+ struct IsVecType: std::false_type {};
50
+
51
+ #if INDUCTOR_USE_VECTOR_TYPES()
52
+ template <typename T>
53
+ struct IsVecType<at::vec::Vectorized<T>>: std::true_type {};
54
+ #endif
55
+
56
+ template <typename T>
57
+ Welford<T> welford_combine(const Welford<T> &a, const Welford<T> &b) {
58
+ if constexpr (!IsVecType<T>::value) {
59
+ if (a.weight == 0) {
60
+ return b;
61
+ }
62
+ if (b.weight == 0) {
63
+ return a;
64
+ }
65
+ }
66
+ auto delta = b.mean - a.mean;
67
+ auto new_weight = a.weight + b.weight;
68
+ auto wb_over_w = b.weight / new_weight;
69
+ if constexpr (IsVecType<T>::value) {
70
+ // Guard against division by zero
71
+ wb_over_w = T::blendv(wb_over_w, T(0), new_weight == T(0));
72
+ }
73
+ auto result = Welford<T>{
74
+ a.mean + delta * wb_over_w,
75
+ a.m2 + b.m2 + delta * delta * a.weight * wb_over_w,
76
+ new_weight
77
+ };
78
+ return result;
79
+ }
80
+
81
+ template <typename T>
82
+ Welford<T> welford_combine(const Welford<T> &acc, T data) {
83
+ // Add a single data point
84
+ auto delta = data - acc.mean;
85
+ auto new_weight = acc.weight + T(1);
86
+ auto new_mean = acc.mean + delta / new_weight;
87
+ auto new_delta = data - new_mean;
88
+ auto result = Welford<T>{
89
+ new_mean,
90
+ acc.m2 + delta * new_delta,
91
+ new_weight
92
+ };
93
+ return result;
94
+ }
95
+
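
The combine step above is the standard parallel Welford update (merging two partial mean/m2/weight accumulators). A small Python check, mirroring only the scalar path, that merging two partials matches a direct pass over the concatenated data:

# Hypothetical numeric check of the Welford combine formula used above
# (scalar path only): merging two partial (mean, m2, weight) accumulators
# should match statistics computed over the concatenated data.
def welford(data):
    mean = m2 = weight = 0.0
    for x in data:
        weight += 1.0
        delta = x - mean
        mean += delta / weight
        m2 += delta * (x - mean)
    return mean, m2, weight

def welford_combine(a, b):
    if a[2] == 0.0:
        return b
    if b[2] == 0.0:
        return a
    delta = b[0] - a[0]
    new_weight = a[2] + b[2]
    wb_over_w = b[2] / new_weight
    return (a[0] + delta * wb_over_w,
            a[1] + b[1] + delta * delta * a[2] * wb_over_w,
            new_weight)

left, right = [1.0, 2.0, 3.0], [10.0, 20.0]
merged = welford_combine(welford(left), welford(right))
direct = welford(left + right)
assert all(abs(x - y) < 1e-9 for x, y in zip(merged, direct))
print(merged)  # (7.2, 254.8, 5.0)
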
96
+ // Refer to https://github.com/pytorch/pytorch/blob/b5b36cf0c4e1958f1ff25120f5d4beeef3288187/
97
+ // aten/src/ATen/native/SharedReduceOps.h#L419-L445
98
+ template <typename scalar_t>
99
+ inline bool greater_or_nan(scalar_t a, scalar_t b, int64_t idx_a, int64_t idx_b) {
100
+ // If (a == b), then choose the one with lower idx, else max(a, b)
101
+ if (at::_isnan(a)) {
102
+ if (at::_isnan(b)) {
103
+ return idx_a < idx_b;
104
+ }
105
+ return true;
106
+ }
107
+ return (a == b) ? idx_a < idx_b : (a > b);
108
+ }
109
+
110
+ template <typename scalar_t>
111
+ inline bool less_or_nan(scalar_t a, scalar_t b, int64_t idx_a, int64_t idx_b) {
112
+ // If (a == b), then choose the one with lower idx, else min(a, b)
113
+ if (at::_isnan(a)) {
114
+ if (at::_isnan(b)) {
115
+ return idx_a < idx_b;
116
+ }
117
+ return true;
118
+ }
119
+ return (a == b) ? idx_a < idx_b : (a < b);
120
+ }
121
+
122
+ #if INDUCTOR_USE_VECTOR_TYPES()
123
+ template <typename scalar_t>
124
+ inline at::vec::Vectorized<scalar_t> vec_shuffle_down(at::vec::Vectorized<scalar_t> x, size_t n) {
125
+ using Vec = at::vec::Vectorized<scalar_t>;
126
+ alignas(alignof(Vec)) scalar_t array[Vec::size()];
127
+ x.store(array);
128
+ for (size_t i = 0; i + n < Vec::size(); i += 2 * n) {
129
+ array[i] = array[i + n];
130
+ }
131
+ return Vec::loadu(array);
132
+ }
133
+
134
+ #ifdef CPU_CAPABILITY_AVX2
135
+ inline at::vec::Vectorized<float> vec_shuffle_down(at::vec::Vectorized<float> x, size_t n) {
136
+ using vec_t = at::vec::Vectorized<float>;
137
+ #define SHUFFLE_MASK(z, y, x, w) ((z << 6) | (y << 4) | (x << 2) | w)
138
+ switch (n) {
139
+ case 1:
140
+ return vec_t(_mm256_permute_ps(x, SHUFFLE_MASK(1, 1, 3, 3)));
141
+ case 2:
142
+ return vec_t(_mm256_permute_ps(x, SHUFFLE_MASK(2, 2, 2, 2)));
143
+ case 4:
144
+ return vec_t(_mm256_permute2f128_ps(x, x, SHUFFLE_MASK(1, 1, 1, 1)));
145
+ }
146
+ TORCH_CHECK(false, "Unhandled vec_shuffle_down value ", n);
147
+ }
148
+ #endif
149
+
150
+ template <typename scalar_t>
151
+ Welford<scalar_t> welford_vec_reduce_all(Welford<at::vec::Vectorized<scalar_t>> acc) {
152
+ using Vec = at::vec::Vectorized<scalar_t>;
153
+ for (size_t n = 1; n < Vec::size(); n *= 2) {
154
+ auto shuffled = Welford<Vec>{
155
+ vec_shuffle_down(acc.mean, n),
156
+ vec_shuffle_down(acc.m2, n),
157
+ vec_shuffle_down(acc.weight, n)
158
+ };
159
+ acc = welford_combine(acc, shuffled);
160
+ }
161
+
162
+ Welford<scalar_t> result;
163
+ alignas(alignof(Vec)) scalar_t array[Vec::size()];
164
+ acc.mean.store(array);
165
+ result.mean = array[0];
166
+
167
+ acc.m2.store(array);
168
+ result.m2 = array[0];
169
+
170
+ acc.weight.store(array);
171
+ result.weight = array[0];
172
+
173
+ return result;
174
+ }
175
+ #endif
176
+
177
+
178
+ template <typename T, typename U> inline typename std::common_type<T, U>::type mod(T a, U b) { return a % b; }
179
+ template <> inline float mod(float a, float b) { return std::fmod(a, b); }
180
+ template <> inline double mod(double a, double b) { return std::fmod(a, b); }
181
+
182
+ template <typename scalar_t>
183
+ inline scalar_t max_propagate_nan(scalar_t a, scalar_t b) {
184
+ if (at::_isnan(a)) {
185
+ return a;
186
+ }
187
+ return a > b ? a : b;
188
+ }
189
+
190
+ template <typename scalar_t>
191
+ inline scalar_t min_propagate_nan(scalar_t a, scalar_t b) {
192
+ if (at::_isnan(a)) {
193
+ return a;
194
+ }
195
+ return a < b ? a : b;
196
+ }
197
+
198
+ constexpr float uint32_to_uniform_float(uint32_t value) {
199
+ // maximum value such that `MAX_INT * scale < 1.0` (with float rounding)
200
+ constexpr float scale = 4.6566127342e-10;
201
+ return static_cast<float>(value & 0x7FFFFFFF) * scale;
202
+ }
203
+
204
+ float normalized_rand_cpu(uint32_t seed, uint32_t offset) {
205
+ return uint32_to_uniform_float(at::Philox4_32(seed, 0, offset)());
206
+ }
207
+
208
+ float randn_cpu(uint32_t seed, uint32_t offset) {
209
+ at::Philox4_32 engine(seed, 0, offset);
210
+ return engine.randn(10);
211
+ }
212
+
213
+ int64_t randint64_cpu(uint32_t seed, uint32_t offset, int64_t low, int64_t high) {
214
+ auto gen = at::Philox4_32(seed, 0, offset);
215
+ uint64_t r0 = gen();
216
+ uint64_t r1 = gen();
217
+ uint64_t result = r0 | (r1 << 32);
218
+ return static_cast<int64_t>(result % (high - low)) + low;
219
+ }
220
+
221
+ template <typename T> struct AsIntegerType { typedef T type; };
222
+ template <> struct AsIntegerType<float> { typedef uint32_t type; };
223
+ template <> struct AsIntegerType<double> { typedef uint64_t type; };
224
+ template <> struct AsIntegerType<bfloat16> { typedef uint16_t type; };
225
+
226
+ template <typename T>
227
+ typename std::enable_if<!std::is_reduced_floating_point<T>::value, T>::type
228
+ inline fetch_value(volatile T *addr) {
229
+ return *addr;
230
+ }
231
+
232
+ template <typename T>
233
+ typename std::enable_if<std::is_reduced_floating_point<T>::value, T>::type
234
+ inline fetch_value(volatile T *addr) {
235
+ return T(addr->x, T::from_bits());
236
+ }
237
+
238
+ template <typename T>
239
+ typename std::enable_if<!std::is_integral<T>::value>::type
240
+ atomic_add(volatile T *addr, T offset) {
241
+ typedef typename AsIntegerType<T>::type alt_type;
242
+
243
+ static_assert(sizeof(std::atomic<alt_type>) == sizeof(T),
244
+ "std::atomic issue");
245
+
246
+ alt_type expected;
247
+
248
+ alt_type desired;
249
+
250
+ std::atomic<alt_type> *atomic_addr = (std::atomic<alt_type> *)addr;
251
+ do {
252
+ T val = fetch_value(addr);
253
+ reinterpret_cast<T *>(&expected)[0] = val;
254
+ reinterpret_cast<T *>(&desired)[0] = val + offset;
255
+ } while (!atomic_addr->compare_exchange_weak(expected, desired,
256
+ std::memory_order_relaxed));
257
+ }
258
+
259
+ // Since C++20, float is supported by fetch_add, but the performance may not be
260
+ // better than compare_exchange_weak, which can be checked by microbenchmark
261
+ // inductor_cpu_atomic.py
262
+ template <typename T>
263
+ typename std::enable_if<std::is_integral<T>::value>::type
264
+ atomic_add(volatile T *addr, T offset) {
265
+ static_assert(sizeof(std::atomic<T>) == sizeof(T),
266
+ "std::atomic issue");
267
+ std::atomic<T> *atomic_addr = (std::atomic<T> *)addr;
268
+ atomic_addr->fetch_add(offset, std::memory_order_relaxed);
269
+ }
270
+
271
+ // This function is used to convert bool or uint8 to float mask for
272
+ // vectorization. The caller needs to make sure the src represents TRUE/FALSE
273
+ // correctly.
274
+ template <typename T>
275
+ inline float flag_to_float_scalar(T src) {
276
+ float ret;
277
+ *(uint32_t*)(&ret) = src ? 0xFFFFFFFF : 0;
278
+ return ret;
279
+ }
280
+
281
+ #if defined(CPU_CAPABILITY_AVX512) || defined(CPU_CAPABILITY_AVX2) || defined(CPU_CAPABILITY_ZVECTOR)
282
+
283
+ inline at::vec::Vectorized<float> masked_load(const float* src, at::vec::Vectorized<float> mask) {
284
+ # if defined(CPU_CAPABILITY_AVX512)
285
+ at::vec::Vectorized<float> zero_vec(0);
286
+ auto all_ones = _mm512_set1_epi32(0xFFFFFFFF);
287
+ auto mmask = _mm512_cmp_epi32_mask(_mm512_castps_si512(mask), all_ones, _MM_CMPINT_EQ);
288
+ return _mm512_mask_loadu_ps(zero_vec, mmask, src);
289
+ # elif defined(CPU_CAPABILITY_AVX2)
290
+ auto all_ones = _mm256_set1_epi32(0xFFFFFFFF);
291
+ auto mmask = _mm256_cmpeq_epi32(_mm256_castps_si256(mask), all_ones);
292
+ return _mm256_maskload_ps(src, mmask);
293
+ # elif defined(CPU_CAPABILITY_ZVECTOR)
294
+ auto result = at::vec::Vectorized<float>::loadu(src);
295
+ return (result & mask);
296
+ # else
297
+ # error Unsupported vectorization CPU capability
298
+ # endif
299
+ }
300
+
301
+ template <typename T>
302
+ typename std::enable_if<std::is_same<T, bfloat16>::value || std::is_same<T, half>::value, at::vec::Vectorized<T>>::type
303
+ inline masked_load(const T* src, at::vec::Vectorized<float> mask) {
304
+ # if defined(CPU_CAPABILITY_AVX512)
305
+ auto all_ones = _mm512_set1_epi32(0xFFFFFFFF);
306
+ auto mmask = _mm512_cmp_epi32_mask(_mm512_castps_si512(mask), all_ones, _MM_CMPINT_EQ);
307
+ auto zero = _mm256_set1_epi16(0);
308
+ auto temp = _mm256_mask_loadu_epi16(zero, mmask, src);
309
+ return _mm512_inserti32x8(_mm512_castsi256_si512(temp), zero, 1);
310
+ # elif defined(CPU_CAPABILITY_AVX2)
311
+ auto all_ones = _mm256_set1_epi32(0xFFFFFFFF);
312
+ auto mmask_vec = _mm256_cmpeq_epi32(_mm256_castps_si256(mask), all_ones);
313
+ __at_align__ uint32_t mmask[8];
314
+ _mm256_storeu_si256(reinterpret_cast<__m256i*>(mmask), mmask_vec);
315
+ __at_align__ uint16_t result[16];
316
+ for (auto i = 0; i < 8; i++) {
317
+ result[i] = mmask[i] == 0xFFFFFFFF ? src[i].x: uint16_t(0);
318
+ }
319
+ return at::vec::Vectorized<T>::loadu(result);
320
+ # elif defined(CPU_CAPABILITY_ZVECTOR)
321
+ auto result = at::vec::Vectorized<T>::loadu(src, 8);
322
+ uint32_t maskdata[8] = { 0 };
323
+ uint16_t maskdata_dest[16] = { 0 };
324
+ mask.store(maskdata);
325
+ for (auto i = 0; i < 8; i++) {
326
+ maskdata_dest[i] = (maskdata[i] == 0xFFFFFFFF) ? 0xFFFF: 0;
327
+ }
328
+ auto maskvector = at::vec::Vectorized<T>::loadu(maskdata_dest);
329
+ return (result & maskvector);
330
+ # else
331
+ # error Unsupported vectorization CPU capability
332
+ # endif
333
+ }
334
+
335
+ template <typename T>
336
+ typename std::enable_if<std::is_same<T, uint8_t>::value || std::is_same<T, int8_t>::value, at::vec::Vectorized<T>>::type
337
+ inline masked_load(const T* src, at::vec::Vectorized<float> mask) {
338
+ # if defined(CPU_CAPABILITY_AVX512)
339
+ auto all_ones = _mm512_set1_epi32(0xFFFFFFFF);
340
+ auto mmask = _mm512_cmp_epi32_mask(_mm512_castps_si512(mask), all_ones, _MM_CMPINT_EQ);
341
+ auto zero = _mm_set1_epi8(0);
342
+ auto temp = _mm_mask_loadu_epi8(zero, mmask, src);
343
+ return _mm512_inserti64x2(_mm512_set1_epi32(0), temp, 0);
344
+ # elif defined(CPU_CAPABILITY_AVX2)
345
+ auto all_ones = _mm256_set1_epi32(0xFFFFFFFF);
346
+ auto mmask_vec = _mm256_cmpeq_epi32(_mm256_castps_si256(mask), all_ones);
347
+ __at_align__ uint32_t mmask[8];
348
+ _mm256_storeu_si256(reinterpret_cast<__m256i*>(mmask), mmask_vec);
349
+ __at_align__ T result[32];
350
+ for (auto i = 0; i < 8; i++) {
351
+ result[i] = mmask[i] == 0xFFFFFFFF ? src[i]: T(0);
352
+ }
353
+ return at::vec::Vectorized<T>::loadu(result);
354
+ # elif defined(CPU_CAPABILITY_ZVECTOR)
355
+ auto result = at::vec::Vectorized<T>::loadu(src, 8);
356
+ uint32_t maskdata[8];
357
+ T maskdata_dest[32] = { 0 };
358
+ mask.store(maskdata);
359
+ for (auto i = 0; i < 8; i++) {
360
+ maskdata_dest[i] = (maskdata[i] == 0xFFFFFFFF) ? 0xFF: 0;
361
+ }
362
+ auto maskvector = at::vec::Vectorized<T>::loadu(maskdata_dest);
363
+ return (result & maskvector);
364
+ # else
365
+ # error Unsupported vectorization CPU capability
366
+ # endif
367
+ }
368
+
369
+ template <typename T>
370
+ inline at::vec::Vectorized<float> flag_to_float_vec(const T* src) {
371
+ __at_align__ float dst_tmp[at::vec::Vectorized<float>::size()];
372
+ #pragma unroll
373
+ for (int64_t i = 0; i < at::vec::Vectorized<float>::size(); i++) {
374
+ dst_tmp[i] = flag_to_float_scalar(src[i]);
375
+ }
376
+ return at::vec::Vectorized<float>::loadu(dst_tmp);
377
+ }
378
+
379
+ template <typename scalar_t>
380
+ inline at::vec::Vectorized<float> cvt_lowp_fp_to_fp32(
381
+ at::vec::Vectorized<scalar_t> src) {
382
+ at::vec::Vectorized<float> res_vec1(0);
383
+ at::vec::Vectorized<float> res_vec2(0);
384
+ std::tie(res_vec1, res_vec2) = at::vec::convert_to_float<scalar_t>(src);
385
+ return res_vec1;
386
+ }
387
+
388
+ template <typename scalar_t>
389
+ inline at::vec::Vectorized<scalar_t> cvt_fp32_to_lowp_fp(
390
+ at::vec::Vectorized<float> src) {
391
+ return at::vec::convert_from_float<scalar_t>(src, src);
392
+ }
393
+
394
+ inline at::vec::Vectorized<float> mask_convert_to_float(at::vec::Vectorized<float> src) {
395
+ auto zeros = at::vec::Vectorized<float>(0);
396
+ auto ones = at::vec::Vectorized<float>(1);
397
+ return at::vec::Vectorized<float>::blendv(zeros, ones, src);
398
+ }
399
+
400
+ template <typename scalar_t>
401
+ inline
402
+ typename std::enable_if<std::is_same<scalar_t, bfloat16>::value || std::is_same<scalar_t, half>::value, at::vec::Vectorized<scalar_t>>::type
403
+ mask_convert_to_lowp(at::vec::Vectorized<float> src) {
404
+ auto fp_vec = mask_convert_to_float(src);
405
+ return cvt_fp32_to_lowp_fp<scalar_t>(fp_vec);
406
+ }
407
+
408
+ template <typename SRC>
409
+ inline at::vec::Vectorized<float> vec_convert_to_mask(at::vec::Vectorized<SRC> src) {
410
+ assert(
411
+ at::vec::Vectorized<float>::size() == at::vec::Vectorized<SRC>::size());
412
+ at::vec::Vectorized<float> res_vec(0);
413
+ __at_align__ float dst_tmp[at::vec::Vectorized<float>::size()];
414
+ __at_align__ SRC src_tmp[at::vec::Vectorized<SRC>::size()];
415
+ src.store(src_tmp);
416
+
417
+ #pragma unroll
418
+ for (int i = 0; i < at::vec::Vectorized<float>::size(); i++) {
419
+ *(uint32_t*)(dst_tmp + i) = src_tmp[i] ? 0xFFFFFFFF : 0;
420
+ }
421
+
422
+ return res_vec.loadu(dst_tmp);
423
+ }
424
+
425
+ template <typename SRC>
426
+ inline at::vec::Vectorized<float> to_float_mask(at::vec::Vectorized<SRC> src) {
427
+ return vec_convert_to_mask(src);
428
+ }
429
+
430
+ #if defined(CPU_CAPABILITY_AVX512) || defined(CPU_CAPABILITY_AVX2)
431
+ template <>
432
+ inline at::vec::Vectorized<float> to_float_mask(at::vec::Vectorized<int> src) {
433
+ #if defined(CPU_CAPABILITY_AVX2)
434
+ return at::vec::Vectorized<float>(_mm256_castsi256_ps(src));
435
+ #else
436
+ return at::vec::Vectorized<float>(_mm512_castsi512_ps(src));
437
+ #endif
438
+ }
439
+ #endif
440
+
441
+ template <>
442
+ inline at::vec::Vectorized<float> to_float_mask(at::vec::Vectorized<float> src) {
443
+ return src;
444
+ }
445
+
446
+ inline at::vec::Vectorized<float> to_float_mask(int src) {
447
+ union {
448
+ float fmask;
449
+ uint32_t imask;
450
+ } mask;
451
+ mask.imask = src ? 0xFFFFFFFF : 0;
452
+ return at::vec::Vectorized<float>(mask.fmask);
453
+ }
454
+
455
+ inline bool all_zero(at::vec::Vectorized<float> src) {
456
+ # if defined(CPU_CAPABILITY_AVX512)
457
+ auto src_int = _mm512_castps_si512(src);
458
+ __mmask16 mask = _mm512_test_epi32_mask(src_int, src_int);
459
+ return mask == 0;
460
+ # elif defined(CPU_CAPABILITY_AVX2)
461
+ return _mm256_testz_ps(src, src);
462
+ # else
463
+ __at_align__ int mask[at::vec::Vectorized<float>::size()];
464
+ src.store(mask);
465
+ for (int i = 0; i < at::vec::Vectorized<float>::size(); i++) {
466
+ if (mask[i] != 0) {
467
+ return false;
468
+ }
469
+ }
470
+ return true;
471
+ # endif
472
+ }
473
+
474
+ inline bool vector_lane_mask_check(at::vec::Vectorized<float> src, int lane) {
475
+ # if defined(CPU_CAPABILITY_AVX512)
476
+ return _mm512_movepi32_mask(_mm512_castps_si512(src)) & (1 << lane);
477
+ # elif defined(CPU_CAPABILITY_AVX2)
478
+ return _mm256_movemask_ps(src) & (1 << lane);
479
+ # else
480
+ __at_align__ int mask[at::vec::Vectorized<float>::size()];
481
+ src.store(mask);
482
+ return mask[lane] != 0;
483
+ # endif
484
+ }
485
+
486
+ inline at::vec::Vectorized<float> cvt_int64_to_fp32(at::vec::VectorizedN<int64_t,2> src) {
487
+ # if defined(CPU_CAPABILITY_AVX512)
488
+ auto low = _mm512_cvtepi64_ps(src[0]);
489
+ auto high = _mm512_cvtepi64_ps(src[1]);
490
+ return _mm512_insertf32x8(_mm512_castps256_ps512(low), high, 1);
491
+ # elif defined(CPU_CAPABILITY_AVX2)
492
+ auto low_double = at::vec::convert_to_fp_of_same_size<double>(src[0]);
493
+ auto low = _mm256_cvtpd_ps(low_double);
494
+ auto high_double = at::vec::convert_to_fp_of_same_size<double>(src[1]);
495
+ auto high = _mm256_cvtpd_ps(high_double);
496
+ return _mm256_insertf128_ps(_mm256_castps128_ps256(low), high, 1);
497
+ # else
498
+ constexpr int float_vec_size = at::vec::Vectorized<float>::size();
499
+ constexpr int int64_vec_size = at::vec::Vectorized<int64_t>::size();
500
+ __at_align__ float result[float_vec_size];
501
+ __at_align__ int64_t src_buf[int64_vec_size];
502
+ for (int i = 0; i < 2; i++) {
503
+ src[i].store(src_buf + i * int64_vec_size);
504
+ for (int j = 0; j < int64_vec_size; j++) {
505
+ result[i * int64_vec_size + j] = static_cast<float>(src_buf[i * int64_vec_size + j]);
506
+ }
507
+ }
508
+ return at::vec::Vectorized<float>::loadu(result);
509
+ # endif
510
+ }
511
+
512
+ inline at::vec::VectorizedN<int64_t,2> cvt_fp32_to_int64(at::vec::Vectorized<float> src) {
513
+ at::vec::VectorizedN<int64_t,2> result;
514
+ # if defined(CPU_CAPABILITY_AVX512)
515
+ result[0] = _mm512_cvt_roundps_epi64(_mm512_castps512_ps256(src), _MM_FROUND_TO_ZERO |_MM_FROUND_NO_EXC);
516
+ result[1] = _mm512_cvt_roundps_epi64(_mm512_extractf32x8_ps(src, 1), _MM_FROUND_TO_ZERO |_MM_FROUND_NO_EXC);
517
+ # elif defined(CPU_CAPABILITY_AVX2)
518
+ auto int32_vec = at::vec::convert_to_int_of_same_size(src);
519
+ result[0] = _mm256_cvtepi32_epi64(_mm256_castsi256_si128(int32_vec));
520
+ result[1] = _mm256_cvtepi32_epi64(_mm256_extracti128_si256(int32_vec, 1));
521
+ # else
522
+ constexpr int float_vec_size = at::vec::Vectorized<float>::size();
523
+ constexpr int int64_vec_size = at::vec::Vectorized<int64_t>::size();
524
+ __at_align__ float src_buf[float_vec_size];
525
+ __at_align__ int64_t result_buf[int64_vec_size];
526
+ src.store(src_buf);
527
+ for (int i = 0; i < 2; i++) {
528
+ for (int j = 0; j < int64_vec_size; j++) {
529
+ result_buf[j] = static_cast<int64_t>(src_buf[i * int64_vec_size + j]);
530
+ }
531
+ result[i] = at::vec::Vectorized<int64_t>::loadu(result_buf);
532
+ }
533
+ # endif
534
+ return result;
535
+ }
536
+
537
+ inline at::vec::Vectorized<int32_t> cvt_int64_to_int32(at::vec::VectorizedN<int64_t,2> src) {
538
+ # if defined(CPU_CAPABILITY_AVX512)
539
+ auto low = _mm512_cvtepi64_epi32(src[0]);
540
+ auto high = _mm512_cvtepi64_epi32(src[1]);
541
+ return _mm512_inserti32x8(_mm512_castsi256_si512(low), high, 1);
542
+ # elif defined(CPU_CAPABILITY_AVX2)
543
+ auto low = _mm256_shuffle_epi32(src[0], _MM_SHUFFLE(2, 0, 2, 0));
544
+ auto high = _mm256_shuffle_epi32(src[1], _MM_SHUFFLE(2, 0, 2, 0));
545
+ auto low_perm = _mm256_permute4x64_epi64(low, _MM_SHUFFLE(3, 1, 2, 0));
546
+ auto high_perm = _mm256_permute4x64_epi64(high, _MM_SHUFFLE(3, 1, 2, 0));
547
+ return _mm256_blend_epi32(low_perm, high_perm, 0xF0);
548
+ # else
549
+ constexpr int int32_vec_size = at::vec::Vectorized<int32_t>::size();
550
+ constexpr int int64_vec_size = at::vec::Vectorized<int64_t>::size();
551
+ __at_align__ int32_t result[int32_vec_size];
552
+ __at_align__ int64_t src_buf[int64_vec_size];
553
+ for (int i = 0; i < 2; i++) {
554
+ src[i].store(src_buf + i * int64_vec_size);
555
+ for (int j = 0; j < int64_vec_size; j++) {
556
+ result[i * int64_vec_size + j] = static_cast<int32_t>(src_buf[i * int64_vec_size + j]);
557
+ }
558
+ }
559
+ return at::vec::Vectorized<int32_t>::loadu(result);
560
+ # endif
561
+ }
562
+
563
+ inline at::vec::VectorizedN<int64_t,2> cvt_int32_to_int64(at::vec::Vectorized<int32_t> src) {
564
+ at::vec::VectorizedN<int64_t,2> result;
565
+ # if defined(CPU_CAPABILITY_AVX512)
566
+ result[0] = _mm512_cvtepi32_epi64(_mm512_castsi512_si256(src));
567
+ result[1] = _mm512_cvtepi32_epi64(_mm512_extracti32x8_epi32(src, 1));
568
+ # elif defined(CPU_CAPABILITY_AVX2)
569
+ result[0] = _mm256_cvtepi32_epi64(_mm256_castsi256_si128(src));
570
+ result[1] = _mm256_cvtepi32_epi64(_mm256_extracti128_si256(src, 1));
571
+ #else
572
+ constexpr int int32_vec_size = at::vec::Vectorized<int32_t>::size();
573
+ constexpr int int64_vec_size = at::vec::Vectorized<int64_t>::size();
574
+ __at_align__ int32_t src_buf[int32_vec_size];
575
+ __at_align__ int64_t result_buf[int64_vec_size];
576
+ src.store(src_buf);
577
+ for (int i = 0; i < 2; i++) {
578
+ for (int j = 0; j < int64_vec_size; j++) {
579
+ result_buf[j] = static_cast<int64_t>(src_buf[i * int64_vec_size + j]);
580
+ }
581
+ result[i] = at::vec::Vectorized<int64_t>::loadu(result_buf);
582
+ }
583
+ # endif
584
+ return result;
585
+ }
586
+
587
+ inline at::vec::VectorizedN<int64_t,2> mask_convert_to_int64(at::vec::Vectorized<float> src) {
588
+ return cvt_fp32_to_int64(mask_convert_to_float(src));
589
+ }
590
+
591
+ inline at::vec::Vectorized<float> to_float_mask(at::vec::VectorizedN<int64_t,2> src) {
592
+ return to_float_mask(cvt_int64_to_int32(src));
593
+ }
594
+
595
+ #endif
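The fp32-to-int64 helper above truncates toward zero (the AVX paths request _MM_FROUND_TO_ZERO, the scalar fallback relies on static_cast), and the int64-to-int32 helper keeps the low 32 bits of in-range values. A minimal PyTorch sanity check of those conventions (a sketch only; it exercises the semantics, not the intrinsics):

import torch

# float -> int64 conversion truncates toward zero, like _MM_FROUND_TO_ZERO
x = torch.tensor([1.9, -1.9, 2.5], dtype=torch.float32)
assert torch.equal(x.to(torch.int64), torch.tensor([1, -1, 2]))

# int64 -> int32 narrowing keeps the low 32 bits of in-range values
y = torch.tensor([7, -7], dtype=torch.int64)
assert torch.equal(y.to(torch.int32), torch.tensor([7, -7], dtype=torch.int32))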
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_wrapper_cpu.py ADDED
@@ -0,0 +1,1851 @@
1
+ import functools
2
+ import os
3
+ import sys
4
+ from itertools import count
5
+ from typing import List, Optional, Tuple
6
+
7
+ import sympy
8
+ from sympy import Expr
9
+
10
+ import torch
11
+ import torch._ops
12
+ from .. import config, ir
13
+
14
+ from ..codecache import CudaKernelParamCache
15
+ from ..utils import cache_on_self, sympy_product
16
+ from ..virtualized import V
17
+ from .common import IndentedBuffer
18
+ from .wrapper import EnterSubgraphLine, ExitSubgraphLine, pexpr, WrapperCodeGen
19
+
20
+
21
+ class CppWrapperCpu(WrapperCodeGen):
22
+ """
23
+ Generates cpp wrapper for running on CPU and calls cpp kernels
24
+ """
25
+
26
+ def __init__(self):
27
+ if not hasattr(self, "device"):
28
+ self.device = "cpu"
29
+ super().__init__()
30
+ self.declare = "auto "
31
+ self.declare_maybe_reference = "decltype(auto) "
32
+ self.ending = ";"
33
+ self.open_bracket = "{"
34
+ self.closed_bracket = "}"
35
+ self.comment = "//"
36
+ self.namespace = "at::"
37
+ self.none_str = "nullptr" if config.abi_compatible else "at::Tensor()"
38
+ self.extern_call_ops = set()
39
+ self.size = "sizes()"
40
+ self.stride = "strides()"
41
+ self.cuda = False
42
+ self.supports_intermediate_hooks = False
43
+ self.outputs_need_copy = set()
44
+ self.kernel_callsite_id = count()
45
+ self.int_array_id = count() # for int array local variable declarations
46
+ self.declared_int_array_vars = set()
47
+ self.tmp_tensor_id = count() # for tmp tensor local variable declarations
48
+ self.arg_var_id = count()
49
+ self.used_cached_devices = set()
50
+ self.used_cached_dtypes = set()
51
+ self.cached_output_id = count()
52
+ self.scalar_to_tensor_id = count()
53
+
54
+ from .cpp import cexpr, CppPrinter
55
+
56
+ self.expr_printer = cexpr
57
+
58
+ # CppPrinter sometimes calls at::native functions, which causes problems in
59
+ # the ABI-compatible mode. Currently we are hitting this problem when we codegen
60
+ # Grid computation expressions, but we may need to fix other size computations
61
+ # as well.
62
+ class GridExprCppPrinter(CppPrinter):
63
+ def _print_FloorDiv(self, expr):
64
+ x, div = expr.args
65
+ x = self.paren(self.doprint(x))
66
+ div = self.paren(self.doprint(div))
67
+ assert expr.is_integer, "Expect integers in GridExprCppPrinter"
68
+ return f"({x}/{div})"
69
+
70
+ self.grid_expr_printer = GridExprCppPrinter().doprint
71
+
72
+ def generate_kernel_call(
73
+ self,
74
+ name,
75
+ call_args,
76
+ grid=None,
77
+ device_index=None,
78
+ cuda=True,
79
+ triton=True,
80
+ arg_types=None,
81
+ grid_fn: str = "grid",
82
+ triton_meta=None,
83
+ ):
84
+ """
85
+ Generates kernel call code.
86
+
87
+ cuda: Defines whether the backend is GPU. Otherwise the backend is CPU.
88
+
89
+ triton: Defines whether the GPU backend uses Triton for codegen.
90
+ Otherwise it uses the CUDA language for codegen.
91
+ Only valid when cuda == True.
92
+ """
93
+ if cuda:
94
+ return super().generate_kernel_call(
95
+ name,
96
+ call_args,
97
+ grid,
98
+ device_index,
99
+ cuda,
100
+ triton,
101
+ arg_types,
102
+ grid_fn,
103
+ )
104
+ else:
105
+ if config.abi_compatible:
106
+ assert arg_types is not None and len(call_args) == len(
107
+ arg_types
108
+ ), "Mismatch call_args and arg_types in generate_kernel_call"
109
+ new_args = []
110
+ for idx, arg in enumerate(call_args):
111
+ if "*" in arg_types[idx]:
112
+ var_name = f"var_{next(self.arg_var_id)}"
113
+ self.writeline(
114
+ f"auto* {var_name} = get_data_ptr_wrapper({arg});"
115
+ )
116
+ new_args.append(f"({arg_types[idx]})({var_name})")
117
+ else:
118
+ # arg is a scalar
119
+ new_args.append(arg)
120
+ self.writeline(self.wrap_kernel_call(name, new_args))
121
+ else:
122
+ self.writeline(self.wrap_kernel_call(name, call_args))
123
+
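For illustration, a standalone sketch of what the ABI-compatible CPU branch above does with pointer-typed arguments; the buffer names, argument types, and the kernel name in the emitted lines are hypothetical, not taken from a real graph:

call_args = ["buf0", "arg0_1", "1024"]
arg_types = ["float*", "const float*", "int64_t"]
new_args = []
for idx, arg in enumerate(call_args):
    if "*" in arg_types[idx]:
        # pointer args: unwrap via get_data_ptr_wrapper, then cast back to the C type
        new_args.append(f"({arg_types[idx]})(var_{idx})")
    else:
        # scalars pass through unchanged
        new_args.append(arg)
assert new_args == ["(float*)(var_0)", "(const float*)(var_1)", "1024"]
# the wrapper then emits roughly:
#   auto* var_0 = get_data_ptr_wrapper(buf0);
#   auto* var_1 = get_data_ptr_wrapper(arg0_1);
#   cpp_fused_kernel_0((float*)(var_0), (const float*)(var_1), 1024);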
124
+ def write_constant(self, name, hashed):
125
+ # include a hash so our code cache gives different constants different files
126
+ self.header.writeline(f"// {name} {hashed}")
127
+
128
+ def write_header(self):
129
+ if V.graph.is_const_graph:
130
+ # We do not write header for constant graph, it will be written by main module.
131
+ return
132
+
133
+ if V.graph.aot_mode:
134
+ for header_cpp_file in ("interface.cpp", "implementation.cpp"):
135
+ with open(
136
+ os.path.join(
137
+ os.path.dirname(__file__), "aoti_runtime", header_cpp_file
138
+ )
139
+ ) as f:
140
+ self.header.splice(f.read())
141
+ else:
142
+ self.header.splice(
143
+ """
144
+ import torch
145
+ from torch._inductor.codecache import CppWrapperCodeCache
146
+
147
+ cpp_wrapper_src = (
148
+ '''
149
+ """
150
+ )
151
+
152
+ if config.abi_compatible:
153
+ if config.c_shim_version == "1":
154
+ self.header.splice("#include <torch/csrc/inductor/aoti_torch/c/shim.h>")
155
+ else:
156
+ self.header.splice(
157
+ f"#include <torch/csrc/inductor/aoti_torch/generated/c_shim_{self.device}.h>"
158
+ )
159
+ self.header.splice(
160
+ """
161
+ #include <torch/csrc/inductor/aoti_runtime/arrayref_tensor.h>
162
+ #include <torch/csrc/inductor/aoti_runtime/thread_local.h>
163
+ #include <torch/csrc/inductor/aoti_runtime/scalar_to_tensor.h>
164
+ """
165
+ )
166
+ if V.graph.aot_mode:
167
+ self.header.splice(
168
+ """
169
+ #include <torch/csrc/inductor/aoti_runtime/model.h>
170
+ """
171
+ )
172
+ else:
173
+ self.header.splice(
174
+ """
175
+ #include <ATen/ATen.h>
176
+ #include <ATen/core/dispatch/Dispatcher.h>
177
+ #include <ATen/native/BinaryOps.h>
178
+ #include <torch/csrc/inductor/aoti_runtime/utils.h>
179
+ #include <torch/csrc/inductor/aoti_torch/tensor_converter.h>
180
+ #include <torch/csrc/inductor/inductor_ops.h>
181
+ #include <torch/types.h>
182
+ #include <ATen/ops/bernoulli_native.h>
183
+
184
+ #define reinterpret_tensor torch::inductor::_reinterpret_tensor
185
+ #define alloc_from_pool torch::inductor::_alloc_from_pool
186
+ """
187
+ )
188
+
189
+ self.header.splice("#include <c10/util/generic_math.h>")
190
+
191
+ if not V.graph.aot_mode:
192
+ self.header.splice(
193
+ """
194
+ #include <pybind11/pybind11.h>
195
+
196
+ using namespace torch::aot_inductor;
197
+ """
198
+ )
199
+
200
+ from .memory_planning import ALIGN_BYTES
201
+
202
+ # Round up to the nearest multiple of ALIGN_BYTES
203
+ # ALIGN_BYTES must be a power of 2
204
+ self.header.splice(
205
+ f"""
206
+ [[maybe_unused]] static int64_t align(int64_t nbytes) {{
207
+ return (nbytes + {ALIGN_BYTES} - 1) & -{ALIGN_BYTES};
208
+ }}
209
+ """
210
+ )
211
+
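A minimal sketch of the rounding identity in the align() helper emitted above, assuming ALIGN_BYTES is 64; the bitwise AND with -align_bytes only works because ALIGN_BYTES is a power of two, so its negation is a mask that clears the low bits:

def align(nbytes: int, align_bytes: int = 64) -> int:
    # adding align_bytes - 1 and masking rounds nbytes up to the next multiple
    return (nbytes + align_bytes - 1) & -align_bytes

assert align(1) == 64 and align(64) == 64 and align(65) == 128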
212
+ def mark_output_type(self):
213
+ # mark output type to unwrap tensor back to python scalar
214
+ from ..ir import ShapeAsConstantBuffer
215
+
216
+ output_is_tensor = dict()
217
+ for idx, x in enumerate(V.graph.graph_outputs):
218
+ if isinstance(x, ShapeAsConstantBuffer):
219
+ output_is_tensor[idx] = False
220
+ else:
221
+ output_is_tensor[idx] = True
222
+
223
+ self.output_is_tensor = output_is_tensor
224
+
225
+ def write_prefix(self):
226
+ if V.graph.is_const_graph:
227
+ # We do not write prefix for constant graph, it will be written by main module.
228
+ return
229
+
230
+ if V.graph.aot_mode:
231
+ self.prefix.writeline("namespace torch {")
232
+ self.prefix.writeline("namespace aot_inductor {")
233
+
234
+ def write_input_output_info(
235
+ self,
236
+ info_kind: str,
237
+ idx: int,
238
+ name: str,
239
+ ):
240
+ self.prefix.writeline(f"""{info_kind}[{idx}].name = "{name}";""")
241
+
242
+ @staticmethod
243
+ def get_input_cpp_type(input):
244
+ assert config.use_minimal_arrayref_interface
245
+ from .cpp import DTYPE_TO_CPP
246
+
247
+ if isinstance(input, sympy.Expr):
248
+ from ..graph import may_get_constant_buffer_dtype
249
+
250
+ dtype = may_get_constant_buffer_dtype(input)
251
+ assert dtype is not None, f"Failed to get the dtype of sympy.Expr: {input}"
252
+ return DTYPE_TO_CPP[dtype]
253
+ return f"ArrayRefTensor<{DTYPE_TO_CPP[input.get_dtype()]}>"
254
+
255
+ def write_wrapper_decl(self):
256
+ inputs_len = len(V.graph.graph_inputs.keys())
257
+ if V.graph.aot_mode:
258
+ if config.use_minimal_arrayref_interface and not V.graph.is_const_graph:
259
+ from .cpp import DTYPE_TO_CPP
260
+
261
+ input_cpp_types = ", ".join(
262
+ f"{CppWrapperCpu.get_input_cpp_type(x)}"
263
+ for x in V.graph.graph_inputs.values()
264
+ )
265
+
266
+ output_arrayref_types = ", ".join(
267
+ f"ArrayRefTensor<{DTYPE_TO_CPP[x.get_dtype()]}>"
268
+ for x in V.graph.graph_outputs
269
+ )
270
+
271
+ self.prefix.splice(
272
+ f"""
273
+ using AOTInductorModelInputs = std::tuple<{input_cpp_types}>;
274
+ using AOTInductorModelOutputs = std::tuple<{output_arrayref_types}>;
275
+ """
276
+ )
277
+
278
+ if V.graph.const_module:
279
+ self.header.splice(V.graph.const_module.wrapper_code.header)
280
+ self.prefix.splice(V.graph.const_code)
281
+
282
+ if V.graph.is_const_graph:
283
+ self.prefix.splice(
284
+ """
285
+ void AOTInductorModel::_const_run_impl(
286
+ std::vector<AtenTensorHandle>& output_handles,
287
+ DeviceStreamType stream,
288
+ AOTIProxyExecutorHandle proxy_executor
289
+ ) {
290
+ """
291
+ )
292
+ else:
293
+ if not config.aot_inductor.use_runtime_constant_folding:
294
+ # If we do not split the constant graph, we'll just create
295
+ # an empty implementation when wrapping the main module.
296
+ self.prefix.splice(
297
+ """
298
+ void AOTInductorModel::_const_run_impl(
299
+ std::vector<AtenTensorHandle>& output_handles,
300
+ DeviceStreamType stream,
301
+ AOTIProxyExecutorHandle proxy_executor
302
+ ) {}
303
+
304
+ """
305
+ )
306
+
307
+ run_impl_proto = """
308
+ void AOTInductorModel::run_impl(
309
+ AtenTensorHandle*
310
+ input_handles, // array of input AtenTensorHandle; handles
311
+ // are stolen; the array itself is borrowed
312
+ AtenTensorHandle*
313
+ output_handles, // array for writing output AtenTensorHandle; handles
314
+ // will be stolen by the caller; the array itself is
315
+ // borrowed
316
+ DeviceStreamType stream,
317
+ AOTIProxyExecutorHandle proxy_executor
318
+ ) {
319
+ """
320
+ if config.use_minimal_arrayref_interface:
321
+ self.prefix.splice(
322
+ """
323
+ template <>
324
+ AOTInductorModelOutputs AOTInductorModel::run_impl_minimal_arrayref_interface<
325
+ AOTInductorModelInputs, AOTInductorModelOutputs>(
326
+ const AOTInductorModelInputs& inputs,
327
+ DeviceStreamType stream,
328
+ AOTIProxyExecutorHandle proxy_executor
329
+ ) {
330
+ """
331
+ )
332
+ self.suffix.splice(run_impl_proto)
333
+ self.suffix.splice(
334
+ """
335
+ AOTInductorModelInputs inputs;
336
+ convert_handles_to_inputs(input_handles, inputs);
337
+ auto outputs = run_impl_minimal_arrayref_interface<AOTInductorModelInputs, AOTInductorModelOutputs>(
338
+ inputs, stream, proxy_executor);
339
+ // NOTE: outputs is full of ArrayRef to thread_local storage. If in the future we need this
340
+ // interface to perform well for a DSO using the minimal arrayref interface, all we need
341
+ // to do is provide ThreadLocalCachedTensor for each one!
342
+ convert_outputs_to_handles(outputs, output_handles);
343
+ }
344
+ """
345
+ )
346
+
347
+ self.suffix.splice(
348
+ """
349
+ extern "C" AOTIRuntimeError AOTInductorModelRunMinimalArrayrefInterface(
350
+ AOTInductorModelHandle model_handle,
351
+ const AOTInductorModelInputs& inputs,
352
+ AOTInductorModelOutputs& outputs) {
353
+ auto model = reinterpret_cast<torch::aot_inductor::AOTInductorModel*>(model_handle);
354
+ CONVERT_EXCEPTION_TO_ERROR_CODE({
355
+ outputs = model->run_impl_minimal_arrayref_interface<AOTInductorModelInputs, AOTInductorModelOutputs>(
356
+ inputs,
357
+ (torch::aot_inductor::DeviceStreamType)nullptr,
358
+ nullptr);
359
+ })
360
+ }
361
+ """
362
+ )
363
+ else:
364
+ self.prefix.splice(run_impl_proto)
365
+ else:
366
+ self.prefix.splice(
367
+ """
368
+ void inductor_entry_impl(
369
+ AtenTensorHandle*
370
+ input_handles, // array of input AtenTensorHandle; handles
371
+ // are stolen; the array itself is borrowed
372
+ AtenTensorHandle*
373
+ output_handles // array for writing output AtenTensorHandle; handles
374
+ // will be stolen by the caller; the array itself is
375
+ // borrowed
376
+ ) {
377
+ """
378
+ )
379
+ with self.prefix.indent():
380
+ # assign inputs and outputs in both cases so the later codegen can be simplified
381
+ if not config.use_minimal_arrayref_interface:
382
+ if not V.graph.is_const_graph:
383
+ if V.graph.aot_mode:
384
+ num_args = len(V.graph.graph_inputs)
385
+ else:
386
+ # Weights are promoted in the JIT mode
387
+ num_args = len(V.graph.graph_inputs) + len(V.graph.constants)
388
+ self.prefix.splice(
389
+ """
390
+ pybind11::gil_scoped_release release;
391
+ """
392
+ )
393
+
394
+ if config.abi_compatible:
395
+ self.prefix.splice(
396
+ f"""
397
+ auto inputs = steal_from_raw_handles_to_raii_handles(input_handles, {num_args});
398
+ """
399
+ )
400
+ else:
401
+ # This looks dumb, but can avoid creating two versions of code in the AOTInductor runtime.
402
+ self.prefix.splice(
403
+ f"""
404
+ auto inputs = alloc_tensors_by_stealing_from_handles(input_handles, {num_args});
405
+ """
406
+ )
407
+
408
+ if inputs_len != 0:
409
+ for idx, input_key in enumerate(V.graph.graph_inputs.keys()):
410
+ if config.use_minimal_arrayref_interface:
411
+ self.prefix.writeline(
412
+ f"auto {input_key} = std::get<{idx}>(inputs);"
413
+ )
414
+ continue
415
+ # unwrap input tensor back to scalar
416
+ if isinstance(V.graph.graph_inputs[input_key], sympy.Expr):
417
+ from ..graph import may_get_constant_buffer_dtype
418
+ from .cpp import DTYPE_TO_CPP
419
+
420
+ dtype = may_get_constant_buffer_dtype(
421
+ V.graph.graph_inputs[input_key]
422
+ )
423
+ assert (
424
+ dtype is not None
425
+ ), "Failed to get the dtype of the sympy.Expr"
426
+ cpp_dtype = DTYPE_TO_CPP[dtype]
427
+ if config.abi_compatible:
428
+ self.prefix.writeline(f"{cpp_dtype} {input_key};")
429
+ dtype_str = str(dtype).split(".")[-1]
430
+ self.prefix.writeline(
431
+ f"aoti_torch_item_{dtype_str}(inputs[{idx}], &{input_key});"
432
+ )
433
+ else:
434
+ self.prefix.writeline(
435
+ f"{cpp_dtype} {input_key} = inputs[{idx}].item<{cpp_dtype}>();"
436
+ )
437
+ else:
438
+ self.prefix.writeline(
439
+ f"auto {input_key} = std::move(inputs[{idx}]);"
440
+ )
441
+
442
+ assert all(
443
+ isinstance(v, torch.Tensor) for v in list(V.graph.constants.values())
444
+ ), "Expect all constants to be Tensor"
445
+ for idx, constants_key in enumerate(V.graph.constants.keys()):
446
+ if V.graph.aot_mode:
447
+ # Weights are stored in constants_ and owned by RAIIAtenTensorHandle there.
448
+ # Don't call std::move here because it will cause constants_ to lose the ownership.
449
+ if config.abi_compatible:
450
+ self.prefix.writeline(
451
+ f"""auto {constants_key} = constants_->at({idx});"""
452
+ )
453
+ else:
454
+ self.prefix.writeline(
455
+ f"auto {constants_key} = *tensor_handle_to_tensor_pointer("
456
+ + f"""constants_->at({idx}));"""
457
+ )
458
+ else:
459
+ # Append constants as inputs to the graph
460
+ constants_idx = inputs_len + idx
461
+ self.prefix.writeline(
462
+ f"auto {constants_key} = inputs[{constants_idx}];"
463
+ )
464
+
465
+ self.codegen_inputs(self.prefix, V.graph.graph_inputs)
466
+
467
+ if V.graph.aot_mode:
468
+ if not V.graph.is_const_graph:
469
+ if config.use_minimal_arrayref_interface:
470
+ # TODO: input shape checking for regular tensor interface as well?
471
+ self.codegen_input_numel_asserts()
472
+ else:
473
+ self.prefix.writeline("inputs.clear();")
474
+ self.prefix.writeline(
475
+ "auto& kernels = static_cast<AOTInductorModelKernels&>(*this->kernels_.get());"
476
+ )
477
+
478
+ def codegen_input_numel_asserts(self):
479
+ for name, buf in V.graph.graph_inputs.items():
480
+ if isinstance(buf, sympy.Expr):
481
+ continue
482
+
483
+ # comparing strides for 0 size tensor is tricky. Ignore them for now.
484
+ if sympy_product(buf.get_size()) == 0:
485
+ continue
486
+ numel = buf.get_numel()
487
+ self.prefix.writeline(f"assert_numel({name}, {numel});")
488
+
489
+ def codegen_input_size_var_decl(self, code: IndentedBuffer, name):
490
+ if config.abi_compatible:
491
+ code.writeline(f"int64_t* {name}_size;")
492
+ code.writeline(
493
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_sizes({name}, &{name}_size));"
494
+ )
495
+ else:
496
+ super().codegen_input_size_var_decl(code, name)
497
+
498
+ def codegen_input_stride_var_decl(self, code: IndentedBuffer, name):
499
+ if config.abi_compatible:
500
+ code.writeline(f"int64_t* {name}_stride;")
501
+ code.writeline(
502
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_strides({name}, &{name}_stride));"
503
+ )
504
+ else:
505
+ super().codegen_input_stride_var_decl(code, name)
506
+
507
+ def codegen_model_kernels(self):
508
+ self.prefix.writeline("namespace {")
509
+ self.prefix.writeline(
510
+ "class AOTInductorModelKernels : public AOTInductorModelKernelsBase {"
511
+ )
512
+ self.prefix.writeline(" public:")
513
+ declare_kernel = set(self.src_to_kernel.values())
514
+ declare_kernel.update(
515
+ entry[0] for entry in self.user_defined_kernel_cache.values()
516
+ )
517
+ if V.graph.const_module:
518
+ declare_kernel.update(
519
+ V.graph.const_module.wrapper_code.src_to_kernel.values()
520
+ )
521
+ for kernel in declare_kernel:
522
+ self.prefix.writeline(f" CUfunction {kernel}{{nullptr}};")
523
+ self.prefix.writeline("};")
524
+ self.prefix.writeline("} // namespace")
525
+
526
+ def codegen_model_constructor(self):
527
+ """
528
+ // Generated code example
529
+ AOTInductorModel::AOTInductorModel()
530
+ : AOTInductorModelBase(4, 1) {
531
+ inputs_info_[0].name = "input0";
532
+ inputs_info_[0].dtype = "torch.float16";
533
+ ...
534
+ constants_info_[0].name = "L__self___weight";
535
+ constants_info_[0].dtype = at::kFloat;
536
+ constants_info_[0].offset = 0;
537
+ constants_info_[0].data_size = 8192;
538
+ constants_info_[0].shape = {64, 32};
539
+ constants_info_[0].stride = {32, 1};
540
+ ...
541
+ outputs_info_[0].name = "output0";
542
+ outputs_info_[0].dtype = "torch.float16";
543
+ }
544
+ """
545
+
546
+ num_inputs = len(V.graph.graph_inputs)
547
+ num_outputs = len(V.graph.graph_outputs)
548
+ num_constants = len(V.graph.constants)
549
+ self.prefix.splice(
550
+ f"""
551
+ AOTInductorModel::AOTInductorModel(std::shared_ptr<ConstantMap> constants_map,
552
+ std::shared_ptr<std::vector<ConstantHandle>> constants_array,
553
+ const std::string& device_str,
554
+ std::optional<std::string> cubin_dir)
555
+ : AOTInductorModelBase({num_inputs}, {num_outputs}, {num_constants}, device_str, cubin_dir) {{
556
+ """
557
+ )
558
+
559
+ with self.prefix.indent():
560
+ for idx, (name, inp) in enumerate(V.graph.graph_inputs.items()):
561
+ assert not isinstance(
562
+ inp, sympy.Expr
563
+ ), f"input {name=} cannot be symbolic"
564
+ self.write_input_output_info("inputs_info_", idx, name)
565
+
566
+ for idx, (name, tensor) in enumerate(V.graph.constants.items()):
567
+ assert isinstance(tensor, torch.Tensor)
568
+ self.prefix.writeline(f"""constants_info_[{idx}].name = "{name}";""")
569
+ self.prefix.writeline(
570
+ f"constants_info_[{idx}].dtype = static_cast<int32_t>({self.codegen_dtype(tensor.dtype)});"
571
+ )
572
+ self.prefix.writeline(
573
+ f"constants_info_[{idx}].offset = {tensor.storage_offset()};"
574
+ )
575
+ self.prefix.writeline(
576
+ f"constants_info_[{idx}].data_size = {tensor.untyped_storage().nbytes()};"
577
+ )
578
+ from_folded = "true" if name in V.graph.folded_constants else "false"
579
+ self.prefix.writeline(
580
+ f"constants_info_[{idx}].from_folded = {from_folded};"
581
+ )
582
+
583
+ size_str = ", ".join([str(s) for s in tensor.size()])
584
+ self.prefix.writeline(f"constants_info_[{idx}].shape = {{{size_str}}};")
585
+
586
+ stride_str = ", ".join([str(s) for s in tensor.stride()])
587
+ self.prefix.writeline(
588
+ f"constants_info_[{idx}].stride = {{{stride_str}}};"
589
+ )
590
+ if name in V.graph.dynamo_flat_name_to_original_fqn:
591
+ original_fqn = V.graph.dynamo_flat_name_to_original_fqn.get(
592
+ name, name
593
+ )
594
+ elif name in V.graph.allocated_constant_name:
595
+ original_fqn = V.graph.allocated_constant_name[name]
596
+ else:
597
+ raise AssertionError("original_fqn must be set for constant")
598
+ self.prefix.writeline(
599
+ f"""constants_info_[{idx}].original_fqn = "{original_fqn}";"""
600
+ )
601
+ self.prefix.writeline("update_constants_map(std::move(constants_map));")
602
+ self.prefix.writeline("update_constants_array(std::move(constants_array));")
603
+
604
+ def escape_string(x):
605
+ return (
606
+ x.replace("\\", "\\\\")
607
+ .replace('"', '\\"')
608
+ .replace("\n", "\\n")
609
+ .replace("\t", "\\t")
610
+ )
611
+
612
+ self.prefix.writeline(
613
+ f'in_spec_ = "{escape_string(config.aot_inductor.serialized_in_spec)}";'
614
+ )
615
+ self.prefix.writeline(
616
+ f'out_spec_ = "{escape_string(config.aot_inductor.serialized_out_spec)}";'
617
+ )
618
+
619
+ for idx, output in enumerate(V.graph.graph_outputs):
620
+ assert not isinstance(
621
+ output, sympy.Expr
622
+ ), f"output {name=} cannot be symbolic"
623
+ name = f"output{idx}"
624
+ self.write_input_output_info("outputs_info_", idx, name)
625
+
626
+ self.prefix.writeline(
627
+ "this->kernels_ = std::make_unique<AOTInductorModelKernels>();"
628
+ )
629
+
630
+ self.prefix.writeline("}")
631
+
632
+ def codegen_const_run_driver(self):
633
+ """
634
+ // Generated code example
635
+ std::unordered_map<std::string, AtenTensorHandle> AOTInductorModel::const_run_impl(
636
+ DeviceStreamType stream,
637
+ AOTIProxyExecutorHandle proxy_executor,
638
+ bool initialization
639
+ ) {
640
+ std::unordered_map<std::string, AtenTensorHandle> folded_constants_map;
641
+ std::vector<AtenTensorHandle> output_handles;
642
+ // build up output_handles over here.
643
+ _const_run_impl(output_handles, stream, proxy_executor);
644
+ // build up folded_constants_map
645
+ return folded_constants_map;
646
+ }
647
+ """
648
+
649
+ self.prefix.splice(
650
+ """
651
+ std::unordered_map<std::string, AtenTensorHandle> AOTInductorModel::const_run_impl(
652
+ DeviceStreamType stream,
653
+ AOTIProxyExecutorHandle proxy_executor,
654
+ bool initialization
655
+ ) {
656
+ """
657
+ )
658
+ if not config.aot_inductor.use_runtime_constant_folding:
659
+ self.prefix.splice(
660
+ """
661
+ if (!initialization) {
662
+ std::cerr << "[WARNING] Calling constant_folding in model, but compiled with config: "
663
+ << "aot_inductor.use_runtime_constant_folding=False\\n";
664
+ }
665
+ return {};
666
+ }
667
+ """
668
+ )
669
+ return
670
+
671
+ with self.prefix.indent():
672
+ # This is a mapping to the index of constant folding graph's output
673
+ const_index_mapping: List[Optional[Tuple[int, str]]] = [None] * len(
674
+ V.graph.const_output_index
675
+ )
676
+ for idx, (name, _) in enumerate(V.graph.constants.items()):
677
+ if name in V.graph.const_output_index:
678
+ const_index_mapping[V.graph.const_output_index[name]] = (idx, name) # type: ignore[call-overload]
679
+ assert (
680
+ None not in const_index_mapping
681
+ ), "Not all constants get mapped for the constant folding graph."
682
+
683
+ self.prefix.writeline(
684
+ f"""
685
+ std::unordered_map<std::string, AtenTensorHandle> folded_constants_map;
686
+ folded_constants_map.reserve({len(const_index_mapping)});
687
+ std::vector<AtenTensorHandle> output_handles({len(const_index_mapping)});
688
+ """
689
+ )
690
+
691
+ self.prefix.splice(
692
+ """
693
+ // The below assignment of output_handles to constants is not used directly.
694
+ // It's only used to record the correspondence between handles and constants.
695
+ """
696
+ )
697
+
698
+ for output_idx, (const_idx, _) in enumerate(const_index_mapping): # type: ignore[misc]
699
+ self.prefix.writeline(
700
+ f"output_handles[{output_idx}] = constants_->at({const_idx});"
701
+ )
702
+
703
+ self.prefix.writeline(
704
+ "_const_run_impl(output_handles, stream, proxy_executor);"
705
+ )
706
+
707
+ for output_idx, (_, const_name) in enumerate(const_index_mapping): # type: ignore[misc]
708
+ self.prefix.writeline(
709
+ f'folded_constants_map["{const_name}"] = output_handles[{output_idx}];'
710
+ )
711
+ self.prefix.writeline("return folded_constants_map;")
712
+
713
+ self.prefix.writeline("}")
714
+
715
+ def generate(self, is_inference):
716
+ if V.graph.aot_mode and not V.graph.is_const_graph:
717
+ self.codegen_model_kernels()
718
+ self.codegen_model_constructor()
719
+ self.codegen_const_run_driver()
720
+ self.write_wrapper_decl()
721
+ return super().generate(is_inference)
722
+
723
+ def finalize_prefix(self):
724
+ cached_dtypes_buffer = IndentedBuffer()
725
+ if config.abi_compatible:
726
+ for dtype in self.used_cached_dtypes:
727
+ cached_dtypes_buffer.writeline(f"CACHE_TORCH_DTYPE({dtype});")
728
+ for device in self.used_cached_devices:
729
+ cached_dtypes_buffer.writeline(f"CACHE_TORCH_DEVICE({device});")
730
+ cached_dtypes_buffer.splice(self.prefix)
731
+ self.prefix = cached_dtypes_buffer
732
+
733
+ def define_kernel(
734
+ self, name: str, kernel: str, metadata: Optional[str] = None, cuda=False
735
+ ):
736
+ self.header.splice(f"\n{kernel}\n")
737
+
738
+ def codegen_scalar_to_tensor(self, output: str):
739
+ name = f"scalar_to_tensor_{next(self.scalar_to_tensor_id)}"
740
+ self.wrapper_call.writeline(
741
+ f"RAIIAtenTensorHandle {name} = scalar_to_tensor_handle({output});"
742
+ )
743
+ return name
744
+
745
+ @cache_on_self
746
+ def get_output_refs(self):
747
+ return [
748
+ f"torch::tensor({x.codegen_reference(self.wrapper_call)})"
749
+ if isinstance(x, ir.ShapeAsConstantBuffer) and not config.abi_compatible
750
+ else x.codegen_reference(self.wrapper_call)
751
+ for x in V.graph.graph_outputs
752
+ ]
753
+
754
+ def generate_return(self, output_refs):
755
+ cst_names = V.graph.constants.keys()
756
+ arr_iface = (
757
+ not V.graph.is_const_graph and config.use_minimal_arrayref_interface
758
+ ) # For brevity.
759
+
760
+ def use_thread_local_cached_output_tensor(idx, output):
761
+ cached_output_name = f"cached_output_{next(self.cached_output_id)}"
762
+ cache_type = "Array" if arr_iface else "Tensor"
763
+ self.wrapper_call.writeline(
764
+ f"thread_local ThreadLocalCachedOutput{cache_type}<std::decay_t<decltype({output})>> "
765
+ f"{cached_output_name}({output});"
766
+ )
767
+ if arr_iface:
768
+ self.wrapper_call.writeline(
769
+ f"{cached_output_name}.copy_data_from({output});"
770
+ )
771
+ output_entry = f"std::get<{idx}>(output_arrayref_tensors)"
772
+ element_type = f"std::decay_t<decltype({output_entry}.data()[0])>"
773
+ self.wrapper_call.writeline(
774
+ f"{output_entry} = {cached_output_name}.arrayref_tensor<{element_type}>();"
775
+ )
776
+ else:
777
+ self.wrapper_call.writeline(
778
+ f"{cached_output_name}.copy_data_from({output});"
779
+ )
780
+ self.wrapper_call.writeline(
781
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&output_handles[{idx}]));"
782
+ )
783
+ self.wrapper_call.writeline(
784
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_assign_tensors({cached_output_name}.tensor(), "
785
+ f"output_handles[{idx}]));"
786
+ )
787
+
788
+ if arr_iface:
789
+ self.wrapper_call.writeline(
790
+ "AOTInductorModelOutputs output_arrayref_tensors;"
791
+ )
792
+ for idx, output in enumerate(output_refs):
793
+ if config.abi_compatible:
794
+ output_buffer = V.graph.graph_outputs[idx]
795
+ if isinstance(output_buffer, ir.ShapeAsConstantBuffer):
796
+ # Need to wrap scalar into tensor as the main function returns a vector of tensors
797
+ output_tensor = self.codegen_scalar_to_tensor(output)
798
+ self.wrapper_call.writeline(
799
+ f"output_handles[{idx}] = {output_tensor}.release();"
800
+ )
801
+ continue
802
+
803
+ output_is_tensor_handle_expr = (
804
+ f"std::is_same_v<std::decay_t<decltype({output})>,"
805
+ "RAIIAtenTensorHandle> || "
806
+ f"std::is_same_v<std::decay_t<decltype({output})>,"
807
+ "AtenTensorHandle> || "
808
+ f"std::is_same_v<std::decay_t<decltype({output})>,"
809
+ "ConstantHandle>"
810
+ )
811
+ self.wrapper_call.writeline(
812
+ f"if constexpr ({output_is_tensor_handle_expr}) {{"
813
+ )
814
+ with self.wrapper_call.indent():
815
+ if arr_iface:
816
+ cached_output_name = (
817
+ f"cached_output_{next(self.cached_output_id)}"
818
+ )
819
+ output_value_type = f"std::decay_t<decltype(std::get<{idx}>(output_arrayref_tensors).data()[0])>"
820
+ self.wrapper_call.writeline(
821
+ f"thread_local RAIIAtenTensorHandle {cached_output_name};"
822
+ )
823
+ if output in cst_names:
824
+ # NOTE(return_constant): In some rare cases where we return
825
+ # a constant, we have to return a copy of this constant,
826
+ # because (1) constants are not owned by the Model instance
827
+ # (2) constants remain the same across inference runs,
828
+ # assuming they are not updated at runtime. Basically, we
829
+ # cannot release or transfer the ownership of any original
830
+ # constant to the user.
831
+ self.wrapper_call.writeline(
832
+ f"AtenTensorHandle {cached_output_name}_tmp;"
833
+ )
834
+ self.wrapper_call.writeline(
835
+ f"aoti_torch_clone({output}, &{cached_output_name}_tmp);"
836
+ )
837
+ self.wrapper_call.writeline(
838
+ f"{cached_output_name} = {cached_output_name}_tmp;"
839
+ )
840
+ else:
841
+ self.wrapper_call.writeline(
842
+ f"{cached_output_name} = {output}.release();"
843
+ )
844
+ self.wrapper_call.writeline(
845
+ f"convert_handle_to_arrayref_tensor({cached_output_name}, "
846
+ f"std::get<{idx}>(output_arrayref_tensors));"
847
+ )
848
+ else:
849
+ if output in cst_names:
850
+ # See NOTE(return_constant) above.
851
+ self.wrapper_call.writeline(
852
+ f"aoti_torch_clone({output}, &output_handles[{idx}]);"
853
+ )
854
+ else:
855
+ self.wrapper_call.writeline(
856
+ f"output_handles[{idx}] = {output}.release();"
857
+ )
858
+ self.wrapper_call.writeline("} else {")
859
+ with self.wrapper_call.indent():
860
+ use_thread_local_cached_output_tensor(idx, output)
861
+ self.wrapper_call.writeline("}")
862
+
863
+ else:
864
+ assert (
865
+ not arr_iface
866
+ ), "minimal ArrayRef interface is only supported in ABI-compatible mode"
867
+ if output in cst_names:
868
+ output_expr = f"{output}.clone()"
869
+ # See NOTE(return_constant) above.
870
+ else:
871
+ output_expr = output
872
+ self.wrapper_call.writeline(
873
+ f"output_handles[{idx}] = reinterpret_cast<AtenTensorHandle>("
874
+ + f"new at::Tensor({output_expr}));"
875
+ )
876
+ if arr_iface:
877
+ self.wrapper_call.writeline("return output_arrayref_tensors;")
878
+
879
+ def generate_before_suffix(self, result):
880
+ if not V.graph.is_const_graph:
881
+ if V.graph.aot_mode:
882
+ result.writeline("} // AOTInductorModel::run_impl")
883
+ else:
884
+ result.writeline("} // inductor_entry_impl")
885
+
886
+ def generate_end(self, result):
887
+ if V.graph.aot_mode:
888
+ if V.graph.is_const_graph:
889
+ result.writeline("} // AOTInductorModel::_const_run_impl")
890
+ else:
891
+ result.writeline("} // namespace aot_inductor")
892
+ result.writeline("} // namespace torch")
893
+ return
894
+
895
+ result.writeline("'''\n)")
896
+ result.splice(
897
+ f"""
898
+ inductor_entry = CppWrapperCodeCache.load_pybinding(
899
+ ["std::vector<at::Tensor>"], cpp_wrapper_src, {self.cuda}, {len(V.graph.graph_outputs)})
900
+ """
901
+ )
902
+
903
+ # unwrap output tensor back to python scalar
904
+ if all(x for x in self.output_is_tensor.values()):
905
+ # If no ShapeAsConstantBuffer in the output, directly return the output as tensors
906
+ return_str = "return f(args_tensor)"
907
+ else:
908
+ outputs = [
909
+ f"outputs[{i}]" if self.output_is_tensor[i] else f"outputs[{i}].item()"
910
+ for i in range(len(V.graph.graph_outputs))
911
+ ]
912
+ outputs_str = f"[{', '.join(outputs)}]"
913
+ return_str = f"""
914
+ outputs = f(args_tensor)
915
+ return {outputs_str}
916
+ """
917
+
918
+ args_str = "args_tensor = [arg if isinstance(arg, torch.Tensor) else torch.tensor(arg) for arg in args]"
919
+ if V.graph.constants:
920
+ # Append constants to the input args for cpp wrapper.
921
+ # Python wrapper directly gets the value inside the wrapper call
922
+ # as a global variable passed when calling exec(code, mod.__dict__, mod.__dict__).
923
+ # For cpp wrapper, we need to pass this python value to the inductor_entry_impl function explicitly.
924
+ assert all(
925
+ isinstance(v, torch.Tensor) for v in list(V.graph.constants.values())
926
+ ), "Expect all constants to be Tensor"
927
+ constants_str = f"[{', '.join(V.graph.constants.keys())}]"
928
+ args_str += f"""
929
+ constants_tensor = {constants_str}
930
+ args_tensor.extend(constants_tensor)
931
+ """
932
+
933
+ # Wrap the func to support setting result._boxed_call = True
934
+ result.splice(
935
+ f"""
936
+ def _wrap_func(f):
937
+ def g(args):
938
+ {args_str}
939
+ {return_str}
940
+ return g
941
+ call = _wrap_func(inductor_entry)
942
+ """
943
+ )
944
+
945
+ def generate_c_shim_extern_kernel_call(self, kernel, args):
946
+ # In the abi_compatible mode, we call fallback aten ops through a C shim layer
947
+ self.allow_stack_allocation = False
948
+ kernel_tokens = kernel.split("::")
949
+ kernel_suffix = kernel_tokens[-1]
950
+ if kernel_suffix == "call":
951
+ kernel_suffix = kernel_tokens[-2]
952
+ if config.c_shim_version == "1":
953
+ shim_fn = f"aoti_torch_{kernel_suffix}"
954
+ else:
955
+ shim_fn = f"aoti_torch_{self.device}_{kernel_suffix}"
956
+
957
+ # HACK: val_to_arg_str jams multiple arguments together using a comma. If that
958
+ # ever breaks, it needs to be reworked to be able to return multiple arguments,
959
+ # and the split-on-comma code here needs to be removed.
960
+ wrapped_args = []
961
+ for x in args:
962
+ pieces = x.split(", ")
963
+ for piece in pieces:
964
+ # We only really *need* convert_arrayref_tensor_to_tensor for
965
+ # ArrayRefTensors. The code flowing into here uses `0` for nullptr,
966
+ # which convert_arrayref_tensor_to_tensor would blindly coerce to int,
967
+ # so just avoid wrapping integers.
968
+ if not piece.isdigit():
969
+ piece = f"convert_arrayref_tensor_to_tensor({piece})"
970
+ wrapped_args.append(piece)
971
+ self.writeline(
972
+ f"AOTI_TORCH_ERROR_CODE_CHECK({shim_fn}({', '.join(wrapped_args)}));"
973
+ )
974
+
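A hypothetical walk-through of the shim-name derivation above (the kernel string is an assumed example; c_shim_version == "2" and self.device == "cpu"):

kernel = "at::_ops::addmm_out::call"
kernel_tokens = kernel.split("::")
kernel_suffix = kernel_tokens[-1]
if kernel_suffix == "call":
    kernel_suffix = kernel_tokens[-2]
shim_fn = f"aoti_torch_cpu_{kernel_suffix}"
assert shim_fn == "aoti_torch_cpu_addmm_out"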
975
+ def generate_c_shim_extern_kernel_alloc(self, extern_kernel, args):
976
+ # registered output buffer name
977
+ name = extern_kernel.name
978
+ output_handle_name = f"{name}_handle"
979
+ self.writeline(f"AtenTensorHandle {output_handle_name};")
980
+ output_arg = f"&{output_handle_name}"
981
+ self.generate_c_shim_extern_kernel_call(
982
+ extern_kernel.get_kernel_name(), args + [output_arg]
983
+ )
984
+ self.writeline(f"RAIIAtenTensorHandle {name}({output_handle_name});")
985
+
986
+ def generate_extern_kernel_alloc(self, extern_kernel, args):
987
+ if config.abi_compatible:
988
+ self.generate_c_shim_extern_kernel_alloc(extern_kernel, args)
989
+ else:
990
+ super().generate_extern_kernel_alloc(extern_kernel, args)
991
+
992
+ def generate_c_shim_fallback_kernel(self, fallback_kernel, args):
993
+ output_args = []
994
+ output_raii_handles = []
995
+ output_name_base = fallback_kernel.get_name()
996
+ for idx, output in enumerate(fallback_kernel.outputs):
997
+ if isinstance(output, ir.MultiOutput):
998
+ name = f"{output.get_name()}"
999
+ output_handle_name = f"{name}_handle"
1000
+ if output.indices:
1001
+ assert (
1002
+ output.indices[0][1] == idx
1003
+ ), f"expected {output.indices[0][1]=} == {idx=} for {output_name_base=}"
1004
+ self.writeline(f"AtenTensorHandle {output_handle_name};")
1005
+ output_args.append(f"&{output_handle_name}")
1006
+ output_raii_handles.append(
1007
+ f"RAIIAtenTensorHandle {name}({output_handle_name});"
1008
+ )
1009
+ elif isinstance(output, int):
1010
+ output_name = f"{output_name_base}_{idx}"
1011
+ self.writeline(f"int64_t {output_name} = {output};")
1012
+ output_args.append(f"&{output_name}")
1013
+ elif output is None:
1014
+ output_args.append("nullptr")
1015
+ else:
1016
+ raise NotImplementedError(f"unsupported type of {output=}")
1017
+ args = args + output_args
1018
+ assert (
1019
+ fallback_kernel.abi_compatible_kernel is not None
1020
+ ), f"abi_compatible_kernel is None for {fallback_kernel.python_kernel_name=}"
1021
+ self.generate_c_shim_extern_kernel_call(
1022
+ fallback_kernel.abi_compatible_kernel, args
1023
+ )
1024
+ for raii_handle in output_raii_handles:
1025
+ self.writeline(raii_handle)
1026
+
1027
+ def generate_fallback_kernel(self, fallback_kernel, args):
1028
+ if config.abi_compatible:
1029
+ self.generate_c_shim_fallback_kernel(fallback_kernel, args)
1030
+ else:
1031
+ super().generate_fallback_kernel(fallback_kernel, args)
1032
+
1033
+ def generate_extern_kernel_out(self, output_view, codegen_reference, args, kernel):
1034
+ if output_view:
1035
+ output_as_strided = f"{output_view.codegen_reference()}"
1036
+ output_name = f"{output_view.get_name()}_as_strided"
1037
+ self.writeline(f"auto {output_name} = {output_as_strided};")
1038
+
1039
+ args.insert(0, output_name)
1040
+ else:
1041
+ args.insert(0, f"{codegen_reference}")
1042
+
1043
+ if config.abi_compatible:
1044
+ self.generate_c_shim_extern_kernel_call(kernel, args)
1045
+ else:
1046
+ self.writeline(self.wrap_kernel_call(kernel, args))
1047
+
1048
+ def generate_user_defined_triton_kernel(
1049
+ self, kernel_name, grid, configs, args, triton_meta
1050
+ ):
1051
+ assert len(grid) != 0
1052
+ if len(grid) == 1:
1053
+ grid_decision = grid[0]
1054
+ else:
1055
+ meta = CudaKernelParamCache.get(kernel_name)
1056
+ assert meta is not None
1057
+ grid_decision = None
1058
+ for i, c in enumerate(configs):
1059
+ if all(arg == meta["meta"][key] for key, arg in c.kwargs.items()):
1060
+ grid_decision = grid[i]
1061
+ break
1062
+ assert grid_decision is not None
1063
+
1064
+ self.generate_kernel_call(
1065
+ kernel_name,
1066
+ args,
1067
+ grid=grid_decision,
1068
+ device_index=V.graph.scheduler.current_device.index,
1069
+ cuda=True,
1070
+ triton=True,
1071
+ triton_meta=triton_meta,
1072
+ )
1073
+
1074
+ def generate_scatter_fallback(
1075
+ self, output, inputs, kernel, python_kernel_name, src_is_tensor, reduce, kwargs
1076
+ ):
1077
+ # TODO: support other overload for cpp wrapper and remove the below assertions
1078
+ if config.abi_compatible:
1079
+ # call the ABI shim function instead of the ATen one
1080
+ kernel = kernel.replace("at::", "aoti_torch_")
1081
+ line = f"{kernel}({output}, {','.join(map(str, inputs))}"
1082
+ if python_kernel_name == "aten.scatter_":
1083
+ if src_is_tensor:
1084
+ if reduce:
1085
+ line += f", {V.graph.wrapper_code.val_to_arg_str(reduce)}"
1086
+ else:
1087
+ assert (
1088
+ reduce is None
1089
+ ), "Expect reduce to be None for aten.scatter_ with scalar src"
1090
+ else:
1091
+ line += f", {','.join(kwargs)}"
1092
+ line += f"){self.ending}"
1093
+ self.writeline(line)
1094
+
1095
+ def generate_index_put_fallback(self, kernel, x, indices, values, accumulate):
1096
+ if V.graph.aot_mode and V.graph.cpp_wrapper and config.abi_compatible:
1097
+ # See the comment in codegen_reinterpret_view about why having something like
1098
+ # RAIIAtenTensorHandle(tmp_tensor_handle_2) in a tmp array can cause the corresponding
1099
+ # tensor to be prematurely deallocated, hence the std::vector().data() trick here.
1100
+ indices_str = (
1101
+ f"std::vector<AtenTensorHandle>{{{', '.join(indices)}}}.data()"
1102
+ )
1103
+ args = [x, indices_str, str(len(indices)), values, accumulate]
1104
+ else:
1105
+ indices_str = (
1106
+ f"{self.open_bracket}{', '.join(indices)}{self.closed_bracket}"
1107
+ )
1108
+ args = [x, indices_str, values, accumulate]
1109
+
1110
+ args.insert(0, x) # set x as the output tensor, this fallback mutates x.
1111
+ self.writeline(self.wrap_kernel_call(kernel, args))
1112
+
1113
+ def add_benchmark_harness(self, output):
1114
+ if V.graph.aot_mode:
1115
+ return
1116
+ super().add_benchmark_harness(output)
1117
+
1118
+ def codegen_sizevar(self, x: Expr) -> str:
1119
+ return self.expr_printer(V.graph.sizevars.simplify(x))
1120
+
1121
+ def codegen_tuple_access(self, basename: str, name: str, index: str) -> str:
1122
+ if config.abi_compatible:
1123
+ # in the abi_compatible mode, outputs are returned via arguments
1124
+ return name
1125
+ else:
1126
+ return f"std::get<{index}>({basename})"
1127
+
1128
+ def codegen_shape_tuple(self, shape: Tuple[Expr, ...]) -> str:
1129
+ parts = list(map(self.codegen_sizevar, shape))
1130
+ if len(parts) == 0:
1131
+ return "{}"
1132
+ if len(parts) == 1:
1133
+ return f"{{{parts[0]}, }}"
1134
+ return f"{{{', '.join(parts)}}}"
1135
+
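A quick standalone illustration of the brace formatting codegen_shape_tuple produces, re-implemented here on plain strings rather than sympy expressions:

def shape_tuple(parts):
    if len(parts) == 0:
        return "{}"
    if len(parts) == 1:
        return f"{{{parts[0]}, }}"
    return f"{{{', '.join(parts)}}}"

assert shape_tuple([]) == "{}"
assert shape_tuple(["64"]) == "{64, }"
assert shape_tuple(["64", "32"]) == "{64, 32}"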
1136
+ def codegen_dynamic_scalar(self, node):
1137
+ from .cpp import DTYPE_TO_ATEN, DTYPE_TO_CPP
1138
+
1139
+ (data,) = (t.codegen_reference() for t in node.inputs)
1140
+ if config.abi_compatible:
1141
+ dtype = node.inputs[0].get_dtype()
1142
+ dtype_str = str(dtype).split(".")[-1]
1143
+ self.writeline(f"{DTYPE_TO_CPP[dtype]} {node.sym};")
1144
+ self.writeline(f"aoti_torch_item_{dtype_str}({data}, &{node.sym});")
1145
+ # record in unbacked_symbol_decls so we won't generate a declaration of the symbol again
1146
+ self.unbacked_symbol_decls.add(str(node.sym))
1147
+ else:
1148
+ if node.is_bool:
1149
+ self.writeline(f"bool {node.sym} = {data}.item() ? 1 : 0;")
1150
+ else:
1151
+ convert_type = DTYPE_TO_ATEN[node.inputs[0].get_dtype()].replace(
1152
+ "at::k", "to"
1153
+ )
1154
+ self.writeline(f"auto {node.sym} = {data}.item().{convert_type}();")
1155
+
1156
+ def can_stack_allocate_buffer(self, buffer):
1157
+ return (
1158
+ self.allow_stack_allocation
1159
+ and buffer.get_device().type == "cpu"
1160
+ and self.can_prove_buffer_has_static_shape(buffer)
1161
+ and ir.is_contiguous_strides_for_shape(
1162
+ buffer.get_stride(), buffer.get_size()
1163
+ )
1164
+ )
1165
+
1166
+ def make_buffer_free(self, buffer):
1167
+ return (
1168
+ ""
1169
+ if isinstance(buffer.get_layout(), ir.MultiOutputLayout)
1170
+ or (V.graph.aot_mode and buffer.get_name() in self.stack_allocated_buffers)
1171
+ or (
1172
+ config.use_minimal_arrayref_interface
1173
+ and V.graph.aot_mode
1174
+ and buffer.get_name() in V.graph.graph_inputs
1175
+ )
1176
+ else f"{buffer.get_name()}.reset();"
1177
+ )
1178
+
1179
+ def make_free_by_names(self, names_to_del: List[str]):
1180
+ return " ".join(f"{name}.reset();" for name in names_to_del)
1181
+
1182
+ def codegen_exact_buffer_reuse(self, old_name: str, new_name: str, del_line: str):
1183
+ if config.abi_compatible:
1184
+ return f"auto {new_name} = std::move({old_name}); // reuse"
1185
+ else:
1186
+ return super().codegen_exact_buffer_reuse(old_name, new_name, del_line)
1187
+
1188
+ def generate_profiler_mark_wrapper_call(self, stack):
1189
+ self.wrapper_call.writeline(
1190
+ 'RECORD_FUNCTION("inductor_wrapper_call", c10::ArrayRef<c10::IValue>());'
1191
+ )
1192
+
1193
+ def write_triton_header_once(self):
1194
+ pass
1195
+
1196
+ def generate_start_graph(self):
1197
+ pass
1198
+
1199
+ def generate_end_graph(self):
1200
+ pass
1201
+
1202
+ def generate_inf_and_nan_checker(self, nodes):
1203
+ for buf in nodes.get_names():
1204
+ # TODO: Add buf name directly into check_inf_and_nan.
1205
+ self.writeline(
1206
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_check_inf_and_nan({buf}));"
1207
+ )
1208
+
1209
+ def codegen_device(self, device):
1210
+ if config.abi_compatible:
1211
+ self.used_cached_devices.add(device.type)
1212
+ return f"cached_torch_device_type_{device.type},{device.index if device.index else 0}"
1213
+ else:
1214
+ from .cpp import DEVICE_TO_ATEN
1215
+
1216
+ return (
1217
+ f"c10::Device({DEVICE_TO_ATEN[device.type]}, {device.index})"
1218
+ if device.index is not None
1219
+ else f"{DEVICE_TO_ATEN[device.type]}"
1220
+ )
1221
+
1222
+ def codegen_dtype(self, dtype):
1223
+ if config.abi_compatible:
1224
+ dtype_str = str(dtype).split(".")[-1]
1225
+ self.used_cached_dtypes.add(dtype_str)
1226
+ return f"cached_torch_dtype_{dtype_str}"
1227
+ else:
1228
+ from .cpp import DTYPE_TO_ATEN
1229
+
1230
+ return DTYPE_TO_ATEN[dtype]
1231
+
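A small sketch of the cached-dtype naming used in the ABI-compatible branch above; as finalize_prefix() shows earlier, every entry added to used_cached_dtypes later gets a matching CACHE_TORCH_DTYPE(...) line:

import torch

dtype = torch.float32
dtype_str = str(dtype).split(".")[-1]  # "float32"
assert f"cached_torch_dtype_{dtype_str}" == "cached_torch_dtype_float32"
# finalize_prefix() would then write: CACHE_TORCH_DTYPE(float32);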
1232
+ @functools.lru_cache(None)
1233
+ def codegen_int_array_var(
1234
+ self,
1235
+ int_array: str,
1236
+ writer=None,
1237
+ known_statically=False,
1238
+ graph=None, # for per-graph caching
1239
+ ):
1240
+ # Because the memory planning is done in two passes (see the implementation
1241
+ # of self.generate), the writeline behavior is different in the two passes.
1242
+ # As a result, the emitted int array declarations may appear in a later
1243
+ # position of the generated code, so the second pass codegen should not
1244
+ # reuse int array declarations generated in the first pass
1245
+ if writer is None:
1246
+ # The first pass codegen uses `self` as the writer
1247
+ writer = self
1248
+
1249
+ var = f"int_array_{next(self.int_array_id)}"
1250
+ if var not in self.declared_int_array_vars:
1251
+ self.declared_int_array_vars.add(var)
1252
+ if known_statically:
1253
+ writer.writeline(f"static constexpr int64_t {var}[] = {int_array};")
1254
+ else:
1255
+ writer.writeline(f"int64_t {var}[] = {int_array};")
1256
+ return var
1257
+
1258
+ def make_buffer_allocation(self, buffer):
1259
+ return self.make_allocation(
1260
+ buffer.get_name(),
1261
+ buffer.get_device(),
1262
+ buffer.get_dtype(),
1263
+ buffer.get_size(),
1264
+ buffer.get_stride(),
1265
+ buffer if self.can_stack_allocate_buffer(buffer) else None,
1266
+ )
1267
+
1268
+ def make_allocation(
1269
+ self, name, device, dtype, shape, stride, buffer_if_can_stack_allocate=None
1270
+ ):
1271
+ orig_stride = stride
1272
+ device_str = self.codegen_device(device)
1273
+ dtype_code = self.codegen_dtype(dtype)
1274
+ size = self.codegen_shape_tuple(shape)
1275
+ stride = self.codegen_shape_tuple(orig_stride)
1276
+ if config.abi_compatible:
1277
+ size_array_var = self.codegen_int_array_var(
1278
+ size,
1279
+ self.wrapper_call,
1280
+ known_statically=self.is_statically_known_list_of_ints(shape),
1281
+ graph=self.get_codegened_graph(),
1282
+ )
1283
+ stride_array_var = self.codegen_int_array_var(
1284
+ stride,
1285
+ self.wrapper_call,
1286
+ known_statically=self.is_statically_known_list_of_ints(orig_stride),
1287
+ graph=self.get_codegened_graph(),
1288
+ )
1289
+ device_type, device_id = device_str.split(",")
1290
+ device_idx = "this->device_idx_" if V.graph.aot_mode else device_id
1291
+ if buffer_if_can_stack_allocate is not None:
1292
+ from .cpp import DTYPE_TO_CPP
1293
+
1294
+ self.stack_allocated_buffers[name] = buffer_if_can_stack_allocate
1295
+ cpp_type = DTYPE_TO_CPP[dtype]
1296
+ numel = buffer_if_can_stack_allocate.get_numel()
1297
+ # Note: we don't zero storage because empty_strided doesn't zero either.
1298
+ self.wrapper_call.writeline(f"{cpp_type} {name}_storage[{numel}];")
1299
+ args = [
1300
+ f"{name}_storage",
1301
+ size_array_var,
1302
+ stride_array_var,
1303
+ device_type,
1304
+ device_idx,
1305
+ ]
1306
+ return f"ArrayRefTensor<{cpp_type}> {name}({', '.join(args)});"
1307
+
1308
+ args = [
1309
+ str(len(shape)),
1310
+ size_array_var,
1311
+ stride_array_var,
1312
+ dtype_code,
1313
+ device_type,
1314
+ device_idx,
1315
+ f"&{name}_handle",
1316
+ ]
1317
+
1318
+ self.wrapper_call.writeline(f"AtenTensorHandle {name}_handle;")
1319
+ self.wrapper_call.writeline(
1320
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_empty_strided({', '.join(args)}));"
1321
+ )
1322
+
1323
+ return f"RAIIAtenTensorHandle {name}({name}_handle);"
1324
+
1325
+ if V.graph.aot_mode and device_str.startswith("c10::Device("):
1326
+ tensor_device = f"{device_str.split(',')[0]}, this->device_idx_)"
1327
+ else:
1328
+ tensor_device = device_str
1329
+
1330
+ if device.type == "cpu":
1331
+ return f"at::Tensor {name} = at::detail::empty_strided_cpu({size}, {stride}, {dtype_code});"
1332
+ if device.type == "cuda":
1333
+ return (
1334
+ f"at::Tensor {name} = at::detail::empty_strided_cuda("
1335
+ f"{size}, {stride}, {dtype_code}, c10::DeviceType::CUDA);"
1336
+ )
1337
+ return (
1338
+ f"{self.declare}{name} = {self.namespace}empty_strided("
1339
+ f"{size}, {stride}, at::TensorOptions({tensor_device}).dtype({dtype_code})){self.ending}"
1340
+ )
1341
+
1342
+ def codegen_alloc_from_pool(self, name, offset, dtype, shape, stride) -> str:
1343
+ if config.abi_compatible:
1344
+ size = self.codegen_shape_tuple(shape)
1345
+ stride = self.codegen_shape_tuple(stride)
1346
+ tmp_name = f"tmp_tensor_handle_{next(self.tmp_tensor_id)}"
1347
+ args = [
1348
+ name,
1349
+ pexpr(offset), # bytes not numel
1350
+ self.codegen_dtype(dtype),
1351
+ str(len(shape)),
1352
+ self.codegen_int_array_var(
1353
+ size, self.wrapper_call, graph=self.get_codegened_graph()
1354
+ ),
1355
+ self.codegen_int_array_var(
1356
+ stride, self.wrapper_call, graph=self.get_codegened_graph()
1357
+ ),
1358
+ f"&{tmp_name}",
1359
+ ]
1360
+ self.wrapper_call.writeline(f"AtenTensorHandle {tmp_name};")
1361
+ self.wrapper_call.writeline(
1362
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch__alloc_from_pool({', '.join(args)}));"
1363
+ )
1364
+ return f"RAIIAtenTensorHandle({tmp_name})"
1365
+
1366
+ return "alloc_from_pool({})".format(
1367
+ ", ".join(
1368
+ [
1369
+ name,
1370
+ pexpr(offset), # bytes not numel
1371
+ self.codegen_dtype(dtype),
1372
+ self.codegen_shape_tuple(shape),
1373
+ self.codegen_shape_tuple(stride),
1374
+ ]
1375
+ )
1376
+ )
1377
+
1378
+ def codegen_reinterpret_view(
1379
+ self, data, size_list, stride_list, offset, writer
1380
+ ) -> str:
1381
+ dim = str(len(size_list))
1382
+ size = self.codegen_shape_tuple(size_list)
1383
+ stride = self.codegen_shape_tuple(stride_list)
1384
+ offset = self.codegen_sizevar(offset)
1385
+
1386
+ if config.abi_compatible:
1387
+ tmp_name = f"tmp_tensor_handle_{next(self.tmp_tensor_id)}"
1388
+ # Because the memory planning is done in two passes (see the implementation
1389
+ # of self.generate), the writeline behavior is different in the two passes.
1390
+ if writer is None:
1391
+ writer = self
1392
+
1393
+ args = [
1394
+ f"{data.get_name()}",
1395
+ dim,
1396
+ self.codegen_int_array_var(
1397
+ size,
1398
+ writer,
1399
+ known_statically=self.is_statically_known_list_of_ints(size_list),
1400
+ graph=self.get_codegened_graph(),
1401
+ ),
1402
+ self.codegen_int_array_var(
1403
+ stride,
1404
+ writer,
1405
+ known_statically=self.is_statically_known_list_of_ints(stride_list),
1406
+ graph=self.get_codegened_graph(),
1407
+ ),
1408
+ offset,
1409
+ ]
1410
+
1411
+ def gen_reinterpret_call(writer, args):
1412
+ writer.writeline(
1413
+ f"auto {tmp_name} = reinterpret_tensor_wrapper({', '.join(args)});"
1414
+ )
1415
+
1416
+ if (
1417
+ self.can_stack_allocate_buffer(data)
1418
+ and self.is_statically_known_list_of_ints(size_list)
1419
+ and self.is_statically_known_list_of_ints(stride_list)
1420
+ and ir.is_contiguous_strides_for_shape(stride_list, size_list)
1421
+ ):
1422
+ gen_reinterpret_call(writer, args)
1423
+ return tmp_name
1424
+
1425
+ gen_reinterpret_call(writer, args)
1426
+
1427
+ # NB, the return handle here represents a temporary tensor, which will be automatically
1428
+ # released.
1429
+ # Here's a sample usage in the cpp wrapper code:
1430
+ # ```
1431
+ # aoti_torch_addmm_out(
1432
+ # buf1,
1433
+ # arg1_1,
1434
+ # RAIIAtenTensorHandle(tmp_tensor_handle_0),
1435
+ # buf0,
1436
+ # 1L,
1437
+ # 1L));
1438
+ # ```
1439
+ # RAIIAtenTensorHandle(tmp_tensor_handle_0) will be released after the call to addmm_out.
1440
+ # This could be problematic when it's used in a different pattern, for example:
1441
+ # ````
1442
+ # AtenTensorHandle tensor_args[] = {RAIIAtenTensorHandle(tmp_tensor_handle_2), buf5, buf6};
1443
+ # aoti_torch_proxy_executor_call_function(..., tensor_args);
1444
+ # ````
1445
+ # RAIIAtenTensorHandle(tmp_tensor_handle_2) will be invalid when it's used in the latter
1446
+ # kernel call.
1447
+ #
1448
+ # This is solved by updating the proxy_executor invocation to
1449
+ # ```
1450
+ # aoti_torch_proxy_executor_call_function(...,
1451
+ # std::vector<AtenTensorHandle>{
1452
+ # RAIIAtenTensorHandle(tmp_tensor_handle_2), buf5, buf6
1453
+ # }.data()
1454
+ # );
1455
+ # ```
1456
+ return f"wrap_with_raii_handle_if_needed({tmp_name})"
1457
+ else:
1458
+ args = [data.get_name(), size, stride, offset]
1459
+ return f"reinterpret_tensor({', '.join(args)})"
1460
+
1461
+ def codegen_device_copy(self, src, dst):
1462
+ if config.abi_compatible:
1463
+ self.writeline(
1464
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_tensor_copy_(expensive_copy_to_tensor_if_needed({src}), {dst}));"
1465
+ )
1466
+ else:
1467
+ self.writeline(f"{dst}.copy_({src});")
1468
+
1469
+ def codegen_multi_output(self, name, value):
1470
+ # in the abi_compatible mode, outputs are retrieved by passing
1471
+ # output pointers, so we skip its codegen here.
1472
+ if not config.abi_compatible:
1473
+ super().codegen_multi_output(name, value)
1474
+
1475
+ def codegen_subgraph_prefix(self, subgraph, outer_inputs, outer_outputs):
1476
+ for inner_input, outer_input in zip(subgraph.graph.graph_inputs, outer_inputs):
1477
+ if config.abi_compatible:
1478
+ # in ABI-compatible mode, we copy the underlying at::Tensor of the conditional
1479
+ # input (outer_input) into another at::Tensor to be used as a subgraph input
1480
+ # (inner_input) in the nested scope. we can't std::move here, as the codegened
1481
+ # outer input may be an expression / rvalue (e.g., reinterpret_view(x)), so we
1482
+ # can't necessarily std::move it back to the origin (x).
1483
+ self.writeline(f"AtenTensorHandle {inner_input}_handle;")
1484
+ self.writeline(
1485
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_assign_tensors_out({outer_input}, &{inner_input}_handle));"
1486
+ )
1487
+ self.writeline(
1488
+ f"RAIIAtenTensorHandle {inner_input}({inner_input}_handle);"
1489
+ )
1490
+ else:
1491
+ self.writeline(
1492
+ f"{self.declare}{inner_input} = {outer_input}{self.ending}"
1493
+ )
1494
+
1495
+ def codegen_subgraph_suffix(self, subgraph, outer_inputs, outer_outputs):
1496
+ for inner_output, outer_output in zip(
1497
+ subgraph.graph.graph_outputs, outer_outputs
1498
+ ):
1499
+ src = inner_output.codegen_reference()
1500
+ if config.abi_compatible:
1501
+ # in ABI-compatible mode, we need to std::move subgraph output (inner_output)
1502
+ # to the conditional output (outer_output), as RAIIAtenTensorHandle's copy
1503
+ # constructor is deleted.
1504
+ src = f"std::move({src})"
1505
+ self.writeline(f"{outer_output} = {src}{self.ending}")
1506
+
1507
+ def codegen_conditional(self, conditional):
1508
+ name = conditional.get_name()
1509
+ outer_inputs = [f"{buf.codegen_reference()}" for buf in conditional.operands]
1510
+ if config.abi_compatible:
1511
+ outer_outputs = []
1512
+ for out in conditional.outputs:
1513
+ # in ABI-compatible mode, ir.MultiOutput is not codegened,
1514
+ # hence pre-declare output variables directly and separately
1515
+ self.writeline(f"RAIIAtenTensorHandle {out.get_name()};")
1516
+ outer_outputs.append(out.get_name())
1517
+ predicate = f"{conditional.predicate.get_name()}_scalar"
1518
+ self.writeline(f"bool {predicate};")
1519
+ # in ABI-compatible mode, we need to use the ABI shim function
1520
+            # to extract a C++ bool from the underlying scalar bool Tensor
1521
+ self.writeline(
1522
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_item_bool({conditional.predicate.codegen_reference()}, &{predicate}));"
1523
+ )
1524
+ else:
1525
+ # in non-ABI-compatible mode, we can codegen the conditional outputs
1526
+ # as array of at::Tensor instances, as the ir.MultiOutput is codegened
1527
+ outer_outputs = [f"{name}[{i}]" for i in range(len(conditional.outputs))]
1528
+ self.writeline(f"at::Tensor {name}[{len(conditional.outputs)}];")
1529
+ predicate = f"{conditional.predicate.codegen_reference()}.item<bool>()"
1530
+
1531
+ self.writeline(f"if ({predicate}) {{")
1532
+ self.writeline(EnterSubgraphLine(self, conditional.true_subgraph.graph))
1533
+ self.codegen_subgraph(conditional.true_subgraph, outer_inputs, outer_outputs)
1534
+ self.writeline(ExitSubgraphLine(self))
1535
+ self.writeline("} else {")
1536
+ self.writeline(EnterSubgraphLine(self, conditional.false_subgraph.graph))
1537
+ self.codegen_subgraph(conditional.false_subgraph, outer_inputs, outer_outputs)
1538
+ self.writeline(ExitSubgraphLine(self))
1539
+ self.writeline("}")
1540
+
1541
+ def generate_extern_kernel_args_decl_if_needed(
1542
+ self, op_overload, raw_args, output_args
1543
+ ):
1544
+ arg_types = [x.real_type for x in op_overload._schema.arguments]
1545
+ return_types = [x.type for x in op_overload._schema.returns]
1546
+
1547
+ new_tensor_args = []
1548
+ new_int_args = []
1549
+
1550
+ def fill_args(arg, arg_type):
1551
+ static_arg_types = (
1552
+ torch.FloatType,
1553
+ torch.BoolType,
1554
+ torch.StringType,
1555
+ torch.Type,
1556
+ torch.DeviceObjType,
1557
+ )
1558
+ inductor_tensor_buffers = (
1559
+ ir.Buffer,
1560
+ ir.ReinterpretView,
1561
+ )
1562
+
1563
+ if isinstance(arg_type, torch.TensorType):
1564
+ assert isinstance(arg, inductor_tensor_buffers), f"got {type(arg)}"
1565
+ new_tensor_args.append(f"{arg.codegen_reference()}")
1566
+ elif isinstance(arg_type, torch.IntType):
1567
+ # int
1568
+ new_int_args.append(str(arg))
1569
+ elif isinstance(arg_type, torch.SymIntType):
1570
+ # SymInt
1571
+ expr = arg.node.expr if isinstance(arg, torch.SymInt) else arg
1572
+ new_int_args.append(self.expr_printer(expr))
1573
+ elif isinstance(arg_type, torch.NumberType):
1574
+ # Scalar of type int
1575
+ assert isinstance(arg, (int, float, bool))
1576
+ # Only treat int Scalar as dynamic
1577
+ if isinstance(arg, int):
1578
+ new_int_args.append(str(arg))
1579
+ elif isinstance(arg_type, torch.ListType):
1580
+ assert isinstance(arg, (list, tuple))
1581
+
1582
+ # List[Tensor]
1583
+ if isinstance(arg_type.getElementType(), torch.TensorType):
1584
+ new_tensor_args.extend([f"{a.codegen_reference()}" for a in arg])
1585
+ # List[Optional[Tensor]]
1586
+ elif isinstance(
1587
+ arg_type.getElementType(), torch.OptionalType
1588
+ ) and isinstance(
1589
+ arg_type.getElementType().getElementType(), torch.TensorType
1590
+ ):
1591
+ new_tensor_args.extend(
1592
+ [f"{a.codegen_reference()}" for a in arg if a is not None]
1593
+ )
1594
+ # List[int]
1595
+ elif isinstance(arg_type.getElementType(), torch.IntType):
1596
+ new_int_args.extend([str(a) for a in arg])
1597
+ # List[SymInt]
1598
+ elif isinstance(arg_type.getElementType(), torch.SymIntType):
1599
+ expressions = [
1600
+ a.node.expr if isinstance(a, torch.SymInt) else a for a in arg
1601
+ ]
1602
+ new_int_args.extend(
1603
+ [self.expr_printer(expr) for expr in expressions]
1604
+ )
1605
+ # List[Scalar]
1606
+ elif isinstance(arg_type.getElementType(), torch.NumberType):
1607
+ # Only treat int Scalar as dynamic
1608
+ is_int_type = [isinstance(a, int) for a in arg]
1609
+ if any(is_int_type):
1610
+ assert all(
1611
+ is_int_type
1612
+ ), "AOTInductor only supports int scalars of the same type"
1613
+ new_int_args.extend([str(a) for a in arg])
1614
+ else:
1615
+ assert isinstance(
1616
+ arg_type.getElementType(), static_arg_types # type: ignore[arg-type]
1617
+ ), f"Fall through arguments must be one of static_arg_types, got {type(arg_type)}"
1618
+ else:
1619
+ assert isinstance(
1620
+ arg_type, static_arg_types # type: ignore[arg-type]
1621
+ ), f"Fall through arguments must be one of static_arg_types, got {type(arg_type)}"
1622
+
1623
+ for arg, arg_type in zip(raw_args, arg_types):
1624
+ if arg is not None:
1625
+ if isinstance(arg_type, torch.OptionalType):
1626
+ fill_args(arg, arg_type.getElementType())
1627
+ else:
1628
+ fill_args(arg, arg_type)
1629
+
1630
+ def fill_output_arg(arg, return_type):
1631
+ if isinstance(return_type, torch.TensorType):
1632
+ self.writeline(f"AtenTensorHandle {arg}_handle; // output buffer")
1633
+ self.writeline(
1634
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&{arg}_handle));"
1635
+ )
1636
+ self.writeline(f"RAIIAtenTensorHandle {arg}({arg}_handle);")
1637
+ new_tensor_args.append(f"{arg}")
1638
+ elif isinstance(return_type, torch.SymIntType):
1639
+ raise NotImplementedError("NYI support for return type: SymInt")
1640
+ elif isinstance(return_type, torch.ListType) and isinstance(
1641
+ return_type.getElementType(), torch.SymIntType
1642
+ ):
1643
+ raise NotImplementedError("NYI support for return type: List[SymInt]")
1644
+ else:
1645
+ raise AssertionError(f"Unsupported return type found: {return_type}")
1646
+
1647
+ # TODO: Only support tensor(s) returns for now, SymInt is not implemented yet
1648
+ for return_type in return_types:
1649
+ if isinstance(return_type, (torch.TensorType)):
1650
+ pass
1651
+ elif isinstance(return_type, torch.OptionalType):
1652
+ assert isinstance(return_type.getElementType(), torch.TensorType)
1653
+ elif isinstance(return_type, torch.ListType):
1654
+ assert isinstance(return_type.getElementType(), torch.TensorType)
1655
+ else:
1656
+ raise NotImplementedError(
1657
+ f"return type {return_type} is not yet supported."
1658
+ )
1659
+
1660
+ for output_arg in output_args:
1661
+ assert output_arg is not None, "Optional return types are not yet supported"
1662
+ if isinstance(output_arg, (list, tuple)):
1663
+ for out in output_arg:
1664
+ fill_output_arg(out, torch.TensorType.get())
1665
+ else:
1666
+ fill_output_arg(output_arg, torch.TensorType.get())
1667
+
1668
+ return new_tensor_args, new_int_args
1669
+
1670
+ def generate_extern_kernel_alloc_and_find_schema_if_needed(
1671
+ self,
1672
+ name,
1673
+ kernel,
1674
+ codegen_args,
1675
+ cpp_op_schema,
1676
+ cpp_kernel_key,
1677
+ cpp_kernel_overload_name="",
1678
+ op_overload=None,
1679
+ raw_args=None,
1680
+ outputs=None,
1681
+ ):
1682
+ if config.is_fbcode():
1683
+ assert op_overload is not None
1684
+ assert raw_args is not None
1685
+ assert outputs is not None
1686
+
1687
+ return self.generate_extern_kernel_alloc_and_find_schema_if_needed_fbcode(
1688
+ name,
1689
+ cpp_kernel_key,
1690
+ op_overload,
1691
+ raw_args,
1692
+ outputs,
1693
+ )
1694
+ else:
1695
+ return self.generate_extern_kernel_alloc_and_find_schema_if_needed_oss(
1696
+ name,
1697
+ kernel,
1698
+ codegen_args,
1699
+ cpp_op_schema,
1700
+ cpp_kernel_key,
1701
+ cpp_kernel_overload_name,
1702
+ )
1703
+
1704
+ def generate_extern_kernel_alloc_and_find_schema_if_needed_oss(
1705
+ self,
1706
+ name,
1707
+ kernel,
1708
+ codegen_args,
1709
+ cpp_op_schema,
1710
+ cpp_kernel_key,
1711
+ cpp_kernel_overload_name="",
1712
+ ):
1713
+ if cpp_kernel_key not in self.extern_call_ops:
1714
+ self.writeline(
1715
+ f"static auto op_{cpp_kernel_key} = c10::Dispatcher::singleton()"
1716
+ )
1717
+ self.writeline(
1718
+ f'\t.findSchemaOrThrow("{kernel}", "{cpp_kernel_overload_name}")'
1719
+ )
1720
+ self.writeline(f"\t.typed<{cpp_op_schema}>();")
1721
+ self.extern_call_ops.add(cpp_kernel_key)
1722
+
1723
+ self.writeline(
1724
+ f"auto {name} = op_{cpp_kernel_key}.call({', '.join(codegen_args)});"
1725
+ )
1726
+
1727
+ def generate_extern_kernel_alloc_and_find_schema_if_needed_fbcode(
1728
+ self,
1729
+ name,
1730
+ cpp_kernel_key,
1731
+ op_overload,
1732
+        raw_args,  # contains both args and flattened kwargs
1733
+ outputs,
1734
+ ):
1735
+ def extract_output_name(out):
1736
+ assert out is not None, "None, i.e. optional output is not supported"
1737
+ if isinstance(out, ir.MultiOutput):
1738
+ return out.get_name()
1739
+ elif isinstance(out, (list, tuple)):
1740
+ return type(out)(extract_output_name(o) for o in out)
1741
+ else:
1742
+ raise AssertionError(f"Unexpected output: {type(out)}")
1743
+
1744
+ # output_args has the same pytree structure as outputs
1745
+ output_args = extract_output_name(outputs)
1746
+ if isinstance(output_args, str):
1747
+ output_args = [output_args]
1748
+
1749
+ (
1750
+ tensor_call_args,
1751
+ int_call_args,
1752
+ ) = self.generate_extern_kernel_args_decl_if_needed(
1753
+ op_overload, raw_args, output_args
1754
+ )
1755
+
1756
+ tensor_call_args_str = ", ".join(tensor_call_args)
1757
+ int_call_args_str = ", ".join(int_call_args)
1758
+
1759
+ extern_kernel_node_index = len(V.graph.extern_kernel_nodes) - 1
1760
+
1761
+ self.writeline(
1762
+ f"aoti_torch_proxy_executor_call_function(proxy_executor, "
1763
+ f"{extern_kernel_node_index}, "
1764
+ f"{len(int_call_args)}, "
1765
+ f"std::vector<int64_t>{{{int_call_args_str}}}.data(), "
1766
+ f"{len(tensor_call_args)}, "
1767
+ f"std::vector<AtenTensorHandle>{{{tensor_call_args_str}}}.data());"
1768
+ )
1769
+
1770
+ self.extern_call_ops.add(cpp_kernel_key)
1771
+
1772
+ def generate_reset_kernel_saved_flags(self):
1773
+ pass
1774
+
1775
+ def generate_save_uncompiled_kernels(self):
1776
+ pass
1777
+
1778
+ def val_to_cpp_arg_str(self, type_, val, is_legacy_abi) -> str:
1779
+ if (
1780
+ config.abi_compatible
1781
+ and not is_legacy_abi
1782
+ and isinstance(type_, torch.OptionalType)
1783
+ ):
1784
+ if val is None:
1785
+ return "0" # nullptr is not available in C
1786
+ if not isinstance(type_.getElementType(), torch.TensorType):
1787
+ var_name = f"var_{next(self.arg_var_id)}"
1788
+ self.writeline(f"auto {var_name} = {self.val_to_arg_str(val)};")
1789
+ return f"&{var_name}"
1790
+ elif config.c_shim_version == "2":
1791
+ # Similar to other data type, use pointer to denote optional tensor arg in v2 C shim
1792
+ base_handle = self.val_to_arg_str(val)
1793
+ if "wrap_with_raii_handle_if_needed" in base_handle:
1794
+ # wrap_with_raii_handle_if_needed creates a temp RAIIAtenTensorHandle, so we need to
1795
+ # explicitly store it. Otherwise, it will be destroyed before the fallback kernel call.
1796
+ tmp_var_name = f"var_{next(self.arg_var_id)}"
1797
+ self.writeline(
1798
+ f"RAIIAtenTensorHandle {tmp_var_name} = {base_handle};"
1799
+ )
1800
+ base_handle = tmp_var_name
1801
+ var_name = f"var_{next(self.arg_var_id)}"
1802
+ self.writeline(f"AtenTensorHandle {var_name} = {base_handle}.get();")
1803
+ return f"&{var_name}"
1804
+
1805
+ return self.val_to_arg_str(val)
1806
+
1807
+ def val_to_arg_str(self, val) -> str:
1808
+ if val is None:
1809
+ # When None is passed as an argument, it represents an optional that does not contain a value.
1810
+ if config.abi_compatible:
1811
+ return "0" # nullptr is not available in C
1812
+ return "c10::nullopt"
1813
+ elif isinstance(val, bool):
1814
+ if config.abi_compatible:
1815
+ return "1" if val else "0"
1816
+ else:
1817
+ return "true" if val else "false"
1818
+ elif isinstance(val, int):
1819
+ # uint64_t is long on Linux, but long long on MacOS
1820
+ return f"{val}LL" if sys.platform == "darwin" else f"{val}L"
1821
+ elif isinstance(val, str):
1822
+ return f'"{val}"'
1823
+ elif isinstance(
1824
+ val, (ir.Buffer, ir.ReinterpretView, ir.StorageBox, ir.TensorBox)
1825
+ ):
1826
+ return val.codegen_reference()
1827
+ elif isinstance(val, torch.device):
1828
+ return self.codegen_device(val)
1829
+ elif isinstance(val, torch.dtype):
1830
+ return self.codegen_dtype(val)
1831
+ elif isinstance(val, float) and val in [float("inf"), float("-inf")]:
1832
+ if val == float("inf"):
1833
+ return "std::numeric_limits<float>::infinity()"
1834
+ else:
1835
+ return "-std::numeric_limits<float>::infinity()"
1836
+ elif isinstance(val, (list, tuple)):
1837
+ # FIXME handle embedded optional types?
1838
+ result = f"{{{', '.join(self.val_to_arg_str(x) for x in val)}}}"
1839
+ if config.abi_compatible:
1840
+ static = self.is_statically_known_list_of_ints(val)
1841
+ # Need to pass the array length because we can't use std::vector
1842
+ int_var_array = self.codegen_int_array_var(
1843
+ result,
1844
+ known_statically=static,
1845
+ graph=self.get_codegened_graph(),
1846
+ )
1847
+ return f"{int_var_array}, {len(val)}"
1848
+ else:
1849
+ return result
1850
+ else:
1851
+ return repr(val)
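
The subtlest part of cpp_wrapper_cpu.py above is the temporary-lifetime comment in codegen_reinterpret_view: an RAIIAtenTensorHandle temporary passed directly into a call stays alive until the end of that full expression, but stashing its raw handle in a named array and using it in a later statement leaves a dangling handle. The stand-alone C++ sketch below illustrates the same lifetime rule with toy stand-ins; RaiiHandle and call_function are invented for illustration and are not the AOTI runtime types.

```
#include <cstddef>
#include <cstdio>
#include <vector>

// Toy stand-in for RAIIAtenTensorHandle: owns a heap int, frees it on destruction.
struct RaiiHandle {
    explicit RaiiHandle(int v) : p(new int(v)) {}
    ~RaiiHandle() { delete p; }
    int* get() const { return p; }  // raw, non-owning view (like AtenTensorHandle)
    int* p;
};

// Toy stand-in for aoti_torch_proxy_executor_call_function: reads through raw handles.
void call_function(int* const* handles, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        std::printf("arg %zu = %d\n", i, *handles[i]);
    }
}

int main() {
    RaiiHandle buf5(5), buf6(6);

    // Problematic pattern from the comment in codegen_reinterpret_view: the temporary
    // RaiiHandle(2) is destroyed at the end of *this* statement, so handles[0] would
    // dangle by the time call_function runs on the next line.
    //
    //   int* handles[] = {RaiiHandle(2).get(), buf5.get(), buf6.get()};
    //   call_function(handles, 3);  // use-after-free
    //
    // Fix used by the generated code: build the argument array as a std::vector
    // temporary inside the same full expression as the call, so RaiiHandle(2) stays
    // alive until call_function returns.
    call_function(
        std::vector<int*>{RaiiHandle(2).get(), buf5.get(), buf6.get()}.data(), 3);
    return 0;
}
```

The same reasoning shows up again in val_to_cpp_arg_str, which stores a wrap_with_raii_handle_if_needed result in a named RAIIAtenTensorHandle before taking its raw handle's address, so the temporary is not destroyed before the fallback kernel call.
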
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_wrapper_cuda.py ADDED
@@ -0,0 +1,328 @@
1
+ import functools
2
+ import os
3
+ from itertools import chain, count
4
+ from typing import Any, List, Optional, TYPE_CHECKING
5
+
6
+ import sympy
7
+
8
+ from torch._inductor.codecache import get_cpp_wrapper_cubin_path_name
9
+
10
+ from .. import config
11
+ from ..codecache import CudaKernelParamCache
12
+ from ..triton_heuristics import grid as default_grid
13
+ from ..virtualized import V
14
+ from .cpp_wrapper_cpu import CppWrapperCpu
15
+ from .wrapper import SymbolicCallArg
16
+
17
+ if TYPE_CHECKING:
18
+ from ..graph import GraphLowering
19
+
20
+
21
+ def is_int(s: str) -> bool:
22
+ # Cpp code gen adds L at the end of ints
23
+ # Lets remove it for checking whether we have an int or not
24
+    # Let's remove it for checking whether we have an int or not
25
+ s = s[:-1]
26
+ try:
27
+ int(s)
28
+ except ValueError:
29
+ return False
30
+ except TypeError:
31
+ return False
32
+ return True
33
+
34
+
35
+ def is_float(s: str) -> bool:
36
+ try:
37
+ float(s)
38
+ except ValueError:
39
+ return False
40
+ return True
41
+
42
+
43
+ class CppWrapperCuda(CppWrapperCpu):
44
+ """
45
+ Generates cpp wrapper for running on GPU and calls CUDA kernels
46
+ """
47
+
48
+ def __init__(self):
49
+ self.device = "cuda"
50
+ super().__init__()
51
+ self.grid_id = count()
52
+ self.cuda = True
53
+
54
+ def write_header(self):
55
+ if V.graph.is_const_graph:
56
+ # We do not write header for constant graph, it will be written by main module.
57
+ return
58
+
59
+ super().write_header()
60
+
61
+ self.header.splice("#include <filesystem>")
62
+ if config.abi_compatible:
63
+ self.header.splice(
64
+ "#include <torch/csrc/inductor/aoti_runtime/utils_cuda.h>"
65
+ )
66
+ else:
67
+ self.header.splice(
68
+ """
69
+ #include <c10/cuda/CUDAGuard.h>
70
+ #include <c10/cuda/CUDAStream.h>
71
+ #include <ATen/cuda/EmptyTensor.h>
72
+ """
73
+ )
74
+
75
+ self.header.splice(
76
+ """
77
+ #define CUDA_DRIVER_CHECK(EXPR) \\
78
+ do { \\
79
+ CUresult code = EXPR; \\
80
+ const char *msg; \\
81
+ cuGetErrorString(code, &msg); \\
82
+ if (code != CUDA_SUCCESS) { \\
83
+ throw std::runtime_error( \\
84
+ std::string("CUDA driver error: ") + \\
85
+ std::string(msg)); \\
86
+ } \\
87
+ } while (0);
88
+
89
+ namespace {
90
+
91
+ struct Grid {
92
+ Grid(uint32_t x, uint32_t y, uint32_t z)
93
+ : grid_x(x), grid_y(y), grid_z(z) {}
94
+ uint32_t grid_x;
95
+ uint32_t grid_y;
96
+ uint32_t grid_z;
97
+
98
+ bool is_non_zero() {
99
+ return grid_x > 0 && grid_y > 0 && grid_z > 0;
100
+ }
101
+ };
102
+
103
+ } // anonymous namespace
104
+
105
+ static inline CUfunction loadKernel(
106
+ std::string filePath,
107
+ const std::string &funcName,
108
+ uint32_t sharedMemBytes,
109
+ const std::optional<std::string> &cubinDir = std::nullopt) {
110
+ if (cubinDir) {
111
+ std::filesystem::path p1{*cubinDir};
112
+ std::filesystem::path p2{filePath};
113
+ filePath = (p1 / p2.filename()).string();
114
+ }
115
+
116
+ CUmodule mod;
117
+ CUfunction func;
118
+ CUDA_DRIVER_CHECK(cuModuleLoad(&mod, filePath.c_str()));
119
+ CUDA_DRIVER_CHECK(cuModuleGetFunction(&func, mod, funcName.c_str()));
120
+ if (sharedMemBytes > 0) {
121
+ CUDA_DRIVER_CHECK(cuFuncSetAttribute(
122
+ func,
123
+ CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES,
124
+ sharedMemBytes
125
+ ))
126
+ }
127
+ return func;
128
+ }
129
+
130
+ static inline void launchKernel(
131
+ CUfunction func,
132
+ uint32_t gridX,
133
+ uint32_t gridY,
134
+ uint32_t gridZ,
135
+ uint32_t numWarps,
136
+ uint32_t sharedMemBytes,
137
+ void* args[],
138
+ cudaStream_t stream) {
139
+ CUDA_DRIVER_CHECK(cuLaunchKernel(
140
+ func, gridX, gridY, gridZ, 32*numWarps, 1, 1, sharedMemBytes, stream, args, nullptr
141
+ ));
142
+ }
143
+ """
144
+ )
145
+
146
+ def write_get_raw_stream(self, index, graph=None):
147
+ name = f"stream{index}"
148
+ self.writeline(f"cudaStream_t {name};")
149
+ self.writeline(
150
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_current_cuda_stream({index}, (void**)&{name}));"
151
+ )
152
+ return name
153
+
154
+ def define_kernel(
155
+ self, name: str, kernel: str, metadata: Optional[str] = None, cuda=True
156
+ ):
157
+ if not cuda:
158
+ return super().define_kernel(name, kernel, metadata, cuda)
159
+
160
+ def generate(self, is_inference):
161
+ self.prefix.writeline("\n")
162
+ if not V.graph.aot_mode:
163
+ for kernel in chain(
164
+ self.src_to_kernel.values(),
165
+ [entry[0] for entry in self.user_defined_kernel_cache.values()],
166
+ ):
167
+ self.prefix.writeline(f"static CUfunction {kernel} = nullptr;")
168
+ self.prefix.writeline("\n")
169
+ return super().generate(is_inference)
170
+
171
+ @functools.lru_cache(None)
172
+ def generate_load_kernel_once(
173
+ self,
174
+ name: str,
175
+ mangled_name: str,
176
+ cubin_path: str,
177
+ shared_mem: int,
178
+ graph: "GraphLowering", # for per-graph caching
179
+ ):
180
+ if V.graph.aot_mode:
181
+ self.writeline(f"if (kernels.{name} == nullptr) {{")
182
+ self.writeline(
183
+ f""" kernels.{name} = loadKernel("{cubin_path}", "{mangled_name}", {shared_mem}, this->cubin_dir_);"""
184
+ )
185
+ self.writeline("}")
186
+ else:
187
+ self.writeline(f"if ({name} == nullptr) {{")
188
+ self.writeline(
189
+ f""" {name} = loadKernel("{cubin_path}", "{mangled_name}", {shared_mem});"""
190
+ )
191
+ self.writeline("}")
192
+
193
+ def generate_args_decl(self, call_args):
194
+ dynamic_symbols = V.graph.sizevars.free_symbols()
195
+ # TODO: only works for constant now, need type info
196
+ new_args = []
197
+ for arg in call_args:
198
+ var_name = f"var_{next(self.arg_var_id)}"
199
+ if isinstance(arg, (sympy.Integer, sympy.Symbol, SymbolicCallArg)):
200
+ self.writeline(f"auto {var_name} = {arg};")
201
+ elif isinstance(arg, sympy.Expr):
202
+ self.writeline(f"auto {var_name} = {self.expr_printer(arg)};")
203
+ elif is_int(arg):
204
+ self.writeline(f"int {var_name} = {arg};")
205
+ elif is_float(arg):
206
+ self.writeline(f"float {var_name} = {arg};")
207
+ elif any(str(arg) == s.name for s in dynamic_symbols):
208
+ self.writeline(f"auto {var_name} = {arg};")
209
+ elif arg == "nullptr":
210
+ self.writeline(f"auto {var_name} = nullptr;")
211
+ elif arg == "c10::nullopt":
212
+ self.writeline(f"auto {var_name} = c10::nullopt;")
213
+ else:
214
+ if config.abi_compatible:
215
+ self.writeline(f"CUdeviceptr {var_name};")
216
+ self.writeline(
217
+ f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_data_ptr({arg}, reinterpret_cast<void**>(&{var_name})));"
218
+ )
219
+ else:
220
+ self.writeline(
221
+ f"CUdeviceptr {var_name} = reinterpret_cast<CUdeviceptr>({arg}.data_ptr());"
222
+ )
223
+ new_args.append(f"&{var_name}")
224
+
225
+ return ", ".join(new_args)
226
+
227
+ def generate_default_grid(self, name: str, grid: List[Any], cuda: bool = True):
228
+ """
229
+ Generate grid configs for launching a CUDA kernel using the grid
230
+ function from triton_heuristics.
231
+ """
232
+ if not cuda:
233
+ return grid
234
+ assert isinstance(grid, list), f"expected {grid=} to be a list"
235
+ grid = [e.inner_expr if isinstance(e, SymbolicCallArg) else e for e in grid]
236
+ grid_fn = default_grid(*grid)
237
+ params = CudaKernelParamCache.get(name)
238
+ assert (
239
+ params is not None
240
+ ), f"cuda kernel parameters for {name} should already exist at this moment, only found {CudaKernelParamCache.get_keys()}"
241
+ block_cfg = {
242
+ "XBLOCK": params["x_block"],
243
+ "YBLOCK": params["y_block"],
244
+ "ZBLOCK": params["z_block"],
245
+ }
246
+ return grid_fn(block_cfg)
247
+
248
+ def generate_kernel_call(
249
+ self,
250
+ name,
251
+ call_args,
252
+ grid=None,
253
+ device_index=None,
254
+ cuda=True,
255
+ triton=True,
256
+ arg_types=None,
257
+ grid_fn: str = "grid",
258
+ triton_meta=None,
259
+ ):
260
+ if not cuda:
261
+ # Even in CppWrapperCuda, we may see cpp kernels
262
+ return super().generate_kernel_call(
263
+ name, call_args, grid, device_index, cuda, triton, arg_types
264
+ )
265
+
266
+ params = CudaKernelParamCache.get(name)
267
+ assert (
268
+ params is not None
269
+ ), f"cuda kernel parameters for {name} should already exist at this moment"
270
+ mangled_name = params.get("mangled_name", None)
271
+ assert mangled_name is not None, "missing mangled_name"
272
+ cubin_path = params.get(get_cpp_wrapper_cubin_path_name(), None)
273
+ assert cubin_path is not None and os.path.exists(
274
+ cubin_path
275
+ ), f"cubin file should already exist at this moment: {cubin_path}"
276
+ shared_mem = params.get("shared_mem", 0)
277
+
278
+ self.generate_load_kernel_once(
279
+ name, mangled_name, cubin_path, shared_mem, V.graph
280
+ )
281
+
282
+ # args with value 1 are added into equal_to_1 and constants
283
+ # in triton_meta (in the Python codegen) which makes them
284
+ # inlined in the PTX and compiled CUBIN
285
+ if (
286
+ triton_meta is not None
287
+ and "configs" in triton_meta
288
+ and triton_meta["configs"]
289
+ ):
290
+ equal_to_1 = triton_meta["configs"][0].equal_to_1
291
+ call_args = [arg for i, arg in enumerate(call_args) if i not in equal_to_1]
292
+
293
+ call_args = self.generate_args_decl(call_args)
294
+ kernel_args_var = f"kernel_args_var_{next(self.kernel_callsite_id)}"
295
+ self.writeline(f"void* {kernel_args_var}[] = {{{call_args}}};")
296
+ stream = (
297
+ "stream"
298
+ if V.graph.aot_mode
299
+ else self.write_get_raw_stream(device_index, V.graph)
300
+ )
301
+ grid_name = f"{name}_grid_{next(self.grid_id)}"
302
+ assert isinstance(
303
+ grid, (list, tuple)
304
+ ), f"expected grid to be a list or tuple but got: {grid=}"
305
+
306
+ grid = [V.graph.sizevars.simplify(item) for item in grid]
307
+ grid_uses_symbolic_shapes = any(item.free_symbols for item in grid)
308
+ grid_args = [self.grid_expr_printer(item) for item in grid]
309
+ grid_args_str = ", ".join(grid_args)
310
+ self.writeline(f"Grid {grid_name} = Grid({grid_args_str});")
311
+
312
+ if grid_uses_symbolic_shapes:
313
+ self.writeline(f"if ({grid_name}.is_non_zero()) {{")
314
+ kernel_var_name = f"kernels.{name}" if V.graph.aot_mode else name
315
+ self.writeline(
316
+ "launchKernel({}, {}, {}, {}, {}, {}, {}, {});".format(
317
+ kernel_var_name,
318
+ f"{grid_name}.grid_x",
319
+ f"{grid_name}.grid_y",
320
+ f"{grid_name}.grid_z",
321
+ params["num_warps"],
322
+ params["shared_mem"],
323
+ kernel_args_var,
324
+ stream,
325
+ )
326
+ )
327
+ if grid_uses_symbolic_shapes:
328
+ self.writeline("}")
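
Stripped of the codegen plumbing, the wrapper above reduces each Triton kernel call to the raw CUDA driver API: load the cubin once, resolve the mangled kernel name, collect the address of every argument into a void* array, and launch with 32 * num_warps threads per block on the current stream. The stand-alone sketch below walks through that sequence directly against the driver API; the cubin path, kernel name, argument layout, and grid size are placeholders, and the cuMemAlloc'd buffer stands in for an Inductor-allocated tensor.

```
#include <cuda.h>
#include <cstdio>
#include <stdexcept>
#include <string>

// Minimal error check mirroring the CUDA_DRIVER_CHECK macro in write_header().
static void driverCheck(CUresult code) {
    if (code != CUDA_SUCCESS) {
        const char* msg = nullptr;
        cuGetErrorString(code, &msg);
        throw std::runtime_error(std::string("CUDA driver error: ") + (msg ? msg : "unknown"));
    }
}

int main() {
    driverCheck(cuInit(0));
    CUdevice dev;
    CUcontext ctx;
    driverCheck(cuDeviceGet(&dev, 0));
    driverCheck(cuCtxCreate(&ctx, 0, dev));

    // Load the compiled kernel once and resolve its mangled name, as loadKernel() does.
    // Both the path and the name below are placeholders for whatever Inductor wrote out.
    CUmodule mod;
    CUfunction func;
    driverCheck(cuModuleLoad(&mod, "/tmp/placeholder_triton_kernel.cubin"));
    driverCheck(cuModuleGetFunction(&func, mod, "placeholder_triton_kernel_mangled"));

    // Each kernel argument is passed by address in a void* array, exactly like the
    // kernel_args_var_N array built by generate_kernel_call(). The (pointer, length)
    // argument layout here is made up for the placeholder kernel.
    int numel = 1024;
    CUdeviceptr data = 0;
    driverCheck(cuMemAlloc(&data, numel * sizeof(float)));
    void* args[] = {&data, &numel};

    // Grid(x, y, z) blocks with 32 * num_warps threads per block, as in launchKernel().
    unsigned gridX = 8, numWarps = 4, sharedMemBytes = 0;
    driverCheck(cuLaunchKernel(func, gridX, 1, 1, 32 * numWarps, 1, 1,
                               sharedMemBytes, /*stream=*/nullptr, args, nullptr));
    driverCheck(cuCtxSynchronize());

    cuMemFree(data);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```

In the generated code the lookup is additionally cached: generate_load_kernel_once emits a nullptr-guarded loadKernel call against a static CUfunction (or a kernels.<name> member in AOT mode), so each cubin is loaded at most once per process.
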
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (201 Bytes).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cuda_cpp_scheduling.cpython-310.pyc ADDED
Binary file (7.5 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cuda_env.cpython-310.pyc ADDED
Binary file (1.36 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cuda_kernel.cpython-310.pyc ADDED
Binary file (12.3 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cuda_template.cpython-310.pyc ADDED
Binary file (8.48 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cutlass_epilogue_gen.cpython-310.pyc ADDED
Binary file (14.2 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/cutlass_utils.cpython-310.pyc ADDED
Binary file (7.07 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/device_op_overrides.cpython-310.pyc ADDED
Binary file (1.23 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/__pycache__/gemm_template.cpython-310.pyc ADDED
Binary file (19.7 kB).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/cutlass_lib_extensions/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (224 Bytes).
 
llmeval-env/lib/python3.10/site-packages/torch/_inductor/codegen/cuda/cutlass_lib_extensions/__pycache__/gemm_operation_extensions.cpython-310.pyc ADDED
Binary file (6.64 kB).