applied-ai-018 committed
Commit 6162c59 (verified) · 1 parent: 0ee13d9

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. env-llmeval/lib/python3.10/site-packages/torchgen/__pycache__/model.cpython-310.pyc +0 -0
  2. env-llmeval/lib/python3.10/site-packages/torchgen/__pycache__/native_function_generation.cpython-310.pyc +0 -0
  3. env-llmeval/lib/python3.10/site-packages/torchgen/executorch/__init__.py +0 -0
  4. env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/__init__.py +0 -0
  5. env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/__pycache__/__init__.cpython-310.pyc +0 -0
  6. env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/custom_ops.py +131 -0
  7. env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/et_cpp.py +368 -0
  8. env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/unboxing.py +213 -0
  9. env-llmeval/lib/python3.10/site-packages/torchgen/executorch/model.py +220 -0
  10. env-llmeval/lib/python3.10/site-packages/torchgen/executorch/parse.py +151 -0
  11. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/CET +0 -0
  12. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/EET +0 -0
  13. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/EST +0 -0
  14. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/EST5EDT +0 -0
  15. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Amsterdam +0 -0
  16. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Andorra +0 -0
  17. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Astrakhan +0 -0
  18. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Athens +0 -0
  19. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Belfast +0 -0
  20. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Belgrade +0 -0
  21. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Berlin +0 -0
  22. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Bratislava +0 -0
  23. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Brussels +0 -0
  24. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Bucharest +0 -0
  25. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Dublin +0 -0
  26. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Helsinki +0 -0
  27. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Kaliningrad +0 -0
  28. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Kirov +0 -0
  29. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Kyiv +0 -0
  30. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/London +0 -0
  31. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Luxembourg +0 -0
  32. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Madrid +0 -0
  33. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Mariehamn +0 -0
  34. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Minsk +0 -0
  35. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Monaco +0 -0
  36. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Nicosia +0 -0
  37. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Oslo +0 -0
  38. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Paris +0 -0
  39. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Podgorica +0 -0
  40. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Prague +0 -0
  41. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Rome +0 -0
  42. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Samara +0 -0
  43. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Sarajevo +0 -0
  44. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Saratov +0 -0
  45. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Skopje +0 -0
  46. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Sofia +0 -0
  47. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Stockholm +0 -0
  48. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Tiraspol +0 -0
  49. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Ulyanovsk +0 -0
  50. env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Uzhgorod +0 -0
env-llmeval/lib/python3.10/site-packages/torchgen/__pycache__/model.cpython-310.pyc ADDED
Binary file (64.6 kB). View file
 
env-llmeval/lib/python3.10/site-packages/torchgen/__pycache__/native_function_generation.cpython-310.pyc ADDED
Binary file (12.5 kB). View file
 
env-llmeval/lib/python3.10/site-packages/torchgen/executorch/__init__.py ADDED
File without changes
env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/__init__.py ADDED
File without changes
env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (188 Bytes). View file
 
env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/custom_ops.py ADDED
@@ -0,0 +1,131 @@
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List, Optional, Sequence, Tuple

from torchgen import dest

# disable import sorting to avoid circular dependency.
from torchgen.api.types import DispatcherSignature  # isort:skip
from torchgen.context import method_with_native_function
from torchgen.executorch.model import ETKernelIndex
from torchgen.model import DispatchKey, NativeFunction, Variant
from torchgen.selective_build.selector import SelectiveBuilder
from torchgen.utils import concatMap, Target


# Generates RegisterKernelStub.cpp, which provides placeholder kernels for custom
# operators. This will be used on the model authoring side.
@dataclass(frozen=True)
class ComputeNativeFunctionStub:
    @method_with_native_function
    def __call__(self, f: NativeFunction) -> Optional[str]:
        if Variant.function not in f.variants:
            return None

        sig = DispatcherSignature.from_schema(
            f.func, prefix=f"wrapper_CPU_{f.func.name.overload_name}_", symint=False
        )
        assert sig is not None
        if len(f.func.returns) == 0:
            ret_name = ""
        elif len(f.func.returns) == 1:
            if f.func.arguments.out:
                ret_name = f.func.arguments.out[0].name
            else:
                ret_name = next(
                    (
                        a.name
                        for a in f.func.arguments.flat_non_out
                        if a.type == f.func.returns[0].type
                    ),
                    "",
                )
                if not ret_name:
                    raise Exception(f"Can't handle this return type {f.func}")
        else:
            assert len(f.func.arguments.out) == len(f.func.returns), (
                "Out variant number of returns needs to match the number of out arguments."
                f" Got outs {str(f.func.arguments.out)} but returns {str(f.func.returns)}"
            )
            # returns a tuple of out arguments
            tensor_type = "at::Tensor &"
            comma = ", "
            ret_name = f"""::std::tuple<{comma.join([tensor_type] * len(f.func.returns))}>(
                {comma.join([r.name for r in f.func.arguments.out])}
            )"""
        ret_str = f"return {ret_name};" if len(f.func.returns) > 0 else ""
        return f"""
{sig.defn()} {{
    {ret_str}
}}
    """


def gen_custom_ops_registration(
    *,
    native_functions: Sequence[NativeFunction],
    selector: SelectiveBuilder,
    kernel_index: ETKernelIndex,
    rocm: bool,
) -> Tuple[str, str]:
    """
    Generate custom ops registration code for dest.RegisterDispatchKey.

    :param native_functions: a sequence of `NativeFunction`
    :param selector: for selective build.
    :param kernel_index: kernels for all the ops.
    :param rocm: bool for dest.RegisterDispatchKey.
    :return: generated C++ code to register custom operators into PyTorch
    """

    # convert kernel index to BackendIndex. This is because we can't handle ETKernelIndex yet.
    # TODO larryliu: evaluate if this code is still needed. If yes let it handle ETKernelIndex.

    dispatch_key = DispatchKey.CPU
    backend_index = kernel_index._to_backend_index()

    static_init_dispatch_registrations = ""
    ns_grouped_native_functions: Dict[str, List[NativeFunction]] = defaultdict(list)
    for native_function in native_functions:
        ns_grouped_native_functions[native_function.namespace].append(native_function)

    for namespace, functions in ns_grouped_native_functions.items():
        if len(functions) == 0:
            continue
        dispatch_registrations_body = "\n".join(
            list(
                concatMap(
                    dest.RegisterDispatchKey(
                        backend_index,
                        Target.REGISTRATION,
                        selector,
                        rocm=rocm,
                        symint=False,
                        class_method_name=None,
                        skip_dispatcher_op_registration=False,
                    ),
                    functions,
                )
            )
        )
        static_init_dispatch_registrations += f"""
TORCH_LIBRARY_IMPL({namespace}, {dispatch_key}, m) {{
{dispatch_registrations_body}
}};"""
    anonymous_definition = "\n".join(
        list(
            concatMap(
                dest.RegisterDispatchKey(
                    backend_index,
                    Target.ANONYMOUS_DEFINITION,
                    selector,
                    rocm=rocm,
                    symint=False,
                    class_method_name=None,
                    skip_dispatcher_op_registration=False,
                ),
                native_functions,
            )
        )
    )
    return anonymous_definition, static_init_dispatch_registrations
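The registration loop in `gen_custom_ops_registration` groups operators by namespace and wraps each group in a `TORCH_LIBRARY_IMPL` block. A minimal standalone sketch of that grouping pattern (the operator names here are hypothetical, and the `m.impl` bodies are simplified placeholders, not the real generated registrations):

```python
from collections import defaultdict

# Hypothetical (namespace, operator name) pairs standing in for NativeFunction entries.
functions = [("custom", "my_op"), ("custom", "my_op.out"), ("aten", "add.out")]

# Group operators by namespace, mirroring ns_grouped_native_functions above.
grouped = defaultdict(list)
for ns, op in functions:
    grouped[ns].append(op)

# Emit one TORCH_LIBRARY_IMPL block per namespace, as gen_custom_ops_registration
# does for the CPU dispatch key.
registrations = ""
for namespace, ops in grouped.items():
    body = "\n".join(f'  m.impl("{op}", ...);' for op in ops)
    registrations += f"TORCH_LIBRARY_IMPL({namespace}, CPU, m) {{\n{body}\n}};\n"

print(registrations)
```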
env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/et_cpp.py ADDED
@@ -0,0 +1,368 @@
from typing import List, Optional, Sequence, Set, Union

from torchgen import local
from torchgen.api.types import (
    ArgName,
    ArrayCType,
    BaseCType,
    Binding,
    ConstRefCType,
    CType,
    MutRefCType,
    NamedCType,
    SpecialArgName,
    TupleCType,
    VectorCType,
    voidT,
)
from torchgen.model import (
    Argument,
    Arguments,
    BaseTy,
    BaseType,
    ListType,
    NativeFunction,
    OptionalType,
    Return,
    SelfArgument,
    TensorOptionsArguments,
    Type,
)
from torchgen.utils import assert_never

from .types import (
    ArrayRefCType,
    BaseTypeToCppMapping,
    OptionalCType,
    scalarT,
    tensorListT,
    tensorT,
)

"""
This file describes the translation of JIT schema to the public C++ API, which is what people use when they call
functions like at::add. It also serves as a native function API, which is the signature of kernels,
since in Executorch CppSignature is the same as NativeSignature.

Differences between this file and torchgen.api.cpp.py:

- Executorch doesn't support TensorOptions; however, in this file we still keep the logic to be compatible with
  torchgen.api.cpp, so that we can do things like ATen mode (running ATen kernels in Executorch).

- Executorch doesn't support Dimname.

- The Executorch runtime doesn't support SymInt; it is treated as int.
"""


# Translation of "value types" in JIT schema to C++ API type. Value
# types look the same no matter if they are argument types or return
# types. Returns None if the type in question is not a value type.
def valuetype_type(
    t: Type,
    *,
    binds: ArgName,
    remove_non_owning_ref_types: bool = False,
) -> Optional[NamedCType]:
    if isinstance(t, BaseType):
        if t.name == BaseTy.Tensor or t.name == BaseTy.Scalar:
            return None
        # For SymInt we simply treat it as int.
        elif str(t) == "SymInt":
            return NamedCType(binds, BaseCType(BaseTypeToCppMapping[BaseTy.int]))
        if remove_non_owning_ref_types:
            if t.name == BaseTy.str:
                raise AssertionError(
                    "string ref->value conversion: not implemented yet"
                )
        # All other BaseType currently map directly to BaseCppTypes.
        return NamedCType(binds, BaseCType(BaseTypeToCppMapping[t.name]))
    elif isinstance(t, OptionalType):
        elem = valuetype_type(t.elem, binds=binds)
        if elem is None:
            return None
        return NamedCType(binds, OptionalCType(elem.type))
    elif isinstance(t, ListType):
        if str(t.elem) == "bool":
            assert t.size is not None
            return NamedCType(
                binds, ArrayCType(BaseCType(BaseTypeToCppMapping[BaseTy.bool]), t.size)
            )
        else:
            return None
    else:
        raise AssertionError(f"unrecognized type {repr(t)}")


# Translation of types occurring in JIT arguments to a C++ argument type.
# If remove_non_owning_ref_types is set, we'll guarantee that the outputted CType is not a non-owning reference type.
# For example, we'll return std::vector<int> instead of IntArrayRef.
# See Note [translation from C++ reference to value types]
def argumenttype_type(
    t: Type,
    *,
    mutable: bool,
    binds: ArgName,
    remove_non_owning_ref_types: bool = False,
) -> NamedCType:
    # If it's a value type, do the value type translation
    r = valuetype_type(
        t,
        binds=binds,
        remove_non_owning_ref_types=remove_non_owning_ref_types,
    )
    if r is not None:
        return r
    if isinstance(t, BaseType):
        if t.name == BaseTy.Tensor:
            if mutable and not local.use_const_ref_for_mutable_tensors():
                return NamedCType(binds, MutRefCType(BaseCType(tensorT)))
            else:
                return NamedCType(binds, ConstRefCType(BaseCType(tensorT)))
        elif t.name == BaseTy.Scalar:
            return NamedCType(binds, ConstRefCType(BaseCType(scalarT)))
        else:
            raise AssertionError(f"base type should have been value type {t}")
    elif isinstance(t, OptionalType):
        if str(t.elem) == "Tensor":
            if mutable and not local.use_const_ref_for_mutable_tensors():
                return NamedCType(
                    binds, MutRefCType(BaseCType(tensorT))
                )  # TODO: fix this discrepancy
            else:
                return NamedCType(
                    binds, ConstRefCType(OptionalCType(BaseCType(tensorT)))
                )
        elif str(t.elem) == "Scalar":
            return NamedCType(binds, ConstRefCType(OptionalCType(BaseCType(scalarT))))
        elem = argumenttype_type(t.elem, mutable=mutable, binds=binds)
        return NamedCType(binds, OptionalCType(elem.type))
    elif isinstance(t, ListType):
        # TODO: keeping these special cases for Tensor[] and Tensor?[] so that we can hook up with ATen kernels.
        if str(t.elem) == "Tensor":
            return NamedCType(binds, BaseCType(tensorListT))
        elif str(t.elem) == "Dimname":
            raise NotImplementedError("Executorch doesn't support Dimname")
        elif str(t.elem) == "Tensor?":
            return NamedCType(binds, ArrayRefCType(OptionalCType(BaseCType(tensorT))))
        elem = argumenttype_type(t.elem, mutable=mutable, binds=binds)
        return NamedCType(binds, ArrayRefCType(elem.type))
    else:
        raise AssertionError(f"unrecognized type {repr(t)}")


# Translate a JIT argument into its C++ type
def argument_type(a: Argument, *, binds: ArgName) -> NamedCType:
    return argumenttype_type(a.type, mutable=a.is_write, binds=binds)


# Translation of a (non-multi) return type from JIT to C++
# N.B: returntype_type returns a CType, not a NamedCType.
# This is mostly because of the mismatch between return types and return names.
# e.g. a function with a return type of 'void' has 0 return names,
# and a function with a return type of 'std::tuple' has >1 return name.
def returntype_type(t: Type, *, mutable: bool) -> CType:
    # placeholder is ignored
    r = valuetype_type(t, binds="__placeholder__")
    if r is not None:
        return r.type

    if isinstance(t, BaseType):
        if t.name == BaseTy.Tensor:
            if mutable:
                if local.use_const_ref_for_mutable_tensors():
                    return ConstRefCType(BaseCType(tensorT))
                else:
                    return MutRefCType(BaseCType(tensorT))
            else:
                # Note [Tensor Copy Returns]
                # Currently, we use "Argument.is_write" to determine
                # whether or not Tensor return types should be copies or references.
                # If that ever changes, take a look at other locations of this note!
                return BaseCType(tensorT)
        elif t.name == BaseTy.Scalar:
            return BaseCType(scalarT)
    elif isinstance(t, ListType):
        assert (
            not mutable
        ), "Native functions should never return a mutable tensor list. They should return void."
        elem = returntype_type(t.elem, mutable=False)
        assert t.size is None, f"fixed size list returns not supported: {t}"
        return VectorCType(elem)

    raise AssertionError(f"unrecognized return type {t}")


# Translation of a single return to its C++ type
def return_type(r: Return) -> CType:
    return returntype_type(r.type, mutable=r.is_write)


# Translation of a full (possibly multi) return from JIT to its C++ type
def returns_type(rs: Sequence[Return]) -> CType:
    if len(rs) == 0:
        return BaseCType(voidT)
    elif len(rs) == 1:
        return return_type(rs[0])
    else:
        return TupleCType([return_type(r) for r in rs])


def return_names(f: NativeFunction, *, fallback_name: str = "result") -> Sequence[str]:
    returns: List[str] = []
    for i, r in enumerate(f.func.returns):
        # If we have an inplace function, the return argument is
        # implicitly named self.
        # TODO: Consider incorporating this into the data model
        if f.func.name.name.inplace:
            assert i == 0, "illegal inplace function with multiple returns"
            name = "self"
        # If we are an out function, the name is the name of the
        # corresponding output argument (r.name will get recorded
        # in field_name later.)
        elif f.func.is_out_fn():
            name = f.func.arguments.out[i].name
        # If the return argument is explicitly named...
        elif r.name:
            name_conflict = any(
                r.name == a.name for a in f.func.schema_order_arguments()
            )
            if name_conflict and not f.func.is_out_fn():
                name = f"{r.name}_return"
            else:
                name = r.name
        # If there is no explicit name and no fallback name was passed in, we just
        # name the output result, unless it's a multi-return, in which case it's
        # result0, result1, etc. (zero-indexed)
        else:
            name = fallback_name if len(f.func.returns) == 1 else f"{fallback_name}{i}"
        returns.append(name)
    return returns


JIT_TO_CPP_DEFAULT = {
    "False": "false",
    "True": "true",
    "None": "torch::executorch::nullopt",  # UGH this one is type directed
    "[]": "{}",
    "contiguous_format": "torch::executorch::MemoryFormat::Contiguous",
    "long": "torch::executorch::kLong",
}


# Convert a JIT default into a C++ expression representing the default
def default_expr(d: str, t: Type) -> str:
    if d == "None" and str(t) == "Tensor?":
        return "{}"
    if isinstance(t, BaseType) and t.name is BaseTy.str:
        # Schema allows single quotes but C++ needs double
        if len(d) >= 2 and d[0] == "'" and d[-1] == "'":
            s = ""
            i = 1
            while i + 1 < len(d):
                if d[i] != "\\":
                    if d[i] == '"':
                        s += '\\"'
                    else:
                        s += d[i]
                    i += 1
                else:
                    if d[i + 1] == "'":
                        s += "'"
                    else:
                        s += d[i : i + 2]
                    i += 2

            return f'"{s}"'

    if isinstance(t, OptionalType):
        if d == "None":
            return "torch::executor::nullopt"

        return default_expr(d, t.elem)

    if isinstance(t, ListType):
        if d.startswith("[") and d.endswith("]"):
            return "{" + d[1:-1] + "}"
        elif t.size is None:
            # NOTE: Sized lists can have scalar defaults
            raise ValueError(f"Expected a list default '[...]' but found: '{d}'")

    return JIT_TO_CPP_DEFAULT.get(d, d)


# Convert an argument into its C++ API form


def argument(
    a: Union[Argument, TensorOptionsArguments, SelfArgument],
    *,
    cpp_no_default_args: Set[str],
    method: bool,
    faithful: bool,
    has_tensor_options: bool,
) -> List[Binding]:
    def sub_argument(
        a: Union[Argument, TensorOptionsArguments, SelfArgument]
    ) -> List[Binding]:
        return argument(
            a,
            cpp_no_default_args=cpp_no_default_args,
            method=method,
            faithful=faithful,
            has_tensor_options=has_tensor_options,
        )

    if isinstance(a, Argument):
        binds: ArgName
        if a.name == "memory_format" and has_tensor_options:
            binds = SpecialArgName.possibly_redundant_memory_format
        else:
            binds = a.name
        default: Optional[str] = None
        if a.name not in cpp_no_default_args and a.default is not None:
            default = default_expr(a.default, a.type)
        return [
            Binding(
                nctype=argument_type(a, binds=binds),
                name=a.name,
                default=default,
                argument=a,
            )
        ]
    elif isinstance(a, TensorOptionsArguments):
        raise NotImplementedError("Need to implement type resolution for TensorOptions")
    elif isinstance(a, SelfArgument):
        if method:
            # Caller is responsible for installing implicit this in context!
            return []
        else:
            return sub_argument(a.argument)
    else:
        assert_never(a)


def arguments(
    arguments: Arguments,
    *,
    faithful: bool,
    method: bool,
    cpp_no_default_args: Set[str],
) -> List[Binding]:
    args: List[Union[Argument, TensorOptionsArguments, SelfArgument]] = []
    if faithful:
        args.extend(arguments.non_out)
        args.extend(arguments.out)
    else:
        args.extend(arguments.out)
        args.extend(arguments.non_out)
    return [
        r.no_default() if faithful else r
        for a in args
        for r in argument(
            a,
            faithful=faithful,
            method=method,
            has_tensor_options=arguments.tensor_options is not None,
            cpp_no_default_args=cpp_no_default_args,
        )
    ]
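The string branch of `default_expr` converts a single-quoted schema default into a C++ double-quoted string literal, handling escape sequences along the way. A standalone sketch of just that branch, extracted for illustration (this helper name is hypothetical, not part of the torchgen API):

```python
def schema_default_to_cpp_string(d: str) -> str:
    # Mirrors the single-quoted-string branch of default_expr above:
    # schema defaults use single quotes, but C++ string literals need double quotes.
    assert len(d) >= 2 and d[0] == "'" and d[-1] == "'"
    s = ""
    i = 1
    while i + 1 < len(d):
        if d[i] != "\\":
            # Escape embedded double quotes; copy everything else verbatim.
            s += '\\"' if d[i] == '"' else d[i]
            i += 1
        else:
            # A backslash-escaped single quote no longer needs escaping;
            # other escape sequences are passed through unchanged.
            s += "'" if d[i + 1] == "'" else d[i : i + 2]
            i += 2
    return f'"{s}"'

print(schema_default_to_cpp_string("'hello'"))  # prints "hello" (with double quotes)
```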
env-llmeval/lib/python3.10/site-packages/torchgen/executorch/api/unboxing.py ADDED
@@ -0,0 +1,213 @@
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

from torchgen.api.types import Binding, CType, NamedCType
from torchgen.model import (
    Argument,
    BaseTy,
    BaseType,
    ListType,
    NativeFunction,
    OptionalType,
    Type,
)

connector = "\n\t"


# Return unboxing function name for a NativeFunction
def name(f: NativeFunction) -> str:
    return f.func.name.unambiguous_name()


@dataclass(frozen=True)
class Unboxing:
    """
    Takes a sequence of Bindings and unboxes EValues into these Bindings. Returns generated code that performs correct unboxing.
    Sample generated code:
    // aten::mul.out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)
    void mul_out(EValue** stack) {
        EValue& self = *stack[0];
        EValue& other = *stack[1];
        EValue& out = *stack[2];
        const torch::executor::Tensor & self_base = self.to<torch::executor::Tensor>();
        const torch::executor::Tensor & other_base = other.to<torch::executor::Tensor>();
        torch::executor::Tensor & out_base = out.to<torch::executor::Tensor>();

        EXECUTORCH_SCOPE_PROF("native_call_mul.out");
        torch::executor::mul_outf(self_base, other_base, out_base);
    }
    """

    # this is a callable that converts a JIT argument into its C++ type.
    # Translates (type, mutability, binds) to NamedCType. E.g., torchgen.api.cpp.argumenttype_type.
    argument_type_gen: Callable[
        ...,
        NamedCType,
    ]

    # Convert all the arguments in a NativeFunction to C++ code
    def convert_arguments(
        self, args: Sequence[Binding]
    ) -> Tuple[List[Binding], List[str]]:
        code_list = [f"EValue& {args[i].name} = *stack[{i}];" for i in range(len(args))]
        binding_list = []
        for arg in args:
            # expecting only Argument
            if not isinstance(arg.argument, Argument):
                raise Exception(
                    f"Unexpected argument type, expecting `Argument` but got {arg}"
                )
            argument: Argument = arg.argument
            unboxed_name, _, code, decl = self.argumenttype_evalue_convert(
                argument.type, argument.name, mutable=argument.is_write
            )
            code_list.extend(decl)
            code_list.extend(code)
            binding_list.append(arg.with_name(unboxed_name))
        return binding_list, code_list

    def argumenttype_evalue_convert(
        self, t: Type, arg_name: str, *, mutable: bool = False
    ) -> Tuple[str, CType, List[str], List[str]]:
        """
        Takes in the type, name, and mutability corresponding to an argument, and generates a tuple of:
        (1) the C++ code necessary to unbox the argument
        (2) a Binding corresponding to the newly created unboxed variable, including the variable name and its CType
        :param t: a `Type` of an argument
        :param arg_name: argument name
        :param mutable: boolean for whether this argument type is mutable
        :return: unboxed result
        """
        ctype = self.argument_type_gen(t, mutable=mutable, binds=arg_name).type

        if isinstance(t, BaseType):
            out_name = f"{arg_name}_base"
            code, decl = self._gen_code_base_type(
                arg_name=arg_name, out_name=out_name, ctype=ctype
            )
        elif isinstance(t, OptionalType):
            out_name = f"{arg_name}_opt_out"
            code, decl = self._gen_code_optional_type(
                arg_name=arg_name, out_name=out_name, t=t, ctype=ctype
            )
        elif isinstance(t, ListType):
            out_name = f"{arg_name}_list_out"
            code, decl = self._gen_code_list_type(
                arg_name=arg_name, out_name=out_name, t=t, ctype=ctype
            )
        else:
            raise Exception(f"Cannot handle type {t}. arg_name: {arg_name}")
        return out_name, ctype, code, decl

    def _gen_code_base_type(
        self, arg_name: str, out_name: str, ctype: CType
    ) -> Tuple[List[str], List[str]]:
        return [
            f"{ctype.cpp_type()} {out_name} = {arg_name}.to<{ctype.cpp_type(strip_ref=True)}>();"
        ], []

    def _gen_code_optional_type(
        self, arg_name: str, out_name: str, t: OptionalType, ctype: CType
    ) -> Tuple[List[str], List[str]]:
        in_name = f"{arg_name}_opt_in"
        res_name, base_type, res_code, decl = self.argumenttype_evalue_convert(
            t.elem, in_name
        )
        return (
            f"""
    {ctype.cpp_type(strip_ref=True)} {out_name} = {arg_name}.toOptional<{base_type.cpp_type(strip_ref=True)}>();
            """.split(
                "\n"
            ),
            decl,
        )

    def _gen_code_list_type(
        self, arg_name: str, out_name: str, t: ListType, ctype: CType
    ) -> Tuple[List[str], List[str]]:
        in_name = f"{arg_name}_list_in"
        elem_name = f"{arg_name}_elem"
        code = []
        res_name, res_ctype, res_code, decl = self.argumenttype_evalue_convert(
            t.elem, elem_name
        )

        if isinstance(t.elem, BaseType) and t.elem.name == BaseTy.Tensor:
            code.extend(
                f"""
    {ctype.cpp_type(strip_ref=True)} {out_name} = {arg_name}.toTensorList();
                """.split(
                    "\n"
                )
            )
        elif isinstance(t.elem, BaseType) and (
            t.elem.name == BaseTy.int or t.elem.name == BaseTy.SymInt
        ):
            code.extend(
                f"""
    {ctype.cpp_type(strip_ref=True)} {out_name} = {arg_name}.toIntList();
                """.split(
                    "\n"
                )
            )
        elif isinstance(t.elem, BaseType) and t.elem.name == BaseTy.float:
            code.extend(
                f"""
    {ctype.cpp_type(strip_ref=True)} {out_name} = {arg_name}.toDoubleList();
                """.split(
                    "\n"
                )
            )
        elif isinstance(t.elem, BaseType) and t.elem.name == BaseTy.bool:
            # handle list type with size, e.g., bool[4]
            code.extend(
                f"""
    {ctype.cpp_type(strip_ref=True)} {out_name} = {arg_name}.toBoolList();
                """.split(
                    "\n"
                )
            )
        # pytorch codegen:
        # we have to use c10::List for optional element. e.g., Tensor?[] -> c10::List<c10::optional<at::Tensor>>
        elif (
            isinstance(t.elem, OptionalType)
            and isinstance(t.elem.elem, BaseType)
            and t.elem.elem.name == BaseTy.Tensor
        ):
            code.extend(
                f"""
#ifdef USE_ATEN_LIB
at::ArrayRef<c10::optional<at::Tensor>> {in_name} = {arg_name}.toListOptionalTensor();
c10::List<c10::optional<at::Tensor>> {out_name};
for (auto {elem_name}: {in_name}) {{
    {out_name}.push_back({elem_name});
}}
#else
torch::executor::ArrayRef<torch::executor::optional<torch::executor::Tensor>> {out_name} = {arg_name}.toListOptionalTensor();
#endif
                """.split(
                    "\n"
                )
            )
        else:
            # use ArrayRef as default.
            vec_name = arg_name + "_vec"
            # need to bring vector instantiation out of scope so that ArrayRef has valid data
            decl.append(
                f"std::vector<{res_ctype.cpp_type(strip_ref=True)}> {vec_name};"
            )
            code.extend(
                f"""
for (EValue {elem_name}: {in_name}) {{
    {connector.join(res_code)}
    {vec_name}.push_back({res_name});
}}
{ctype.cpp_type(strip_ref=True)} {out_name}({vec_name});
                """.split(
                    "\n"
                )
            )
        return code, decl
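`convert_arguments` starts by binding each `EValue` off the stack by position, exactly as in the sample generated code in the `Unboxing` docstring. A minimal sketch of that first step, using hypothetical argument names in place of real `Binding` objects:

```python
# Hypothetical argument names standing in for a sequence of Bindings.
args = ["self", "other", "out"]

# Mirrors the first line of convert_arguments: each argument is pulled off
# the EValue stack by its positional index.
code_list = [f"EValue& {args[i]} = *stack[{i}];" for i in range(len(args))]
print("\n".join(code_list))
```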
env-llmeval/lib/python3.10/site-packages/torchgen/executorch/model.py ADDED
@@ -0,0 +1,220 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+# Represents all kernels used by an Executorch model.
+# It maintains a Dict[OperatorName, Dict[ETKernelKey, BackendMetadata]] structure.
+
+import itertools
+from collections import defaultdict, namedtuple
+from dataclasses import dataclass
+from enum import IntEnum
+from typing import Dict, List, Tuple, Union
+
+from torchgen.model import (
+    BackendIndex,
+    BackendMetadata,
+    DispatchKey,
+    NativeFunction,
+    NativeFunctionsGroup,
+    OperatorName,
+)
+from torchgen.utils import assert_never
+
+KERNEL_KEY_VERSION = 1
+
+
+# TODO: Duplicated subset of codegen.tool.gen_oplist; remove the declaration in codegen
+class ScalarType(IntEnum):
+    Byte = 0
+    Char = 1
+    Short = 2
+    Int = 3
+    Long = 4
+    Float = 6
+    Double = 7
+    Bool = 11
+
+
+ETParsedYaml = namedtuple("ETParsedYaml", ["native_functions", "kernel_index"])
+
+
+@dataclass(frozen=True)
+class ETKernelKeyOpArgMeta:
+    arg_name: str
+    dtype: str
+    # The order of the dimensions if the entry is a Tensor
+    dim_order: Tuple[int, ...]
+
+    def to_native_string(self) -> str:
+        dtype_str = ScalarType[self.dtype].value
+        dim_str = str(self.dim_order)[1:-1].replace(" ", "")
+        return f"{dtype_str};{dim_str}"
+
+
+@dataclass(frozen=True)
+class ETKernelKey:
+    # Field undefined is default = True
+    arg_meta: Tuple[ETKernelKeyOpArgMeta, ...] = ()
+
+    # Indicator for this kernel being used as a catch-all
+    default: bool = False
+
+    version: int = KERNEL_KEY_VERSION
+
+    @staticmethod
+    def gen_from_yaml(
+        args: Dict[str, Tuple[str, str]],
+        type_alias_map: Dict[str, List[str]],  # TODO: Support unwrapped str val
+        dim_order_alias_map: Dict[str, List[int]],
+    ) -> List["ETKernelKey"]:
+        """Generate ETKernelKeys from arg kernel specs.
+        Multiple ETKernelKeys are returned due to dtype permutations from utilizing
+        type_alias_map (actualizing each potential type permutation as a KernelKey).
+
+        Args:
+            args: Mapping from argument name to kernel specs.
+                Kernel specs are a tuple of (dtype, dim_order).
+                Currently tuple entries must be aliased via the alias map arguments.
+            type_alias_map: Mapping from type alias to potential type enums.
+                i.e. { T0 : [Double, Int] } means T0 can be either Double or Int.
+                Used for lookup by args.
+            dim_order_alias_map: Mapping from alias to a list of dimension orders.
+                Used for lookup by args.
+        """
+        # Cast dim order values to int
+        dim_order_alias_map = {
+            k: [int(alias) for alias in v] for k, v in dim_order_alias_map.items()
+        }
+        kernel_keys = []
+
+        # Get all used dtype aliases
+        dtype_alias_used = set()
+        for type_alias, dim_order in args.values():
+            # Enforce usage of alias initially
+            # TODO: Support inlined arguments
+            assert type_alias in type_alias_map, "Undefined type alias: " + str(
+                type_alias
+            )
+            assert (
+                dim_order in dim_order_alias_map
+            ), "Undefined dim_order alias: " + str(dim_order)
+            dtype_alias_used.add(type_alias)
+
+        # Generate all permutations of dtype alias values
+        alias_dtypes = [
+            [(alias, dtype) for dtype in type_alias_map[alias]]
+            for alias in dtype_alias_used
+        ]
+        alias_permutations = [
+            dict(permutation) for permutation in list(itertools.product(*alias_dtypes))
+        ]
+
+        # Using each alias value permutation, generate kernel keys
+        op_arg_cache = {}
+        for permutation in alias_permutations:
+            arg_list = []
+            for arg_name, arg_spec in args.items():
+                dtype = permutation[arg_spec[0]]
+                dim_order = dim_order_alias_map[arg_spec[1]]  # type: ignore[assignment]
+                if (
+                    cache_key := (arg_name, dtype, tuple(dim_order))
+                ) not in op_arg_cache:
+                    op_arg_cache[cache_key] = ETKernelKeyOpArgMeta(*cache_key)  # type: ignore[arg-type]
+
+                arg_list.append(op_arg_cache[cache_key])
+            kernel_keys.append(ETKernelKey(tuple(arg_list)))
+
+        return kernel_keys
+
+    def to_native_string(self) -> str:
+        if self.default:
+            return "default"
+        return (
+            "v"
+            + str(KERNEL_KEY_VERSION)
+            + "/"
+            + "|".join([arg.to_native_string() for arg in self.arg_meta])
+        )
+
+
+@dataclass(frozen=True)
+class ETKernelIndex:
+    index: Dict[OperatorName, Dict[ETKernelKey, BackendMetadata]]
+
+    def has_kernels(self, g: Union[NativeFunction, NativeFunctionsGroup]) -> bool:
+        m = self.get_kernels(g)
+        return m is not None
+
+    def get_kernels(
+        self, g: Union[NativeFunction, NativeFunctionsGroup]
+    ) -> Dict[ETKernelKey, BackendMetadata]:
+        if isinstance(g, NativeFunction):
+            f = g
+        elif isinstance(g, NativeFunctionsGroup):
+            f = g.functional
+        else:
+            assert_never(g)
+        if f.func.name not in self.index:
+            return {}
+        return self.index[f.func.name]
+
+    @staticmethod
+    def grow_from_backend_indices(
+        kernel_index: Dict[OperatorName, Dict[ETKernelKey, BackendMetadata]],
+        backend_indices: Dict[DispatchKey, Dict[OperatorName, BackendMetadata]],
+    ) -> None:
+        for dk in backend_indices:
+            index = backend_indices[dk]
+            for op, backend_metadata in index.items():
+                if op in kernel_index:
+                    kernel_index[op][ETKernelKey(default=True)] = backend_metadata
+                else:
+                    kernel_index[op] = {ETKernelKey(default=True): backend_metadata}
+
+    @staticmethod
+    def from_backend_indices(
+        backend_indices: Dict[DispatchKey, Dict[OperatorName, BackendMetadata]]
+    ) -> "ETKernelIndex":
+        kernel_index: Dict[
+            OperatorName, Dict[ETKernelKey, BackendMetadata]
+        ] = defaultdict(dict)
+        ETKernelIndex.grow_from_backend_indices(kernel_index, backend_indices)
+        return ETKernelIndex(kernel_index)
+
+    def grow(
+        self, backend_indices: Dict[DispatchKey, Dict[OperatorName, BackendMetadata]]
+    ) -> "ETKernelIndex":
+        ETKernelIndex.grow_from_backend_indices(self.index, backend_indices)
+        return self
+
+    def _to_backend_index(self) -> BackendIndex:
+        """
+        WARNING: this will be deprecated once all the codegen places know how to handle ETKernelIndex.
+        """
+        index: Dict[OperatorName, BackendMetadata] = {}
+        for op in self.index:
+            kernel_dict = self.index[op]
+            assert (
+                len(kernel_dict.values()) == 1
+            ), f"Can't convert ETKernelIndex to BackendIndex because {op} has more than one kernel. Got {kernel_dict}"
+            index[op] = kernel_dict.get(
+                ETKernelKey(default=True),
+                BackendMetadata(kernel="", structured=False, cpp_namespace=""),
+            )
+        return BackendIndex(
+            dispatch_key=DispatchKey.CPU,
+            use_out_as_primary=False,
+            device_guard=False,
+            external=False,
+            index=index,
+        )
+
+    # Note: a duplicate ETKernelKey from index_b will clobber the metadata from index_a
+    @staticmethod
+    def merge_indices(
+        index_a: "ETKernelIndex", index_b: "ETKernelIndex"
+    ) -> "ETKernelIndex":
+        combined = defaultdict(dict, index_a.index.copy())
+
+        for op, entry in index_b.index.items():
+            for key, metadata in entry.items():
+                combined[op][key] = metadata
+
+        return ETKernelIndex(combined)
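For reference, the serialized form of a kernel key can be reproduced standalone. This sketch copies the logic of the two `to_native_string` methods above so it runs without torchgen; the sample argument specs are invented:

```python
from enum import IntEnum

# Subset of dtype codes copied from the ScalarType enum in model.py.
class ScalarType(IntEnum):
    Byte = 0
    Char = 1
    Short = 2
    Int = 3
    Long = 4
    Float = 6
    Double = 7
    Bool = 11

KERNEL_KEY_VERSION = 1

def arg_to_native_string(dtype: str, dim_order: tuple) -> str:
    # Mirrors ETKernelKeyOpArgMeta.to_native_string: "<dtype code>;<dim order>"
    dim_str = str(dim_order)[1:-1].replace(" ", "")
    return f"{ScalarType[dtype].value};{dim_str}"

def key_to_native_string(arg_meta, default: bool = False) -> str:
    # Mirrors ETKernelKey.to_native_string: version prefix, args joined by "|"
    if default:
        return "default"
    return (
        "v" + str(KERNEL_KEY_VERSION) + "/"
        + "|".join(arg_to_native_string(d, o) for d, o in arg_meta)
    )

print(key_to_native_string([("Float", (0, 1)), ("Int", (0, 1))]))  # v1/6;0,1|3;0,1
```

So a two-argument kernel specialized for a contiguous Float tensor and a contiguous Int tensor serializes as `v1/6;0,1|3;0,1`, while a catch-all kernel serializes as `default`.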
env-llmeval/lib/python3.10/site-packages/torchgen/executorch/parse.py ADDED
@@ -0,0 +1,151 @@
+from collections import defaultdict, namedtuple
+from typing import Any, Dict, List, Optional, Set, Tuple
+
+import yaml
+
+from torchgen.executorch.model import ETKernelIndex, ETKernelKey
+
+from torchgen.gen import LineLoader, parse_native_yaml
+from torchgen.model import (
+    BackendMetadata,
+    DispatchKey,
+    FunctionSchema,
+    NativeFunction,
+    OperatorName,
+)
+from torchgen.utils import NamespaceHelper
+
+# Parse native_functions.yaml into a sequence of NativeFunctions and ET backend indices.
+ETParsedYaml = namedtuple("ETParsedYaml", ["native_functions", "et_kernel_indices"])
+
+# Fields in native_functions.yaml used to determine which kernels should be used
+ET_FIELDS = ["kernels", "type_alias", "dim_order_alias"]
+
+
+def parse_from_yaml(ei: Dict[str, object]) -> Dict[ETKernelKey, BackendMetadata]:
+    """Given a loaded yaml representing kernel assignment information, extract the
+    mapping from `kernel keys` to `BackendMetadata` (the latter representing the kernel instance).
+
+    Args:
+        ei: Dict with keys {kernels, type_alias, dim_order_alias}.
+            See ETKernelKey for a description of the arguments.
+    """
+    e = ei.copy()
+    if (kernels := e.pop("kernels", None)) is None:
+        return {}
+
+    type_alias: Dict[str, List[str]] = e.pop("type_alias", {})  # type: ignore[assignment]
+    dim_order_alias: Dict[str, List[str]] = e.pop("dim_order_alias", {})  # type: ignore[assignment]
+    dim_order_alias.pop("__line__", None)
+
+    kernel_mapping: Dict[ETKernelKey, BackendMetadata] = {}
+
+    for entry in kernels:  # type: ignore[attr-defined]
+        arg_meta = entry.get("arg_meta")
+        if arg_meta is not None:
+            arg_meta.pop("__line__")
+
+        kernel_name = entry.get("kernel_name")
+        namespace_helper = NamespaceHelper.from_namespaced_entity(
+            kernel_name, max_level=3
+        )
+        kernel_namespace = namespace_helper.get_cpp_namespace(default="at")
+        backend_metadata = BackendMetadata(
+            kernel=namespace_helper.entity_name,
+            structured=False,
+            cpp_namespace=(kernel_namespace + "::native"),
+        )
+
+        kernel_keys = (
+            [ETKernelKey((), default=True)]
+            if arg_meta is None
+            else ETKernelKey.gen_from_yaml(arg_meta, type_alias, dim_order_alias)  # type: ignore[arg-type]
+        )
+
+        for kernel_key in kernel_keys:
+            assert kernel_key not in kernel_mapping, (
+                "Duplicate kernel key: " + str(kernel_key) + " " + str(e)
+            )
+            kernel_mapping[kernel_key] = backend_metadata
+
+    return kernel_mapping
+
+
+def parse_et_yaml_struct(es: object) -> ETKernelIndex:
+    """Given a loaded yaml representing a list of operators, for each op extract the mapping
+    of `kernel keys` to `BackendMetadata` (the latter representing the kernel instance
+    that should be used by the kernel key).
+    """
+    indices: Dict[OperatorName, Dict[ETKernelKey, BackendMetadata]] = {}
+    for ei in es:  # type: ignore[attr-defined]
+        e = ei.copy()
+
+        funcs = e.pop("func")
+        assert isinstance(funcs, str), f"not a str: {funcs}"
+        namespace_helper = NamespaceHelper.from_namespaced_entity(
+            namespaced_entity=funcs, max_level=1
+        )
+        opname = FunctionSchema.parse(namespace_helper.entity_name).name
+
+        assert opname not in indices, f"Duplicate func found in yaml: {opname}"
+
+        if len(index := parse_from_yaml(e)) != 0:
+            indices[opname] = index
+
+    return ETKernelIndex(indices)
+
+
+def extract_kernel_fields(es: object) -> Dict[OperatorName, Dict[str, Any]]:
+    """Given a loaded yaml representing a list of operators, extract the
+    kernel key related fields indexed by the operator name.
+    """
+    fields: Dict[OperatorName, Dict[str, Any]] = defaultdict(dict)
+    for ei in es:  # type: ignore[attr-defined]
+        funcs = ei.get("func")
+        assert isinstance(funcs, str), f"not a str: {funcs}"
+        namespace_helper = NamespaceHelper.from_namespaced_entity(
+            namespaced_entity=funcs, max_level=1
+        )
+        opname = FunctionSchema.parse(namespace_helper.entity_name).name
+
+        for field in ET_FIELDS:
+            if (value := ei.get(field)) is not None:
+                fields[opname][field] = value
+
+    return fields
+
+
+def parse_et_yaml(
+    path: str,
+    tags_yaml_path: str,
+    ignore_keys: Optional[Set[DispatchKey]] = None,
+    skip_native_fns_gen: bool = False,
+) -> Tuple[List[NativeFunction], Dict[OperatorName, Dict[str, Any]]]:
+    """Parse native_functions.yaml into NativeFunctions and an operator-indexed dict
+    of fields to persist from native_functions.yaml to functions.yaml.
+    """
+    with open(path) as f:
+        es = yaml.load(f, Loader=LineLoader)
+
+    et_kernel = extract_kernel_fields(es)
+
+    # Remove ET-specific fields from entries for BC compatibility
+    strip_et_fields(es)
+
+    native_yaml = parse_native_yaml(
+        path,
+        tags_yaml_path,
+        ignore_keys,
+        skip_native_fns_gen=skip_native_fns_gen,
+        loaded_yaml=es,
+    )
+    return native_yaml.native_functions, et_kernel
+
+
+def strip_et_fields(es: object) -> None:
+    """Given a loaded yaml representing a list of operators,
+    remove ET-specific fields from every entry for BC compatibility.
+    """
+    for entry in es:  # type: ignore[attr-defined]
+        for field in ET_FIELDS:
+            entry.pop(field, None)
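The round trip in `parse_et_yaml` depends on `strip_et_fields` leaving each entry in the shape the regular `native_functions.yaml` parser expects. That step can be seen in isolation with plain dicts (the sample entry below is invented):

```python
# Standalone sketch of strip_et_fields: drop the ExecuTorch-only fields from
# each parsed yaml entry so the remainder stays compatible with the regular
# native_functions.yaml parser.
ET_FIELDS = ["kernels", "type_alias", "dim_order_alias"]

entries = [
    {
        "func": "add.out(Tensor self) -> Tensor",          # hypothetical schema
        "kernels": [{"kernel_name": "torch::executor::add_out"}],
        "type_alias": {"T0": ["Float", "Int"]},
    },
]

for entry in entries:
    for field in ET_FIELDS:
        # pop with a default so entries missing a field are left untouched
        entry.pop(field, None)

print(entries[0])  # {'func': 'add.out(Tensor self) -> Tensor'}
```

Only the ET-specific keys are removed; fields the upstream parser understands (like `func`) pass through unchanged.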
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/CET ADDED
Binary file (621 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/EET ADDED
Binary file (497 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/EST ADDED
Binary file (111 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/EST5EDT ADDED
Binary file (951 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Amsterdam ADDED
Binary file (1.1 kB)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Andorra ADDED
Binary file (389 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Astrakhan ADDED
Binary file (726 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Athens ADDED
Binary file (682 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Belfast ADDED
Binary file (1.6 kB)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Belgrade ADDED
Binary file (478 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Berlin ADDED
Binary file (705 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Bratislava ADDED
Binary file (723 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Brussels ADDED
Binary file (1.1 kB)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Bucharest ADDED
Binary file (661 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Dublin ADDED
Binary file (1.5 kB)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Helsinki ADDED
Binary file (481 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Kaliningrad ADDED
Binary file (904 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Kirov ADDED
Binary file (735 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Kyiv ADDED
Binary file (558 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/London ADDED
Binary file (1.6 kB)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Luxembourg ADDED
Binary file (1.1 kB)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Madrid ADDED
Binary file (897 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Mariehamn ADDED
Binary file (481 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Minsk ADDED
Binary file (808 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Monaco ADDED
Binary file (1.11 kB)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Nicosia ADDED
Binary file (597 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Oslo ADDED
Binary file (705 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Paris ADDED
Binary file (1.11 kB)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Podgorica ADDED
Binary file (478 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Prague ADDED
Binary file (723 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Rome ADDED
Binary file (947 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Samara ADDED
Binary file (732 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Sarajevo ADDED
Binary file (478 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Saratov ADDED
Binary file (726 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Skopje ADDED
Binary file (478 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Sofia ADDED
Binary file (592 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Stockholm ADDED
Binary file (705 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Tiraspol ADDED
Binary file (755 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Ulyanovsk ADDED
Binary file (760 Bytes)
env-llmeval/lib/python3.10/site-packages/tzdata/zoneinfo/Europe/Uzhgorod ADDED
Binary file (558 Bytes)