# Understanding Trace Files in BackendBench

## Format

Trace files capture PyTorch operations and their arguments from real model executions:

```
Operator: operation_name
cnt: count, serialized_arguments
cnt: count, serialized_arguments
...
```
## Structure

**Operator lines** specify the PyTorch operation:

```
Operator: aten.add.Tensor
Operator: aten.relu.default
Operator: aten.linear.default
```

**Count lines** show how often each argument combination was used:

```
cnt: 42, ((T([10, 20], f16), T([10, 20], f16)), {})
cnt: 0, ((T([5, 5], f32), T([5, 5], f32)), {})
```
## Reading Count Lines

- `cnt: 42` means this argument combination appeared 42 times in traced models
- `cnt: 0` marks synthetic/generated arguments (not from real models)
- `cnt: >0` indicates real usage frequency from model traces
- Arguments use the same format as serialized arguments: `((args), {kwargs})`
## Complete Example

```
Operator: aten.add.Tensor
cnt: 156, ((T([1, 512, 768], f16), T([1, 512, 768], f16)), {})
cnt: 89, ((T([32, 128], f32), T([32, 128], f32)), {})
cnt: 0, ((T([10, 10], f16), T([10, 10], f16)), {})
Operator: aten.relu.default
cnt: 234, ((T([64, 256], f16),), {})
```
This shows:

- `aten.add.Tensor` called 156 times with 1×512×768 tensors
- The same operation called 89 times with 32×128 tensors
- One synthetic test case (`cnt: 0`)
- `aten.relu.default` called 234 times with a 64×256 tensor
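The trace format above is simple enough to parse with a few lines of Python. The sketch below is a hypothetical helper for illustration, not BackendBench's actual loader:

```python
# Illustrative sketch: parse trace text into {operator: [(count, args), ...]}.
# This is not BackendBench's own parser; names here are assumptions.
from collections import defaultdict

def parse_trace(text):
    """Map each operator to a list of (count, serialized_args) entries."""
    ops = defaultdict(list)
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Operator:"):
            current = line.split("Operator:", 1)[1].strip()
        elif line.startswith("cnt:") and current is not None:
            count_str, args = line[len("cnt:"):].split(",", 1)
            ops[current].append((int(count_str), args.strip()))
    return dict(ops)

trace = """\
Operator: aten.add.Tensor
cnt: 156, ((T([1, 512, 768], f16), T([1, 512, 768], f16)), {})
cnt: 0, ((T([10, 10], f16), T([10, 10], f16)), {})
Operator: aten.relu.default
cnt: 234, ((T([64, 256], f16),), {})
"""
parsed = parse_trace(trace)
# parsed["aten.relu.default"] == [(234, "((T([64, 256], f16),), {})")]
```

A `cnt` filter on the parsed result (e.g. keeping only entries with `count > 0`) is a quick way to separate real model usage from synthetic test cases.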
## Interpretation

Trace files provide real-world operation usage patterns, showing which tensor shapes and operations are most common in actual PyTorch models. They are particularly useful for debugging.

Note: Trace files may be deprecated in the future, but they are documented here because they are currently included in the dataset and codebase.
# Understanding Serialized Arguments in BackendBench

## Format

BackendBench stores function arguments as strings containing all parameters needed to reproduce PyTorch operations:

```
((arg1, arg2, ...), {'key1': val1, 'key2': val2})
```
## Tensor Representation

Tensors use the format `T([shape], dtype)` or `T([shape], dtype, [stride])`:

```
T([10, 20], f32)        # 10×20 float32 tensor
T([1, 512, 768], f16)   # 1×512×768 float16 tensor
T([64], i32)            # 64-element int32 vector
```

Data types: `f16`/`f32`/`f64` (float), `bf16` (bfloat16), `i32`/`i64` (int), `b8` (bool)
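The dtype codes map one-to-one onto PyTorch dtype names. A minimal lookup table, sketched with plain strings so it runs without `torch` installed (in practice you would resolve each name with `getattr(torch, name)`):

```python
# Illustrative mapping from trace dtype codes to PyTorch dtype names.
# Plain strings keep the sketch torch-free; resolve with getattr(torch, name).
DTYPE_CODES = {
    "f16": "float16", "f32": "float32", "f64": "float64",
    "bf16": "bfloat16",
    "i32": "int32", "i64": "int64",
    "b8": "bool",
}

name = DTYPE_CODES["f16"]
# name == "float16"
```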
## Complete Examples

Single tensor argument:

```
((T([48, 24, 28, 28], f16),), {})
```

Function called with one 48×24×28×28 float16 tensor, no keyword arguments.

Multiple tensors:

```
((T([8, 8, 8, 8, 8], f16), T([8, 8, 8, 8, 8], f16)), {})
```

Function with two identical 5D tensors.

Mixed arguments:

```
((T([128, 256], f16), [1024, 249, 249]), {'dtype': torch.float16, 'device': 'cuda'})
```

Function with tensor, list, and keyword arguments.

Complex nested:

```
(([T([5, 5], f32), T([3, 3], i64), 42],), {'weight': T([3, 3], f32)})
```

Function with a list containing tensors and numbers, plus a tensor keyword argument.
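Because tensor specs follow a fixed textual pattern, a regex is enough to pull every `T([shape], dtype)` occurrence out of a serialized string. This is a hedged sketch of one way to do it (strides, when present, are ignored); BackendBench may use a different mechanism:

```python
# Illustrative extractor: find all T([shape], dtype) specs in a string.
import ast
import re

TENSOR_RE = re.compile(r"T\((\[[\d,\s]*\]),\s*(\w+)")

def tensor_specs(serialized):
    """Return (shape, dtype_code) tuples for every tensor spec found."""
    return [(ast.literal_eval(shape), dtype)
            for shape, dtype in TENSOR_RE.findall(serialized)]

specs = tensor_specs(
    "((T([128, 256], f16), [1024, 249, 249]), {'dtype': torch.float16})"
)
# specs == [([128, 256], 'f16')] — the plain list and torch.float16 are skipped
```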
## Argument Types

- **Tensors**: `T([shape], dtype)` format
- **Lists**: `[item1, item2, ...]` (can contain tensors)
- **Primitives**: `42`, `'hello'`, `True`, `None`
- **PyTorch objects**: `torch.float16`, `torch.strided`
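Only two of these argument kinds are not already Python literals: tensor specs and `torch.*` objects. Rewriting those two token kinds first makes the whole string safe for `ast.literal_eval`. The following is an illustrative helper, not BackendBench's own deserializer (strides, when present, are dropped):

```python
# Hedged sketch: turn a serialized argument string into plain Python data.
# T(...) specs become ('T', shape, dtype) tuples; torch.* objects become strings.
import ast
import re

def to_python(serialized):
    # Rewrite T([shape], dtype[, stride]) into a literal-safe tuple.
    s = re.sub(
        r"T\((\[[^\]]*\]),\s*(\w+)(?:,\s*\[[^\]]*\])?\)",
        r"('T', \1, '\2')",
        serialized,
    )
    # Quote torch.* identifiers so literal_eval accepts them.
    s = re.sub(r"torch\.(\w+)", r"'torch.\1'", s)
    return ast.literal_eval(s)

args, kwargs = to_python(
    "((T([128, 256], f16), [1024, 249, 249]), "
    "{'dtype': torch.float16, 'device': 'cuda'})"
)
# args[0] == ('T', [128, 256], 'f16'); kwargs['dtype'] == 'torch.float16'
```

A downstream runner could then walk the resulting structure, materializing each `('T', shape, dtype)` tuple into a real tensor and each `'torch.*'` string into the corresponding attribute.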
## Acknowledgements

We are extremely grateful to the folks working on TritonBench for these traces and their intuitive format.