CustomOp API Usage Example
This README provides an example of how to write a custom TensorFlow Op using a TPC kernel supported on an HPU device. For more details, refer to the TensorFlow CustomOp API documentation. For further information on training deep learning models using Gaudi, refer to developer.habana.ai.
Table of Contents
- Model-References
- Prerequisites
- Content
- Build and Run CustomDivOp with Default Kernels
- Build and Run CustomDivOp with Custom Kernel
- Important to Know
Prerequisites
- A TPC kernel on which the HpuKernel will run. To write a CustomOp, you must first define the TPC kernel that HpuKernel will run on. This document provides the required steps for using the existing default TPC kernels `div_fwd_fp32` and `div_fwd_bf16`, as well as the custom kernel `customdiv_fwd_f32`, to implement CustomOp. For further information on how to write TPC kernels, refer to the Habana Custom Kernel GitHub page.
- The `habana-tensorflow` Python package. Make sure to install it by following the instructions detailed in the Installation Guide.
- PyTest, for testing the CustomOp library only.
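The two default kernel flavors exist because fp32 and bf16 inputs carry different precision. As a rough illustration (a pure-Python software emulation of bfloat16 rounding, unrelated to how the TPC kernels themselves are implemented), converting a float32 value to bfloat16 drops the low 16 mantissa bits:

```python
import struct

def to_bf16(x: float) -> float:
    # Emulate bfloat16 by round-to-nearest-even truncation of the
    # float32 bit pattern to its top 16 bits, then convert back to
    # float for inspection. Illustrative only.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    rounding = 0x7FFF + ((bits >> 16) & 1)
    bits = (bits + rounding) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_bf16(1.0))        # exactly representable
print(to_bf16(1.0 / 3.0))  # loses precision relative to float32
```

This is why a kernel compiled only for one flavor (such as `customdiv_fwd_f32` below) cannot serve bf16 inputs.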
Content
- C++ file with CustomDivOp definition and Kernel implementation on HPU:
  - CustomDivOp performs a division on input parameters.
  - In addition, it accepts an input attribute `example_attr`, only to demonstrate how it can be retrieved in C++.
- CMake file for building the solution:
  - Looks for TensorFlow using Python3 and sets compiler/linker flags.
  - Looks for `habana-tensorflow` using Python3 and sets the include directory as well as compiler/linker flags.
- Python test to run and validate CustomDivOp. PyTest is configured to accept a command line argument `custom_op_lib`: the location of the built library.
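Functionally, the op described above performs an element-wise division while carrying an extra attribute. A plain-Python reference of those semantics (an illustrative sketch only, not the HPU implementation; the function name `custom_div` is hypothetical) could look like:

```python
def custom_div(x, y, example_attr=0):
    # Element-wise division of two equally shaped sequences.
    # `example_attr` mirrors the attribute the C++ op accepts, only
    # to demonstrate attribute retrieval; here it is merely echoed.
    if len(x) != len(y):
        raise ValueError("inputs must have the same shape")
    print(f"example_attr = {example_attr}")
    return [a / b for a, b in zip(x, y)]

print(custom_div([6.0, 9.0], [2.0, 3.0], example_attr=42))  # [3.0, 3.0]
```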
Build and Run CustomDivOp with Default Kernels
To build and run CustomDivOp with the default kernels `div_fwd_fp32` and `div_fwd_bf16`, run the following:

```bash
mkdir build/ && cd build/
cmake .. && make
pytest -v --custom_op_lib lib/libhpu_custom_div_op.so ../test_custom_div_op.py
```
If all steps are successful, the environment is set up properly.
Build and Run CustomDivOp with Custom Kernel
To build and run CustomDivOp with the custom kernel `customdiv_fwd_f32`, perform the following steps:

- Refer to the Habana Custom Kernel GitHub page to obtain the custom kernel build and generate the custom kernel binary `libcustom_tpc_perf_lib.so`.
- Remove the build directory (`rm -rf build`) if one already exists.
- Build the solution:

  ```bash
  mkdir build/ && cd build/
  cmake -DUSE_CUSTOM_KERNEL=1 .. && make
  ```

- Run the PyTest:

  ```bash
  export GC_KERNEL_PATH=<path to custom kernel binary>/libcustom_tpc_perf_lib.so:$GC_KERNEL_PATH
  # The custom kernel only implements the float32 flavor for now.
  pytest -v --custom_op_lib lib/libhpu_custom_div_op.so -k float32 ../test_custom_div_op.py
  ```
If all steps are successful, the environment is set up properly.
Important to Know
Implementing a single Op requires additional setup in order to use it in a trainable topology.
During training, TensorFlow injects gradient Ops for all trainable Ops defined. In this case, no gradient is defined, so the Op cannot be used in a topology. To make sure the Op can be used, a CustomDivGradOp should be defined and registered in Python with the tf.RegisterGradient method.
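For z = x / y, the gradients such a CustomDivGradOp would need to produce are dz/dx = 1/y and dz/dy = -x/y², each scaled by the upstream gradient. A plain-Python sketch of that computation (illustrative only; this is not the actual registration code, and `div_grad` is a hypothetical name):

```python
def div_grad(x, y, upstream):
    # Gradients of z = x / y with respect to x and y, scaled by the
    # upstream gradient, as a CustomDivGradOp would have to compute.
    grad_x = [g / b for g, b in zip(upstream, y)]
    grad_y = [-g * a / (b * b) for g, a, b in zip(upstream, x, y)]
    return grad_x, grad_y

gx, gy = div_grad([6.0], [2.0], [1.0])
print(gx, gy)  # [0.5] [-1.5]
```

The registered gradient function would return these two tensors in the same order as the op's inputs.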