### DeGirum PySDK Tutorial for Hailo8L/Hailo8
In this notebook, we illustrate the main features of PySDK and how it can be used to quickly develop edge AI applications using the Hailo8L/Hailo8 accelerator. See [Setup Instructions for PySDK Notebooks](../README.md#setup-instructions-for-pysdk-notebooks).
DeGirum's PySDK provides simple APIs to run AI model inference. In general, running an AI model involves three steps:
1. Loading the model with the `degirum.load_model` method
2. Running inference on an input with the `model.predict` method (or by calling the model object directly)
3. Visualizing the inference results with the `results.image_overlay` attribute
--------------------------------------------------------------------------------
# import degirum and degirum_tools
import degirum as dg, degirum_tools

# inference configuration: local inference on a Hailo8L device, models from the DeGirum cloud zoo
inference_host_address = "@local"
zoo_url = "degirum/hailo"
token = ""
device_type = "HAILORT/HAILO8L"

# set model name and image source
model_name = "yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1"
image_source = "../assets/ThreePersons.jpg"

# load AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    device_type=device_type,
)

# perform AI model inference on the given image source
print(f"Running inference using '{model_name}' on image source '{image_source}'")
inference_result = model(image_source)

# print numeric inference results
print(inference_result)

# show the image overlay with inference results
print("Press 'x' or 'q' to stop.")
with degirum_tools.Display("AI Camera") as output_display:
    output_display.show_image(inference_result.image_overlay)
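--------------------------------------------------------------------------------
Beyond the overlay image, you will often want the numeric results themselves. PySDK detection results carry a list of per-object dictionaries (accessible via `inference_result.results`). The sketch below shows how such a list can be post-processed in plain Python; the field names (`label`, `score`, `bbox`) follow DeGirum's detection result format, and the sample data is illustrative rather than taken from a real run.

```python
# Sample detection results; real values come from inference_result.results.
sample_results = [
    {"label": "person", "score": 0.91, "bbox": [12.0, 30.5, 150.2, 320.8]},
    {"label": "person", "score": 0.47, "bbox": [200.1, 40.0, 310.9, 330.3]},
]

def filter_detections(results, min_score=0.5, labels=None):
    """Keep detections at or above a confidence threshold, optionally restricted to given labels."""
    return [
        det
        for det in results
        if det["score"] >= min_score and (labels is None or det["label"] in labels)
    ]

confident = filter_detections(sample_results, min_score=0.5)
print(len(confident))  # → 1: only the 0.91 detection passes
```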
--------------------------------------------------------------------------------
#### Running Inference on a Video Stream
- The `predict_stream` function in `degirum_tools` performs AI inference on video streams in real time. It processes video frames sequentially and yields inference results frame by frame, enabling seamless integration with various video input sources.
- The code below shows how to use `predict_stream` on a video file.
--------------------------------------------------------------------------------
import degirum as dg, degirum_tools

# inference configuration
inference_host_address = "@local"
zoo_url = "degirum/hailo"
token = ""
device_type = "HAILORT/HAILO8L"

# set model name and video source
model_name = "yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1"
video_source = "../assets/Traffic.mp4"

# load AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    device_type=device_type,
)

# run inference frame by frame and display annotated results
with degirum_tools.Display("AI Camera") as output_display:
    for inference_result in degirum_tools.predict_stream(model, video_source):
        output_display.show(inference_result)
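--------------------------------------------------------------------------------
Because `predict_stream` yields results one at a time like an ordinary Python iterator, standard iterator tools work on it. The sketch below uses `itertools.islice` to cap how many frames are processed; `demo_stream` is a stand-in generator (an assumption for illustration, not part of `degirum_tools`) so the pattern can be shown without hardware.

```python
import itertools

# Stand-in for degirum_tools.predict_stream(model, video_source):
# yields one result object per frame, indefinitely.
def demo_stream():
    frame = 0
    while True:
        yield f"result for frame {frame}"
        frame += 1

# Process only the first 10 frames of the stream.
first_ten = list(itertools.islice(demo_stream(), 10))
print(len(first_ten))  # → 10
```

The same `itertools.islice(...)` wrapper can be placed around the real `degirum_tools.predict_stream(model, video_source)` call to limit a long video to a fixed number of frames.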
--------------------------------------------------------------------------------
### Listing Models
You can explore the available models using the `degirum.list_models` method.
--------------------------------------------------------------------------------
import degirum as dg

# inference configuration; device_type accepts a list of device types to filter on
inference_host_address = "@local"
zoo_url = "degirum/hailo"
token = ""
device_type = ["HAILORT/HAILO8L"]

# list models in the zoo that are compatible with the specified device type
model_list = dg.list_models(
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    device_type=device_type,
)
for index, model_name in enumerate(model_list):
    print(index, model_name)
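--------------------------------------------------------------------------------
The listed names can be narrowed down with ordinary string operations to find models of interest. The sketch below filters a sample list of names by prefix; the sample entries are illustrative, and with a real zoo you would iterate over the value returned by `dg.list_models` instead.

```python
# Sample model names; real names come from dg.list_models(...).
sample_models = [
    "yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1",
    "yolov8n_relu6_face--640x640_quant_hailort_hailo8l_1",
    "mobilenet_v2_1.0--224x224_quant_hailort_hailo8l_1",
]

# Keep only the YOLOv8 models.
yolo_models = [name for name in sample_models if name.startswith("yolov8")]
print(len(yolo_models))  # → 2
```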
--------------------------------------------------------------------------------
### Switching to a Local Model Zoo
This repo provides a `models` folder with a couple of example models. You can use the code blocks below as a reference for running models from a local folder.
--------------------------------------------------------------------------------
import degirum as dg

# use the local models folder as the model zoo
inference_host_address = "@local"
zoo_url = "../models"
token = ""
device_type = "HAILORT/HAILO8L"

# list models available in the local zoo
model_list = dg.list_models(
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    device_type=device_type,
)
for model_name in model_list:
    print(model_name)
-------------------------------------------------------------------------------- | |
import degirum as dg, degirum_tools

# use the local models folder as the model zoo
inference_host_address = "@local"
zoo_url = "../models"
token = ""
device_type = "HAILORT/HAILO8L"

# set model name and image source
model_name = "yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1"
image_source = "../assets/ThreePersons.jpg"

# load AI model from the local zoo
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    device_type=device_type,
)

# perform AI model inference on the given image source
print(f"Running inference using '{model_name}' on image source '{image_source}'")
inference_result = model(image_source)

# print numeric inference results
print(inference_result)

# show the image overlay with inference results
print("Press 'x' or 'q' to stop.")
with degirum_tools.Display("AI Camera") as output_display:
    output_display.show_image(inference_result.image_overlay)
--------------------------------------------------------------------------------