![Degirum banner](https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/degirum_banner.png)
## AI Inference on a video stream
This notebook is a simple example of how to use the DeGirum PySDK to perform AI inference on a video stream.
This script works with the following inference options:
1. Run inference on the DeGirum Cloud Platform;
2. Run inference on a DeGirum AI Server deployed on your localhost or on another computer in your LAN or VPN;
3. Run inference on a DeGirum ORCA accelerator installed directly in your computer.
To switch between these options, specify the appropriate `hw_location` value.
When running this notebook locally, you need to specify your cloud API access token in the [env.ini](../../env.ini) file, located in the same directory as this notebook.
When running this notebook in Google Colab, the cloud API access token should be stored in a user secret named `DEGIRUM_CLOUD_TOKEN`.
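As a hedged alternative to editing env.ini, you can supply the token programmatically; the sketch below assumes that `degirum_tools.get_token()` also falls back to a `DEGIRUM_CLOUD_TOKEN` environment variable, and the token value shown is a placeholder.
--------------------------------------------------------------------------------
# optional: provide the cloud API token via an environment variable
# (assumption: degirum_tools.get_token() falls back to DEGIRUM_CLOUD_TOKEN
# when no env.ini file or Colab user secret is present)
import os

os.environ["DEGIRUM_CLOUD_TOKEN"] = "<your cloud API access token>"  # placeholder
--------------------------------------------------------------------------------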
You can change `video_source` to the index of a local webcam, the URL of an RTSP stream, the URL of a YouTube video, or the path to another video file.
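For illustration, a few hedged examples of alternative `video_source` values (all concrete indexes, URLs, and paths below are hypothetical):
--------------------------------------------------------------------------------
# illustrative alternative video sources; pick one (all values are hypothetical)
# video_source = 0                                            # index of a local webcam
# video_source = "rtsp://user:password@192.168.0.10/stream"   # RTSP stream URL
# video_source = "https://www.youtube.com/watch?v=<video_id>" # YouTube video URL
# video_source = "path/to/video.mp4"                          # path to a local video file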
--------------------------------------------------------------------------------
# make sure degirum-tools package is installed
!pip show degirum-tools || pip install degirum-tools
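
# optional sanity check (assumptions: degirum-tools pulls in the degirum PySDK
# as a dependency, and the PySDK exposes a __version__ attribute)
import degirum

print(degirum.__version__)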
--------------------------------------------------------------------------------
#### Specify where you want to run your inference, the model zoo URL, the model name, and the video source
--------------------------------------------------------------------------------
# hw_location: where you want to run inference
#     "@cloud" to use DeGirum cloud
#     "@local" to run on local machine
#     IP address for AI server inference
# model_zoo_url: url/path for model zoo
#     cloud_zoo_url: valid for @cloud, @local, and ai server inference options
#     '': ai server serving models from local folder
#     path to json file: single model zoo in case of @local inference
# model_name: name of the model for running AI inference
# video_source: video source for inference
#     camera index for local camera
#     URL of RTSP stream
#     URL of YouTube Video
#     path to video file (mp4 etc)
hw_location = "@cloud"
model_zoo_url = "degirum/public"
model_name = "yolo_v5s_coco--512x512_quant_n2x_orca1_1"
video_source = "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/example_video.mp4"
--------------------------------------------------------------------------------
#### The rest of the cells below should run without any modifications
--------------------------------------------------------------------------------
import degirum as dg, degirum_tools

# load object detection AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=degirum_tools.get_token(),
)

# run AI inference on video stream
inference_results = degirum_tools.predict_stream(model, video_source)

# display inference results
# Press 'x' or 'q' to stop
with degirum_tools.Display("AI Camera") as display:
    for inference_result in inference_results:
        display.show(inference_result)
--------------------------------------------------------------------------------
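If you do not need a display window, the results can also be consumed headlessly. The sketch below is a hedged example that assumes each detection result exposes a `results` list of dictionaries with `label` and `score` keys, as in typical PySDK detection output.
--------------------------------------------------------------------------------
# headless processing: print detected labels and confidence scores
# (assumption: inference_result.results is a list of dicts with "label"/"score")
for inference_result in degirum_tools.predict_stream(model, video_source):
    for detection in inference_result.results:
        print(detection.get("label"), detection.get("score"))
--------------------------------------------------------------------------------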