![Degirum banner](https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/degirum_banner.png)
## Simple example script illustrating object detection
This notebook is one of the simplest examples of how to use the DeGirum PySDK to run AI inference on an image file using an object detection model.

This script works with the following inference options:

1. Run inference on the DeGirum Cloud Platform;
2. Run inference on a DeGirum AI Server deployed on the local host or on some computer in your LAN or VPN;
3. Run inference on a DeGirum ORCA accelerator directly installed on your computer.

To try a different option, set the `hw_location` variable accordingly.

When running this notebook locally, you need to specify your cloud API access token in the [env.ini](../../env.ini) file at the root of the PySDKExamples repository.

When running this notebook in Google Colab, the cloud API access token should be stored in a user secret named `DEGIRUM_CLOUD_TOKEN`.
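As a sketch, the token entry in `env.ini` might look like the following (the key name is taken from the Colab secret name above; consult the repository's own `env.ini` for the authoritative format):

```ini
; hypothetical sketch of env.ini -- check the repository copy for the exact format
DEGIRUM_CLOUD_TOKEN = "<your cloud API access token>"
```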

--------------------------------------------------------------------------------

# make sure degirum-tools package is installed
!pip show degirum-tools || pip install degirum-tools

--------------------------------------------------------------------------------

#### Specify where you want to run your inference, the model zoo URL, the model name, and the image source

--------------------------------------------------------------------------------

# hw_location: where you want to run inference
#     "@cloud" to use DeGirum cloud
#     "@local" to run on local machine
#     IP address for AI server inference
# model_zoo_url: url/path for model zoo
#     cloud_zoo_url: valid for @cloud, @local, and ai server inference options
#     '': ai server serving models from local folder
#     path to json file: single model zoo in case of @local inference
# model_name: name of the model for running AI inference
# image_source: image source for inference
#     path to image file
#     URL of image
#     PIL image object
#     numpy array
hw_location = "@cloud"
model_zoo_url = "degirum/public"
model_name = "mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1"
image_source = "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/TwoCats.jpg"
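
For comparison, here is a hypothetical sketch of the same variables configured for AI server inference on your LAN (the IP address is a placeholder, and the empty zoo URL follows the comment above about serving models from the server's local folder):

```python
# Hypothetical alternative settings for AI server inference on a LAN.
# Replace the IP address with your own AI server's address.
hw_location = "192.168.0.100"  # IP address of the AI server
model_zoo_url = ""  # AI server serves models from its local folder
model_name = "mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1"
```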

--------------------------------------------------------------------------------

#### The rest of the cells below should run without any modifications

--------------------------------------------------------------------------------

import degirum as dg, degirum_tools

# load object detection AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=degirum_tools.get_token(),
)

# perform AI model inference on given image source
inference_result = model(image_source)

# show results of inference
print(inference_result)  # numeric results
with degirum_tools.Display("AI Camera") as display:
    display.show_image(inference_result)
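
Beyond printing, you can post-process detections programmatically via `inference_result.results`, a list of plain dictionaries. The sketch below filters detections by confidence using hypothetical sample data shaped like a typical PySDK object detection result (keys such as `bbox`, `label`, `score`, and `category_id`; the exact schema depends on the model):

```python
# Hypothetical sample shaped like a PySDK object detection result list;
# in the notebook, the real data comes from inference_result.results.
detections = [
    {"bbox": [10.0, 20.0, 110.0, 220.0], "label": "cat", "score": 0.92, "category_id": 15},
    {"bbox": [150.0, 30.0, 260.0, 240.0], "label": "cat", "score": 0.88, "category_id": 15},
    {"bbox": [5.0, 5.0, 15.0, 15.0], "label": "dog", "score": 0.12, "category_id": 16},
]

def filter_by_score(results, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [det for det in results if det["score"] >= threshold]

# Print the confident detections only
for det in filter_by_score(detections):
    print(f'{det["label"]}: {det["score"]:.2f} at {det["bbox"]}')
```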

--------------------------------------------------------------------------------