ytfeng and WanliZhong committed on
Commit 1047434 · 1 Parent(s): e9da4a7

Bump version 4.9 (#222)


* update benchmark results on i7-12700K

* update benchmark results on edge2

* add benchmark results on Horizon Sunrise X3 PI

* add benchmark results on Jetson Nano B01 (CPU)

* add benchmark results on Raspberry Pi 4B

* add benchmark results on Jetson Nano B01 (GPU)

* add MAIX-III and StarFive benchmark results

* update benchmark results on Khadas VIM3

* update hardware setup info

* bump opencv version requirement to 4.9.0

* update benchmark results on RV1126

* regenerate table

* change * to x in input size text

* regenerate table

* rollback for '\\*'

* regenerate table

* add description for atlas 200i dk a2

* tune table

---------

Co-authored-by: Wanli <[email protected]>

README.md CHANGED
@@ -25,16 +25,24 @@ Guidelines:
 
 Hardware Setup:
 
+x86-64:
 - [Intel Core i7-12700K](https://www.intel.com/content/www/us/en/products/sku/134594/intel-core-i712700k-processor-25m-cache-up-to-5-00-ghz/specifications.html): 8 Performance-cores (3.60 GHz, turbo up to 4.90 GHz), 4 Efficient-cores (2.70 GHz, turbo up to 3.80 GHz), 20 threads.
-- [Raspberry Pi 4B](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/specifications/): Broadcom BCM2711 SoC with a Quad core Cortex-A72 (ARM v8) 64-bit @ 1.5 GHz.
-- [Toybrick RV1126](https://t.rock-chips.com/en/portal.php?mod=view&aid=26): Rockchip RV1126 SoC with a quad-core ARM Cortex-A7 CPU and a 2.0 TOPS NPU.
+
+ARM:
+- [Khadas VIM3](https://www.khadas.com/vim3): Amlogic A311D SoC with a 2.2 GHz Quad core ARM Cortex-A73 + 1.8 GHz dual core Cortex-A53 ARM CPU, and a 5 TOPS NPU. Benchmarks are done using **per-tensor quantized** models. Follow [this guide](https://github.com/opencv/opencv/wiki/TIM-VX-Backend-For-Running-OpenCV-On-NPU) to build OpenCV with TIM-VX backend enabled.
+- [Khadas VIM4](https://www.khadas.com/vim4): Amlogic A311D2 SoC with a 2.2 GHz Quad core ARM Cortex-A73 and a 2.0 GHz Quad core Cortex-A53 CPU, and a 3.2 TOPS built-in NPU.
 - [Khadas Edge 2](https://www.khadas.com/edge2): Rockchip RK3588S SoC with a CPU of 2.25 GHz Quad Core ARM Cortex-A76 + 1.8 GHz Quad Core Cortex-A55, and a 6 TOPS NPU.
+- [Atlas 200 DK](https://e.huawei.com/en/products/computing/ascend/atlas-200): Ascend 310 NPU with 22 TOPS @ INT8. Follow [this guide](https://github.com/opencv/opencv/wiki/Huawei-CANN-Backend) to build OpenCV with CANN backend enabled.
+- [Atlas 200I DK A2](https://www.hiascend.com/hardware/developer-kit-a2): SoC with a 1.0 GHz quad-core CPU and an Ascend 310B NPU with 8 TOPS @ INT8.
+- [NVIDIA Jetson Nano B01](https://developer.nvidia.com/embedded/jetson-nano-developer-kit): a Quad-core ARM A57 @ 1.43 GHz CPU, and a 128-core NVIDIA Maxwell GPU.
+- [NVIDIA Jetson Orin Nano](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/): a 6-core Arm® Cortex®-A78AE v8.2 64-bit CPU, and a 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores (max freq 625 MHz).
+- [Raspberry Pi 4B](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/specifications/): Broadcom BCM2711 SoC with a Quad core Cortex-A72 (ARM v8) 64-bit @ 1.5 GHz.
 - [Horizon Sunrise X3](https://developer.horizon.ai/sunrise): an SoC from Horizon Robotics with a quad-core ARM Cortex-A53 1.2 GHz CPU and a 5 TOPS BPU (a.k.a. NPU).
 - [MAIX-III AXera-Pi](https://wiki.sipeed.com/hardware/en/maixIII/ax-pi/axpi.html#Hardware): Axera AX620A SoC with a quad-core ARM Cortex-A7 CPU and a 3.6 TOPS @ int8 NPU.
+- [Toybrick RV1126](https://t.rock-chips.com/en/portal.php?mod=view&aid=26): Rockchip RV1126 SoC with a quad-core ARM Cortex-A7 CPU and a 2.0 TOPS NPU.
+
+RISC-V:
 - [StarFive VisionFive 2](https://doc-en.rvspace.org/VisionFive2/Product_Brief/VisionFive_2/specification_pb.html): `StarFive JH7110` SoC with a RISC-V quad-core CPU, which can turbo up to 1.5 GHz, and a GPU of model `IMG BXE-4-32 MC1` from Imagination, with a work frequency up to 600 MHz.
-- [NVIDIA Jetson Nano B01](https://developer.nvidia.com/embedded/jetson-nano-developer-kit): a Quad-core ARM A57 @ 1.43 GHz CPU, and a 128-core NVIDIA Maxwell GPU.
-- [Khadas VIM3](https://www.khadas.com/vim3): Amlogic A311D SoC with a 2.2 GHz Quad core ARM Cortex-A73 + 1.8 GHz dual core Cortex-A53 ARM CPU, and a 5 TOPS NPU. Benchmarks are done using **per-tensor quantized** models. Follow [this guide](https://github.com/opencv/opencv/wiki/TIM-VX-Backend-For-Running-OpenCV-On-NPU) to build OpenCV with TIM-VX backend enabled.
-- [Atlas 200 DK](https://e.huawei.com/en/products/computing/ascend/atlas-200): Ascend 310 NPU with 22 TOPS @ INT8. Follow [this guide](https://github.com/opencv/opencv/wiki/Huawei-CANN-Backend) to build OpenCV with CANN backend enabled.
 - [Allwinner Nezha D1](https://d1.docs.aw-ol.com/en): Allwinner D1 SoC with a 1.0 GHz single-core RISC-V [Xuantie C906 CPU](https://www.t-head.cn/product/C906?spm=a2ouz.12986968.0.0.7bfc1384auGNPZ) with RVV 0.7.1 support. YuNet is tested for now. Visit [here](https://github.com/fengyuentau/opencv_zoo_cpp) for more details.
 
 ***Important Notes***:
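
Note: several entries above point to backend-specific OpenCV builds (TIM-VX for the VIM3 NPU, CANN for the Atlas 200 DK). As a minimal sketch of what those guides enable, not part of this commit, here is how a backend-target pair is selected in OpenCV's dnn module; the model path is a placeholder:

```python
import cv2 as cv

# Placeholder model path; any ONNX model from the zoo is loaded the same way.
net = cv.dnn.readNet("face_detection_yunet_2023mar.onnx")

# Default pair used by the CPU tables in benchmark/README.md:
net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)

# With OpenCV built per the TIM-VX guide (e.g. Khadas VIM3 NPU):
# net.setPreferableBackend(cv.dnn.DNN_BACKEND_TIMVX)
# net.setPreferableTarget(cv.dnn.DNN_TARGET_NPU)

# With OpenCV built per the CANN guide (e.g. Atlas 200 DK):
# net.setPreferableBackend(cv.dnn.DNN_BACKEND_CANN)
# net.setPreferableTarget(cv.dnn.DNN_TARGET_NPU)
```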
benchmark/README.md CHANGED
@@ -72,51 +72,51 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
-0.73 0.81 0.58 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-0.85 0.78 0.58 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-4.52 4.70 4.25 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-6.67 7.25 4.25 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-2.53 2.33 2.18 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-3.77 3.71 2.18 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-3.91 3.84 3.65 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-4.66 4.99 3.65 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-8.21 8.97 6.22 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-8.73 10.08 6.22 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-4.33 4.70 3.65 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-4.20 4.05 3.19 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-4.87 3.92 3.19 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-5.30 6.19 3.19 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-24.26 23.81 23.25 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-29.45 30.19 23.25 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-9.06 8.40 7.64 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-10.25 12.59 7.64 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-44.85 45.84 43.06 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-46.10 47.53 43.06 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-144.89 149.58 125.71 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-143.83 146.39 119.75 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-12.52 14.47 11.63 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-12.99 13.11 12.14 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-12.64 12.44 10.82 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-12.64 11.83 11.03 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-22.13 21.99 21.48 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-26.37 33.51 21.48 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-10.07 9.68 8.16 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
 1.19 1.30 1.07 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
-23.86 24.16 23.26 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-23.94 23.76 23.26 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-26.89 24.78 23.26 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-28.82 29.58 23.26 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-17.97 16.18 12.43 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-19.54 20.66 12.43 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-17.73 24.25 9.65 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
-17.65 18.90 9.65 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
-16.97 15.14 9.65 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
-17.21 16.47 9.65 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-17.68 14.54 9.65 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-17.31 16.09 9.65 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
-```
-
-### Rasberry Pi 4B
 
 Specs: [details](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/specifications/)
 - CPU: Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz.
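
For readers new to these tables: each row reports per-inference latency in milliseconds (mean, median, min) for the given input size and model files. A minimal sketch of how such statistics could be gathered with cv.dnn follows; this is an illustration, not the zoo's benchmark.py, and the model path and run count are placeholders:

```python
import statistics

import cv2 as cv
import numpy as np

# Placeholder model and an input size matching one table row ([224, 224]).
net = cv.dnn.readNet("image_classification_mobilenetv1_2022apr.onnx")
blob = np.random.rand(1, 3, 224, 224).astype(np.float32)

times_ms = []
tm = cv.TickMeter()
for _ in range(30):  # warm-up runs omitted for brevity
    tm.reset()
    tm.start()
    net.setInput(blob)
    net.forward()
    tm.stop()
    times_ms.append(tm.getTimeMilli())

print(f"mean={statistics.mean(times_ms):.2f} "
      f"median={statistics.median(times_ms):.2f} "
      f"min={min(times_ms):.2f}")
```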
@@ -129,48 +129,48 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
-5.96 5.93 5.90 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-6.09 6.11 5.90 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-73.30 73.22 72.32 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-88.20 89.95 72.32 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-32.33 32.20 31.99 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-39.82 40.78 31.99 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-108.37 108.31 106.93 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-75.91 78.95 49.78 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-76.29 77.10 75.21 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-77.33 77.73 75.21 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-66.22 66.09 65.90 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-59.91 60.72 54.63 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-62.83 54.85 54.63 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-62.47 62.13 54.63 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-625.82 667.05 425.55 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-508.92 667.04 373.14 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-147.19 146.62 146.31 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-143.70 155.87 139.90 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-214.87 214.19 213.21 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-212.90 212.93 209.55 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-1690.06 2303.34 1480.63 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-1489.54 1435.48 1308.12 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-90.49 89.23 86.83 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-356.63 357.29 354.42 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-217.52 229.39 101.61 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-198.63 198.25 196.68 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-417.23 434.54 388.38 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-381.72 394.15 308.62 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-194.47 195.18 191.67 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
 5.90 5.90 5.81 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
-462.50 463.67 456.98 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-462.97 464.33 456.98 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-470.79 464.35 456.98 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-481.71 479.50 456.98 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-237.73 237.57 236.82 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-265.16 270.22 236.82 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-239.69 298.68 198.88 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
-234.90 249.29 198.88 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
-227.47 200.42 198.88 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
-226.39 213.26 198.88 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-226.10 227.18 198.88 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-220.63 217.04 193.47 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### Jetson Nano B01
@@ -187,48 +187,48 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
-5.64 5.55 5.50 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-5.91 6.00 5.50 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-61.32 61.38 61.08 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-76.85 78.69 61.08 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-27.39 27.54 27.26 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-34.69 35.62 27.26 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-50.39 50.31 50.22 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-48.97 49.42 47.46 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-68.07 67.81 67.72 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-73.97 74.83 67.72 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-63.85 63.63 63.51 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-55.14 55.93 47.84 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-60.80 48.09 47.84 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-60.99 61.22 47.84 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-352.73 352.51 351.53 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-374.22 376.71 351.53 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-134.60 135.00 133.68 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-137.10 137.32 133.68 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-215.10 215.30 214.30 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-216.18 216.19 214.30 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-1207.83 1208.71 1203.64 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-1236.98 1250.21 1203.64 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-123.30 125.37 116.69 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-124.89 125.25 124.53 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-107.99 109.82 94.05 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-108.41 108.33 107.91 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-354.88 354.70 354.34 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-343.35 344.56 333.41 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-89.93 91.58 88.28 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
-5.69 5.72 5.66 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
-238.89 238.22 236.97 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-238.41 240.39 236.97 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-276.96 240.19 236.97 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-304.04 311.21 236.97 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-258.11 258.13 257.64 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-275.27 277.20 257.64 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-254.90 295.88 221.12 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
-252.73 258.90 221.12 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
-245.08 222.01 221.12 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
-245.75 236.58 221.12 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-248.42 251.65 221.12 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-244.31 236.64 221.12 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 GPU (CUDA-FP32):
@@ -239,29 +239,27 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_CUDA
 target=cv.dnn.DNN_TARGET_CUDA
 mean median min input size model
-11.16 10.31 10.23 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-24.82 24.90 24.33 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-14.39 14.44 13.83 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-24.52 24.01 23.84 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-69.63 69.88 64.73 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-29.06 29.10 28.80 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-28.54 28.57 27.88 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-99.05 99.65 93.60 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-54.24 55.24 52.87 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-63.63 63.43 63.32 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-371.45 378.00 366.39 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-43.06 42.32 39.92 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-33.85 33.90 33.61 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-38.16 37.33 37.10 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-91.65 91.98 89.90 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-91.40 92.74 89.76 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
-112.35 111.90 109.99 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-112.68 114.63 109.93 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-183.96 112.72 109.93 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-234.57 249.45 109.93 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-44.24 45.21 41.87 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-45.15 44.15 41.87 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-36.82 46.54 21.75 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
 ```
 
 GPU (CUDA-FP16):
@@ -272,29 +270,27 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_CUDA
 target=cv.dnn.DNN_TARGET_CUDA_FP16
 mean median min input size model
-25.41 25.43 25.31 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-113.14 112.02 111.74 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-89.04 88.90 88.59 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-96.62 96.39 96.26 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-69.78 70.65 66.74 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-118.47 118.45 118.10 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-125.69 126.63 118.10 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-64.08 62.97 62.33 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-366.46 366.88 363.46 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-163.06 163.34 161.77 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-301.10 311.52 297.74 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-43.36 40.65 39.85 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-149.37 149.95 148.01 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-153.89 153.96 153.43 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-44.29 44.03 43.62 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-91.28 92.89 89.79 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
-427.53 428.67 425.63 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-427.79 429.28 425.63 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-414.07 429.46 387.26 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-406.10 407.83 383.41 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-33.07 32.88 32.00 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-33.88 33.64 32.00 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-29.32 33.70 20.69 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
 ```
 
 ### Khadas VIM3
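
(On the two GPU tables above: they differ only in the dnn target constant. A one-line switch, as a sketch assuming an OpenCV build with CUDA support; the model path is a placeholder.)

```python
import cv2 as cv

net = cv.dnn.readNet("face_detection_yunet_2023mar.onnx")  # placeholder path
net.setPreferableBackend(cv.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CUDA)        # GPU (CUDA-FP32) tables
# net.setPreferableTarget(cv.dnn.DNN_TARGET_CUDA_FP16)  # GPU (CUDA-FP16) tables
```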
@@ -304,82 +300,82 @@ Specs: [details](https://www.khadas.com/vim3)
 - NPU: 5 TOPS Performance NPU, INT8 inference up to 1536 MAC. Supports all major deep learning frameworks including TensorFlow and Caffe.
 
 CPU:
-
 ```
-$ python3 benchmark.py --all
 Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
-4.60 4.57 4.47 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-5.10 5.15 4.47 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-53.88 52.80 51.99 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-67.86 67.67 51.99 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-40.93 41.29 27.33 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-42.81 56.31 27.33 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-58.84 56.15 53.14 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-56.36 60.14 45.29 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-76.53 67.95 65.13 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-72.25 69.88 65.13 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-66.50 64.06 58.56 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-59.10 75.36 45.69 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-62.44 48.81 45.69 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-60.46 54.93 45.69 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-372.65 404.31 326.91 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-359.72 336.21 326.91 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-145.21 125.62 124.87 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-130.10 139.45 116.10 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-218.21 216.01 199.88 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-212.69 262.75 170.88 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-1110.87 1112.27 1085.31 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-1128.73 1157.12 1085.31 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-67.31 67.41 66.23 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-147.01 144.01 139.27 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-119.70 118.95 94.09 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-107.63 107.09 105.61 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-333.03 346.65 322.37 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-322.95 315.22 303.07 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-127.16 173.93 99.77 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
-238.38 241.90 233.21 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-238.05 236.53 232.05 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-262.58 238.47 232.05 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-280.63 279.26 232.05 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-194.80 195.37 192.65 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-209.49 208.33 192.65 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-192.90 227.02 161.94 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
-192.52 197.03 161.94 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
-185.92 168.22 161.94 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
-185.01 183.14 161.94 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-186.09 194.14 161.94 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-181.79 181.65 154.21 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 NPU (TIMVX):
-<!-- config face_detection and licence_plate are excluded due to https://github.com/opencv/opencv_zoo/pull/190#discussion_r1257832066 -->
 ```
-$ python3 benchmark.py --all --int8 --cfg_overwrite_backend_target 3 --cfg_exclude face_detection:license_plate
 Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_TIMVX
 target=cv.dnn.DNN_TARGET_NPU
 mean median min input size model
-5.08 4.72 4.70 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-45.83 47.06 43.04 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-29.20 27.55 26.25 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-18.47 18.16 17.96 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-28.25 28.35 27.98 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-149.05 155.10 144.42 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-147.40 147.49 135.90 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-75.91 79.27 71.98 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-30.98 30.56 29.36 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-117.71 119.69 107.37 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-379.46 366.19 360.02 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-33.90 36.32 31.71 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-40.34 41.50 38.47 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-162.54 162.78 155.24 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-161.50 160.70 147.69 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-239.68 239.31 236.03 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-199.42 203.20 166.15 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-197.49 169.51 166.15 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### Atlas 200 DK
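
(On the removed `benchmark.py` invocation above: `--cfg_overwrite_backend_target 3` selects a backend-target pair by index. The sketch below is an illustrative reconstruction of that indexing, not a copy of benchmark.py; consult the script itself for the authoritative list.)

```python
import cv2 as cv

# Illustrative reconstruction of the benchmark's backend-target indexing;
# index 3 is what `--cfg_overwrite_backend_target 3` selected (TIM-VX NPU).
backend_target_pairs = [
    (cv.dnn.DNN_BACKEND_OPENCV, cv.dnn.DNN_TARGET_CPU),      # 0: default CPU
    (cv.dnn.DNN_BACKEND_CUDA, cv.dnn.DNN_TARGET_CUDA),       # 1: CUDA FP32
    (cv.dnn.DNN_BACKEND_CUDA, cv.dnn.DNN_TARGET_CUDA_FP16),  # 2: CUDA FP16
    (cv.dnn.DNN_BACKEND_TIMVX, cv.dnn.DNN_TARGET_NPU),       # 3: TIM-VX NPU
    (cv.dnn.DNN_BACKEND_CANN, cv.dnn.DNN_TARGET_NPU),        # 4: CANN NPU
]

backend, target = backend_target_pairs[3]
```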
@@ -479,47 +475,47 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
-56.45 56.29 56.18 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-48.83 49.41 41.52 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-1554.78 1545.63 1523.62 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-1215.44 1251.08 921.26 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-612.58 613.61 587.83 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-502.02 513.29 399.51 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-525.72 532.34 502.00 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-415.87 442.23 318.14 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-1631.40 1635.83 1608.43 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-1115.29 1159.60 675.51 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-1546.54 1547.64 1516.69 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-1163.10 1227.05 816.99 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-980.56 852.38 689.31 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-837.72 778.61 507.03 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-11819.74 11778.79 11758.31 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-7742.66 8151.17 4442.93 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-3266.08 3250.08 3216.03 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-2260.88 2368.00 1437.58 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-2335.65 2342.12 2304.69 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-1903.82 1962.71 1533.79 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-37604.10 37569.30 37502.48 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-24229.20 25577.94 13483.54 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-415.72 403.04 399.44 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-1133.44 1131.54 1124.83 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-883.96 919.07 655.33 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-1430.98 1424.55 1415.68 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-11131.81 11141.37 11080.20 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-7065.00 7461.37 3748.85 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-790.98 823.19 755.99 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
-4422.65 4432.92 4376.19 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-4407.88 4405.92 4353.22 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-3782.89 4404.01 2682.63 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-3472.93 3557.78 2682.63 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-2183.70 2172.36 2156.29 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-2225.19 2222.58 2156.29 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-2214.03 2302.61 2156.29 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
-2203.45 2231.47 2150.19 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
-2201.14 2188.00 2150.19 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
-2029.28 2178.36 1268.17 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-1923.12 2219.63 1268.17 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-1818.21 2196.98 1184.98 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### Khadas Edge2 (with RK3588)
@@ -537,47 +533,47 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
-2.29 2.30 2.25 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-2.62 2.64 2.25 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-28.19 28.12 28.01 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-36.68 37.80 28.01 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-12.56 12.55 12.50 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-17.28 17.83 12.50 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-22.74 22.87 22.43 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-24.56 24.61 22.43 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-29.91 30.23 28.16 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-35.54 35.46 28.16 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-27.28 27.20 27.20 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-22.91 23.33 19.28 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-27.36 19.46 19.28 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-28.28 29.17 19.28 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-150.06 150.89 147.05 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-180.91 184.75 147.05 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-54.14 52.95 49.31 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-60.01 61.20 49.31 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-117.60 128.98 83.33 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-117.28 150.31 83.33 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-553.58 558.76 535.47 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-594.18 592.64 535.47 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-49.47 49.21 48.84 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-56.35 55.73 55.25 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-57.07 57.19 55.25 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-47.94 48.41 47.05 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-146.02 145.89 139.08 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-157.60 158.88 139.08 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-41.26 42.74 40.08 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
-110.51 111.04 107.73 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-110.67 111.54 107.73 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-131.52 111.76 107.73 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-146.42 149.47 107.73 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-68.70 68.63 68.54 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-78.17 80.48 68.54 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-71.42 91.44 61.14 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
-70.07 76.28 61.14 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
-67.69 61.72 61.14 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
-68.29 65.04 61.14 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-69.58 68.63 61.14 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-68.99 65.02 61.14 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### Horizon Sunrise X3 PI
@@ -594,48 +590,48 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
-10.15 10.07 10.04 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-11.27 11.40 10.04 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-116.44 116.29 116.15 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-158.75 164.22 116.15 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-55.42 55.80 55.27 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-76.04 78.44 55.27 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-91.39 95.06 90.66 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-95.54 95.39 90.66 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-135.16 134.82 134.75 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-148.05 149.55 134.75 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-115.69 115.73 115.38 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-99.37 100.71 85.65 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-111.02 85.94 85.65 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-112.94 112.72 85.65 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-641.92 643.42 640.64 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-700.42 708.18 640.64 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-251.52 250.97 250.36 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-261.00 280.82 250.36 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-395.23 398.77 385.68 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-406.28 416.58 385.68 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-2608.90 2612.42 2597.93 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-2609.88 2609.39 2597.93 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-189.23 188.72 182.28 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-228.95 228.74 228.35 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-227.97 228.61 226.76 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-192.29 192.26 191.74 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-660.62 662.28 659.49 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-646.25 647.89 631.03 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-182.57 185.52 179.71 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
 9.93 9.97 9.82 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
-495.04 493.75 489.41 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-493.63 491.89 489.41 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-598.94 496.42 489.41 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-667.75 683.91 489.41 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-439.96 441.91 436.49 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-465.56 466.86 436.49 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-431.93 495.94 373.61 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
-432.47 435.40 373.61 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
-418.75 375.76 373.61 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
-421.81 410.25 373.61 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-429.30 437.71 373.61 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-422.15 406.50 373.61 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### MAIX-III AX-PI
@@ -653,47 +649,47 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
-83.67 83.60 83.50 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-76.45 77.17 70.53 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-2102.93 2102.75 2102.23 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-1846.25 1872.36 1639.46 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-825.27 825.74 824.83 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-752.57 759.68 693.90 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-742.35 742.87 741.42 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-630.16 641.82 539.73 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-2190.53 2188.01 2187.75 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-1662.81 1712.08 1235.22 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-2099.43 2099.39 2098.89 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-1589.86 1641.45 1181.62 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-1451.24 1182.16 1181.62 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-1277.21 1224.66 888.62 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-15832.31 15832.41 15830.59 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-11649.30 12067.68 8300.79 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-4376.55 4398.44 4371.68 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-3376.78 3480.89 2574.72 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-3422.70 3414.45 3413.72 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-3002.36 3047.94 2655.38 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-50678.08 50651.82 50651.19 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-36249.71 37771.22 24606.37 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-707.79 706.32 699.40 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-1502.15 1501.98 1500.99 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-1300.15 1320.44 1137.60 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-1993.05 1993.98 1991.86 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-14925.56 14926.90 14912.28 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-10507.96 10944.15 6974.74 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-1113.51 1124.83 1106.81 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
-6094.40 6093.77 6091.85 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-6073.33 6076.77 6055.13 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-5547.32 6057.15 4653.05 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-5284.79 5356.47 4653.05 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-3230.93 3228.61 3228.29 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-3312.02 3323.17 3228.29 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-3262.32 3413.03 3182.11 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
-3250.66 3298.06 3182.11 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
-3231.37 3185.37 3179.37 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
-3064.17 3213.91 2345.80 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-2975.21 3227.38 2345.80 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-2862.33 3212.57 2205.48 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### StarFive VisionFive 2
@@ -710,47 +706,47 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
-41.10 41.09 41.04 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
-35.87 36.37 31.62 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
-1050.45 1050.38 1050.01 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
-832.25 854.08 657.41 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
-425.36 425.42 425.19 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
-351.86 372.26 292.72 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
-348.67 347.98 347.67 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
-290.95 297.03 243.79 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
-1135.09 1135.25 1134.72 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
-788.33 822.69 509.67 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
-1065.61 1065.99 1065.30 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
-805.26 830.66 595.78 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
-687.98 609.35 514.14 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
-592.59 555.25 381.33 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
-8091.50 8090.44 8088.72 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
-5394.46 5666.14 3235.23 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
-2270.14 2270.29 2267.51 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
-1584.83 1656.13 1033.23 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
-1732.53 1732.14 1731.47 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
-1434.56 1463.32 1194.57 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
-26172.62 26160.04 26151.67 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
-17004.06 17909.88 9659.54 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
-304.58 309.56 280.05 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
-734.97 735.58 733.95 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
-609.61 621.69 508.04 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
-961.41 962.26 960.39 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
-7594.21 7590.75 7589.16 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
-4884.04 5154.38 2715.94 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
-548.41 550.86 546.09 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
-3031.81 3031.79 3030.41 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
-3031.41 3031.17 3029.99 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
-2638.47 3031.01 1969.10 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
-2446.99 2500.65 1967.72 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
-1397.09 1396.95 1396.74 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-1428.65 1432.59 1396.74 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-1429.56 1467.34 1396.74 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
-1419.29 1450.55 1395.55 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
-1421.72 1434.46 1395.55 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
-1307.27 1415.63 807.66 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
-1237.00 1395.68 807.66 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
-1169.59 1415.29 774.09 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### Khadas VIM4
 
@@ -72,51 +72,51 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
+0.69 0.70 0.68 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+0.79 0.80 0.68 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+5.09 5.13 4.96 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+6.50 6.79 4.96 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+1.79 1.76 1.75 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+2.92 3.11 1.75 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+2.40 2.43 2.37 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+3.11 3.15 2.37 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+5.59 5.56 5.28 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+6.07 6.22 5.28 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+3.13 3.14 3.05 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+3.04 3.02 2.92 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+3.46 3.03 2.92 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+3.84 3.77 2.92 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+19.47 19.47 19.08 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+21.52 21.86 19.08 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+5.68 5.66 5.51 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+7.41 7.36 5.51 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+41.02 40.99 40.86 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+42.23 42.30 40.86 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+78.77 79.76 77.16 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+75.69 75.58 72.57 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+4.01 3.84 3.79 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+5.35 5.41 5.22 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+6.73 6.85 5.22 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+7.65 7.65 7.55 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+15.56 15.57 15.10 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+16.67 16.57 15.10 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+6.33 6.63 6.14 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
 1.19 1.30 1.07 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
+18.76 19.59 18.48 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+18.59 19.33 18.12 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+22.05 18.60 18.12 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
+24.47 25.06 18.12 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
+10.61 10.66 10.50 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
+11.03 11.23 10.50 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+9.85 11.62 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+10.02 9.71 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
+9.53 7.83 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
+9.68 9.21 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+9.85 10.63 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+9.63 9.28 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
+```
+
+### Raspberry Pi 4B
 
 Specs: [details](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/specifications/)
 - CPU: Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz.

@@ -129,48 +129,48 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
+6.23 6.27 6.18 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+6.68 6.73 6.18 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+68.82 69.06 68.45 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+87.42 89.84 68.45 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+27.81 27.77 27.67 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+35.71 36.67 27.67 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+42.58 42.41 42.25 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+46.49 46.95 42.25 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+71.35 71.62 70.78 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+73.81 74.23 70.78 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+64.20 64.30 63.98 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+57.91 58.41 52.53 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+61.35 52.83 52.53 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+61.49 61.28 52.53 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+420.93 420.73 419.04 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+410.96 395.74 364.68 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+153.87 152.71 140.85 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+157.86 145.90 140.85 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+214.59 211.95 210.98 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+215.09 238.39 208.18 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+1614.13 1639.80 1476.58 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+1597.92 1599.12 1476.58 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+48.55 46.87 41.75 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+97.05 95.40 80.93 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+112.39 116.22 80.93 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+105.60 113.27 88.55 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+478.89 498.05 444.14 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+442.56 477.87 369.59 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+116.15 120.13 106.81 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
 5.90 5.90 5.81 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
+325.02 325.88 303.55 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
163
+ 323.54 332.45 303.55 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
164
+ 372.32 328.56 303.55 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
165
+ 407.90 411.97 303.55 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
166
+ 235.70 236.07 234.87 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
167
+ 240.95 241.14 234.87 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
168
+ 226.09 247.02 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
169
+ 229.25 224.63 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
170
+ 224.10 201.29 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
171
+ 223.58 219.82 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
172
+ 225.60 243.89 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
173
+ 220.97 223.16 193.91 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
174
  ```
175
 
176
  ### Jetson Nano B01
 
187
  backend=cv.dnn.DNN_BACKEND_OPENCV
188
  target=cv.dnn.DNN_TARGET_CPU
189
  mean median min input size model
190
+ 5.62 5.54 5.52 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
191
+ 6.14 6.24 5.52 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
192
+ 64.80 64.95 64.60 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
193
+ 78.31 79.85 64.60 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
194
+ 26.54 26.61 26.37 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
195
+ 33.96 34.85 26.37 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
196
+ 38.45 41.45 38.20 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
197
+ 42.62 43.20 38.20 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
198
+ 64.95 64.85 64.73 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
199
+ 72.39 73.16 64.73 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
200
+ 65.72 65.98 65.59 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
201
+ 56.66 57.56 49.10 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
202
+ 62.09 49.27 49.10 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
203
+ 62.17 62.02 49.10 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
204
+ 346.78 348.06 345.53 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
205
+ 371.11 373.54 345.53 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
206
+ 134.36 134.33 133.45 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
207
+ 140.62 140.94 133.45 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
208
+ 215.67 216.76 214.69 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
209
+ 216.58 216.78 214.69 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
210
+ 1209.12 1213.05 1201.68 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
211
+ 1240.02 1249.95 1201.68 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
212
+ 48.39 47.38 45.00 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
213
+ 75.30 75.25 74.96 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
214
+ 83.83 84.99 74.96 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
215
+ 87.65 87.59 87.37 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
216
+ 356.78 357.77 355.69 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
217
+ 346.84 351.10 335.96 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
218
+ 75.20 79.36 73.71 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
219
+ 5.56 5.56 5.48 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
220
+ 209.80 210.04 208.84 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
221
+ 209.60 212.74 208.49 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
222
+ 254.56 211.17 208.49 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
223
+ 286.57 296.56 208.49 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
224
+ 252.60 252.48 252.21 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
225
+ 259.28 261.38 252.21 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
226
+ 245.18 266.94 220.49 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
227
+ 247.72 244.25 220.49 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
228
+ 241.63 221.43 219.06 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
229
+ 243.46 238.98 219.06 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
230
+ 246.87 256.05 219.06 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
231
+ 243.37 238.90 219.06 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
232
  ```
233
 
234
  GPU (CUDA-FP32):
 
239
  backend=cv.dnn.DNN_BACKEND_CUDA
240
  target=cv.dnn.DNN_TARGET_CUDA
241
  mean median min input size model
242
+ 10.99 10.71 9.64 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
243
+ 25.25 25.81 24.54 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
244
+ 13.97 14.01 13.72 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
245
+ 24.47 24.36 23.69 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
246
+ 67.25 67.99 64.90 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
247
+ 28.96 28.92 28.85 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
248
+ 28.61 28.45 27.92 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
249
+ 98.80 100.11 94.57 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
250
+ 54.88 56.51 52.78 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
251
+ 63.86 63.59 63.35 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
252
+ 371.32 374.79 367.78 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
253
+ 47.26 45.56 44.69 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
254
+ 37.61 37.61 33.64 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
255
+ 37.39 37.71 37.03 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
256
+ 90.84 91.34 85.77 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
257
+ 76.44 78.00 74.90 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
258
+ 112.68 112.21 110.42 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
259
+ 112.48 111.86 110.04 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
260
+ 43.99 43.33 41.68 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
261
+ 44.97 44.42 41.68 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
262
+ 36.77 46.38 21.77 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
263
  ```
264
 
265
  GPU (CUDA-FP16):
 
270
  backend=cv.dnn.DNN_BACKEND_CUDA
271
  target=cv.dnn.DNN_TARGET_CUDA_FP16
272
  mean median min input size model
273
+ 25.05 25.05 24.95 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
274
+ 117.82 126.96 113.17 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
275
+ 88.54 88.33 88.04 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
276
+ 97.43 97.38 96.98 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
277
+ 69.40 68.28 66.36 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
278
+ 120.92 131.57 119.37 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
279
+ 128.43 128.08 119.37 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
280
+ 64.90 63.88 62.81 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
281
+ 370.21 371.97 366.38 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
282
+ 164.28 164.75 162.94 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
283
+ 299.22 300.54 295.64 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
284
+ 49.61 47.58 47.14 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
285
+ 149.50 151.12 147.24 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
286
+ 156.59 154.01 153.92 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
287
+ 43.66 43.64 43.31 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
288
+ 75.87 77.33 74.38 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
289
+ 428.97 428.99 426.11 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
290
+ 428.66 427.46 425.66 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
291
+ 32.41 31.90 31.68 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
292
+ 33.42 35.75 31.68 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
293
+ 29.34 36.44 21.27 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
294
  ```
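For reference, the `backend` and `target` values printed in the two GPU tables above map directly onto standard OpenCV DNN calls. A minimal sketch (assuming an OpenCV build with CUDA support; the model file name is one of those listed in the tables):

```
import cv2 as cv

# Load one of the benchmarked models and route inference to the GPU.
net = cv.dnn.readNet("face_detection_yunet_2023mar.onnx")
net.setPreferableBackend(cv.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CUDA)         # CUDA-FP32
# net.setPreferableTarget(cv.dnn.DNN_TARGET_CUDA_FP16)  # CUDA-FP16 instead
```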
295
 
296
  ### Khadas VIM3
 
300
  - NPU: 5 TOPS performance NPU, INT8 inference up to 1536 MAC; supports all major deep learning frameworks including TensorFlow and Caffe.
301
 
302
  CPU:
303
+ <!-- config wechat is excluded because it requires building OpenCV with opencv_contrib; see the availability check after the table -->
304
  ```
305
+ $ python3 benchmark.py --all --cfg_exclude wechat
306
  Benchmarking ...
307
  backend=cv.dnn.DNN_BACKEND_OPENCV
308
  target=cv.dnn.DNN_TARGET_CPU
309
  mean median min input size model
310
+ 4.62 4.62 4.53 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
311
+ 5.24 5.29 4.53 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
312
+ 55.04 54.55 53.54 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
313
+ 67.34 67.96 53.54 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
314
+ 29.50 45.62 26.14 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
315
+ 35.59 36.22 26.14 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
316
+ 35.80 35.08 34.76 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
317
+ 40.32 45.32 34.76 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
318
+ 71.92 66.92 62.98 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
319
+ 70.68 72.31 62.98 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
320
+ 59.27 53.91 52.09 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
321
+ 52.17 67.58 41.23 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
322
+ 55.44 47.28 41.23 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
323
+ 55.83 56.80 41.23 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
324
+ 335.75 329.39 325.42 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
325
+ 340.42 335.78 325.42 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
326
+ 128.58 127.15 124.03 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
327
+ 125.85 126.47 110.14 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
328
+ 179.93 170.66 166.76 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
329
+ 178.61 213.72 164.61 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
330
+ 1108.12 1100.93 1072.45 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
331
+ 1100.58 1121.31 982.74 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
332
+ 32.20 32.84 30.99 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
333
+ 78.26 78.96 75.60 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
334
+ 87.18 88.22 75.60 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
335
+ 83.22 84.20 80.07 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
336
+ 327.07 339.80 321.98 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
337
+ 316.56 302.60 269.10 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
338
+ 75.38 73.67 70.15 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
339
+ 211.02 213.14 199.28 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
340
+ 210.19 217.15 199.28 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
341
+ 242.34 225.59 199.28 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
342
+ 265.33 271.87 199.28 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
343
+ 194.77 195.13 192.69 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
344
+ 197.16 200.94 192.69 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
345
+ 185.45 199.47 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
346
+ 187.64 180.57 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
347
+ 182.53 166.96 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
348
+ 182.90 178.97 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
349
+ 184.26 194.43 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
350
+ 180.65 180.59 155.36 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
351
  ```
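As the comment above notes, WeChatQRCode is absent from this table because it needs a build that includes opencv_contrib. A minimal sketch to check whether the installed build exposes it (the class name is the standard contrib Python binding):

```
import cv2 as cv

# True only for builds that include the opencv_contrib wechat_qrcode module,
# e.g. the opencv-contrib-python wheel; plain opencv-python returns False.
print(hasattr(cv, "wechat_qrcode_WeChatQRCode"))
```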
352
 
353
  NPU (TIMVX):
354
+
355
  ```
356
+ $ python3 benchmark.py --all --int8 --cfg_overwrite_backend_target 3
357
  Benchmarking ...
358
  backend=cv.dnn.DNN_BACKEND_TIMVX
359
  target=cv.dnn.DNN_TARGET_NPU
360
  mean median min input size model
361
+ 5.24 7.45 4.77 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
362
+ 45.96 46.10 43.21 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
363
+ 30.25 30.30 28.68 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
364
+ 19.75 20.18 18.19 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
365
+ 28.75 28.85 28.47 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
366
+ 148.80 148.85 143.45 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
367
+ 143.17 141.11 136.58 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
368
+ 73.19 78.57 62.89 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
369
+ 32.11 30.50 29.97 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
370
+ 116.32 120.72 99.40 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
371
+ 408.18 418.89 374.12 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
372
+ 37.34 38.57 32.03 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
373
+ 41.82 39.84 37.63 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
374
+ 160.70 160.90 153.15 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
375
+ 160.47 160.48 151.88 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
376
+ 239.38 237.47 231.95 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
377
+ 197.61 201.16 162.69 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
378
+ 196.69 164.78 162.69 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
379
  ```
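The `--cfg_overwrite_backend_target 3` flag selects the TIMVX backend with the NPU target for every model, matching the header printed above. A minimal sketch of the equivalent raw DNN calls (assuming OpenCV was built with the TIM-VX backend enabled, and using one of the int8-quantized models from the table):

```
import cv2 as cv

# Quantized model from the table above; the TIM-VX backend runs int8 models on the NPU.
net = cv.dnn.readNet("face_detection_yunet_2023mar_int8.onnx")
net.setPreferableBackend(cv.dnn.DNN_BACKEND_TIMVX)
net.setPreferableTarget(cv.dnn.DNN_TARGET_NPU)
```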
380
 
381
  ### Atlas 200 DK
 
475
  backend=cv.dnn.DNN_BACKEND_OPENCV
476
  target=cv.dnn.DNN_TARGET_CPU
477
  mean median min input size model
478
+ 56.78 56.74 56.46 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
479
+ 51.16 51.41 45.18 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
480
+ 1737.74 1733.23 1723.65 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
481
+ 1298.48 1336.02 920.44 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
482
+ 609.51 611.79 584.89 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
483
+ 500.21 517.38 399.97 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
484
+ 465.12 471.89 445.36 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
485
+ 389.95 385.01 318.29 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
486
+ 1623.94 1607.90 1595.09 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
487
+ 1109.61 1186.03 671.15 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
488
+ 1567.09 1578.61 1542.75 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
489
+ 1188.83 1219.46 850.92 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
490
+ 996.30 884.80 689.11 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
491
+ 849.51 805.93 507.78 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
492
+ 11855.64 11836.80 11750.10 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
493
+ 7752.60 8149.00 4429.83 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
494
+ 3260.22 3251.14 3204.85 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
495
+ 2287.10 2400.53 1482.04 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
496
+ 2335.89 2335.93 2313.63 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
497
+ 1899.16 1945.72 1529.46 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
498
+ 37600.81 37558.85 37414.98 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
499
+ 24185.35 25519.27 13395.47 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
500
+ 411.41 448.29 397.86 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
501
+ 905.77 890.22 866.06 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
502
+ 780.94 817.69 653.26 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
503
+ 1315.48 1321.44 1299.68 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
504
+ 11143.23 11155.05 11105.11 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
505
+ 7056.60 7457.76 3753.42 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
506
+ 736.02 732.90 701.14 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
507
+ 4267.03 4288.42 4229.69 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
508
+ 4265.58 4276.54 4222.22 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
509
+ 3678.65 4265.95 2636.57 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
510
+ 3383.73 3490.66 2636.57 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
511
+ 2180.44 2197.45 2152.67 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
512
+ 2217.08 2241.77 2152.67 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
513
+ 2217.15 2251.65 2152.67 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
514
+ 2206.73 2219.60 2152.63 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
515
+ 2208.84 2219.14 2152.63 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
516
+ 2035.98 2185.05 1268.94 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
517
+ 1927.93 2178.84 1268.94 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
518
+ 1822.23 2213.30 1183.93 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
519
  ```
520
 
521
  ### Khadas Edge2 (with RK3588)
 
533
  backend=cv.dnn.DNN_BACKEND_OPENCV
534
  target=cv.dnn.DNN_TARGET_CPU
535
  mean median min input size model
536
+ 2.30 2.29 2.26 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
537
+ 2.70 2.73 2.26 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
538
+ 28.94 29.00 28.60 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
539
+ 37.46 38.85 28.60 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
540
+ 12.44 12.40 12.36 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
541
+ 17.14 17.64 12.36 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
542
+ 20.22 20.36 20.08 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
543
+ 23.11 23.50 20.08 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
544
+ 29.63 29.78 28.61 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
545
+ 35.57 35.61 28.61 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
546
+ 27.45 27.46 27.25 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
547
+ 22.95 23.37 19.13 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
548
+ 27.50 19.40 19.13 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
549
+ 28.46 29.33 19.13 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
550
+ 151.10 151.79 146.96 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
551
+ 181.69 184.19 146.96 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
552
+ 53.83 52.64 50.24 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
553
+ 60.95 60.06 50.24 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
554
+ 98.03 104.53 83.47 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
555
+ 106.91 110.68 83.47 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
556
+ 554.30 550.32 538.99 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
557
+ 591.95 599.62 538.99 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
558
+ 14.02 13.89 13.56 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
559
+ 45.03 44.65 43.28 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
560
+ 50.87 52.24 43.28 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
561
+ 42.90 42.68 42.40 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
562
+ 148.01 146.42 139.56 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
563
+ 159.16 155.98 139.56 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
564
+ 37.06 37.43 36.39 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
565
+ 103.42 104.24 101.26 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
566
+ 103.41 104.41 100.08 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
567
+ 126.21 103.90 100.08 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
568
+ 142.53 147.66 100.08 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
569
+ 69.49 69.52 69.17 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
570
+ 70.63 70.69 69.17 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
571
+ 67.15 72.03 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
572
+ 67.74 66.72 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
573
+ 66.26 61.46 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
574
+ 67.36 65.65 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
575
+ 68.52 69.93 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
576
+ 68.36 65.65 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
577
  ```
578
 
579
  ### Horizon Sunrise X3 PI
 
590
  backend=cv.dnn.DNN_BACKEND_OPENCV
591
  target=cv.dnn.DNN_TARGET_CPU
592
  mean median min input size model
593
+ 10.56 10.69 10.46 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
594
+ 12.45 12.60 10.46 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
595
+ 124.80 127.36 124.45 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
596
+ 168.67 174.03 124.45 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
597
+ 55.12 55.38 54.91 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
598
+ 76.31 79.00 54.91 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
599
+ 77.44 77.53 77.07 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
600
+ 89.22 90.40 77.07 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
601
+ 132.95 133.21 132.35 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
602
+ 147.40 149.99 132.35 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
603
+ 119.71 120.69 119.32 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
604
+ 102.57 104.40 88.49 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
605
+ 114.56 88.81 88.49 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
606
+ 117.12 116.07 88.49 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
607
+ 653.39 653.85 651.99 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
608
+ 706.43 712.61 651.99 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
609
+ 252.05 252.16 250.98 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
610
+ 273.03 274.27 250.98 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
611
+ 399.35 405.40 390.82 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
612
+ 413.37 410.75 390.82 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
613
+ 2516.91 2516.82 2506.54 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
614
+ 2544.65 2551.55 2506.54 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
615
+ 84.15 85.18 77.31 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
616
+ 168.54 169.05 168.15 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
617
+ 196.46 199.81 168.15 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
618
+ 172.55 172.83 171.85 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
619
+ 678.74 678.04 677.44 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
620
+ 653.71 655.74 631.68 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
621
+ 162.87 165.82 160.04 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
622
  9.93 9.97 9.82 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
623
+ 475.98 475.34 472.72 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
624
+ 475.90 477.57 472.44 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
625
+ 585.72 475.98 472.44 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
626
+ 663.34 687.10 472.44 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
627
+ 446.82 445.92 444.32 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
628
+ 453.60 456.07 444.32 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
629
+ 427.47 463.88 381.10 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
630
+ 432.15 421.18 381.10 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
631
+ 420.61 386.28 380.35 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
632
+ 425.24 426.69 380.35 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
633
+ 431.14 447.85 380.35 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
634
+ 424.77 417.01 380.35 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
635
  ```
636
 
637
  ### MAIX-III AX-PI
 
649
  backend=cv.dnn.DNN_BACKEND_OPENCV
650
  target=cv.dnn.DNN_TARGET_CPU
651
  mean median min input size model
652
+ 83.95 83.76 83.62 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
653
+ 79.35 79.92 75.47 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
654
+ 2326.96 2326.49 2326.08 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
655
+ 1950.83 1988.86 1648.47 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
656
+ 823.42 823.35 822.50 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
657
+ 750.31 757.91 691.41 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
658
+ 664.73 664.61 663.84 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
659
+ 596.29 603.96 540.72 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
660
+ 2175.34 2173.62 2172.91 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
661
+ 1655.11 1705.43 1236.22 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
662
+ 2123.08 2122.92 2122.18 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
663
+ 1619.08 1672.32 1215.05 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
664
+ 1470.74 1216.86 1215.05 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
665
+ 1287.09 1242.01 873.92 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
666
+ 15841.89 15841.20 15828.32 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
667
+ 11652.03 12079.50 8299.15 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
668
+ 4371.75 4396.81 4370.29 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
669
+ 3428.89 3521.87 2670.46 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
670
+ 3421.19 3412.22 3411.20 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
671
+ 2990.22 3034.11 2645.09 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
672
+ 50633.38 50617.44 50614.78 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
673
+ 36260.23 37731.28 24683.40 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
674
+ 548.36 551.97 537.90 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
675
+ 1285.54 1285.40 1284.43 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
676
+ 1204.04 1211.89 1137.65 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
677
+ 1849.87 1848.78 1847.80 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
678
+ 14895.99 14894.27 14884.17 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
679
+ 10496.44 10931.97 6976.60 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
680
+ 1045.98 1052.05 1040.56 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
681
+ 5899.23 5900.08 5896.73 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
682
+ 5889.39 5890.58 5878.81 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
683
+ 5436.61 5884.03 4665.77 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
684
+ 5185.53 5273.76 4539.47 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
685
+ 3230.95 3226.14 3225.53 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
686
+ 3281.31 3295.46 3225.53 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
687
+ 3247.56 3337.52 3196.25 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
688
+ 3243.20 3276.35 3196.25 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
689
+ 3230.49 3196.80 3195.02 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
690
+ 3065.33 3217.99 2348.42 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
691
+ 2976.24 3244.75 2348.42 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
692
+ 2864.72 3219.46 2208.44 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
693
  ```
694
 
695
  ### StarFive VisionFive 2
 
706
  backend=cv.dnn.DNN_BACKEND_OPENCV
707
  target=cv.dnn.DNN_TARGET_CPU
708
  mean median min input size model
709
+ 41.13 41.07 41.06 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
710
+ 37.43 37.83 34.35 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
711
+ 1169.96 1169.72 1168.74 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
712
+ 887.13 987.00 659.71 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
713
+ 423.91 423.98 423.62 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
714
+ 350.89 358.26 292.27 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
715
+ 319.69 319.26 318.76 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
716
+ 278.74 282.75 245.22 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
717
+ 1127.61 1127.36 1127.17 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
718
+ 785.44 819.07 510.77 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
719
+ 1079.69 1079.66 1079.31 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
720
+ 820.15 845.54 611.26 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
721
+ 698.13 612.64 516.41 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
722
+ 600.12 564.13 382.59 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
723
+ 8116.21 8127.96 8113.70 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
724
+ 5408.02 5677.71 3240.16 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
725
+ 2267.96 2268.26 2266.59 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
726
+ 1605.80 1671.91 1073.50 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
727
+ 1731.61 1733.17 1730.54 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
728
+ 1435.43 1477.52 1196.01 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
729
+ 26185.41 26190.85 26168.68 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
730
+ 17019.14 17923.20 9673.68 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
731
+ 288.95 290.28 260.40 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
732
+ 628.64 628.47 628.27 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
733
+ 562.90 569.91 509.93 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
734
+ 910.38 910.94 909.64 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
735
+ 7613.64 7626.26 7606.07 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
736
+ 4895.28 5166.85 2716.65 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
737
+ 524.52 526.33 522.71 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
738
+ 2988.22 2996.51 2980.17 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
739
+ 2981.84 2979.74 2975.80 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
740
+ 2610.78 2979.14 1979.37 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
741
+ 2425.29 2478.92 1979.37 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
742
+ 1404.01 1415.46 1401.36 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
743
+ 1425.42 1426.51 1401.36 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
744
+ 1432.21 1450.47 1401.36 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
745
+ 1425.24 1448.27 1401.36 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
746
+ 1428.84 1446.76 1401.36 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
747
+ 1313.68 1427.46 808.70 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
748
+ 1242.07 1408.93 808.70 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
749
+ 1174.32 1426.07 774.78 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
750
  ```
751
 
752
  ### Khadas VIM4
benchmark/benchmark.py CHANGED
@@ -9,7 +9,7 @@ from models import MODELS
9
  from utils import METRICS, DATALOADERS
10
 
11
  # Check OpenCV version
12
- assert cv.__version__ >= "4.8.0", \
13
  "Please install latest opencv-python for benchmark: python3 -m pip install --upgrade opencv-python"
14
 
15
  # Valid combinations of backends and targets
 
9
  from utils import METRICS, DATALOADERS
10
 
11
  # Check OpenCV version
12
+ assert cv.__version__ >= "4.9.0", \
13
  "Please install latest opencv-python for benchmark: python3 -m pip install --upgrade opencv-python"
14
 
15
  # Valid combinations of backends and targets
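As an aside, the bumped assertion still compares version strings lexicographically, so a future `"4.10.0"` would compare as less than `"4.9.0"`. A minimal sketch of a numeric, tuple-based check (an alternative, not what this commit does):

```
import cv2 as cv

def version_tuple(v):
    # "4.9.0" -> (4, 9, 0); non-digit suffixes such as "-dev" are dropped.
    parts = []
    for p in v.split(".")[:3]:
        digits = "".join(ch for ch in p if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

assert version_tuple(cv.__version__) >= (4, 9, 0), \
    "Please install latest opencv-python for benchmark: python3 -m pip install --upgrade opencv-python"
```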
benchmark/color_table.svg CHANGED
benchmark/table_config.yaml CHANGED
@@ -75,14 +75,14 @@ Models:
75
 
76
  - name: "CRNN-EN"
77
  task: "Text Recognition"
78
- input_size: "100*32"
79
  folder: "text_recognition_crnn"
80
  acceptable_time: 2000
81
  keyword: "text_recognition_CRNN_EN"
82
 
83
  - name: "CRNN-CN"
84
  task: "Text Recognition"
85
- input_size: "100*32"
86
  folder: "text_recognition_crnn"
87
  acceptable_time: 2000
88
  keyword: "text_recognition_CRNN_CN"
@@ -170,28 +170,24 @@ Devices:
170
  display_info: "Intel\n12700K\nCPU"
171
  platform: "CPU"
172
 
173
- - name: "Rasberry Pi 4B"
174
- display_info: "Rasberry Pi 4B\nBCM2711\nCPU"
175
- platform: "CPU"
176
-
177
- - name: "StarFive VisionFive 2"
178
- display_info: "StarFive VisionFive 2\nStarFive JH7110\nCPU"
179
  platform: "CPU"
180
 
181
- - name: "Toybrick RV1126"
182
- display_info: "Toybrick\nRV1126\nCPU"
183
  platform: "CPU"
184
 
185
  - name: "Khadas Edge2 (with RK3588)"
186
  display_info: "Khadas Edge2\nRK3588S\nCPU"
187
  platform: "CPU"
188
 
189
- - name: "Horizon Sunrise X3 PI"
190
- display_info: "Horizon Sunrise Pi\nX3\nCPU"
191
  platform: "CPU"
192
 
193
- - name: "MAIX-III AX-PI"
194
- display_info: "MAIX-III AX-Pi\nAX620A\nCPU"
195
  platform: "CPU"
196
 
197
  - name: "Jetson Nano B01"
@@ -202,20 +198,24 @@ Devices:
202
  display_info: "Jetson Nano\nOrin\nCPU"
203
  platform: "CPU"
204
 
205
- - name: "Khadas VIM3"
206
- display_info: "Khadas VIM3\nA311D\nCPU"
207
  platform: "CPU"
208
 
209
- - name: "Khadas VIM4"
210
- display_info: "Khadas VIM4\nA311D2\nCPU"
211
  platform: "CPU"
212
 
213
- - name: "Atlas 200 DK"
214
- display_info: "Atlas 200 DK\nAscend 310\nCPU"
215
  platform: "CPU"
216
 
217
- - name: "Atlas 200I DK A2"
218
- display_info: "Atlas 200I DK A2\nAscend 310B\nCPU"
219
  platform: "CPU"
220
 
221
  - name: "Jetson Nano B01"
@@ -243,4 +243,4 @@ Suffixes:
243
  - model: "MobileNet-V2"
244
  device: "Khadas VIM3"
245
  platform: "NPU (TIMVX)"
246
- str: "\\*"
 
75
 
76
  - name: "CRNN-EN"
77
  task: "Text Recognition"
78
+ input_size: "100x32"
79
  folder: "text_recognition_crnn"
80
  acceptable_time: 2000
81
  keyword: "text_recognition_CRNN_EN"
82
 
83
  - name: "CRNN-CN"
84
  task: "Text Recognition"
85
+ input_size: "100x32"
86
  folder: "text_recognition_crnn"
87
  acceptable_time: 2000
88
  keyword: "text_recognition_CRNN_CN"
 
170
  display_info: "Intel\n12700K\nCPU"
171
  platform: "CPU"
172
 
173
+ - name: "Khadas VIM3"
174
+ display_info: "Khadas VIM3\nA311D\nCPU"
175
  platform: "CPU"
176
 
177
+ - name: "Khadas VIM4"
178
+ display_info: "Khadas VIM4\nA311D2\nCPU"
179
  platform: "CPU"
180
 
181
  - name: "Khadas Edge2 (with RK3588)"
182
  display_info: "Khadas Edge2\nRK3588S\nCPU"
183
  platform: "CPU"
184
 
185
+ - name: "Atlas 200 DK"
186
+ display_info: "Atlas 200 DK\nAscend 310\nCPU"
187
  platform: "CPU"
188
 
189
+ - name: "Atlas 200I DK A2"
190
+ display_info: "Atlas 200I DK A2\nAscend 310B\nCPU"
191
  platform: "CPU"
192
 
193
  - name: "Jetson Nano B01"
 
198
  display_info: "Jetson Nano\nOrin\nCPU"
199
  platform: "CPU"
200
 
201
+ - name: "Raspberry Pi 4B"
202
+ display_info: "Raspberry Pi 4B\nBCM2711\nCPU"
203
  platform: "CPU"
204
 
205
+ - name: "Horizon Sunrise X3 PI"
206
+ display_info: "Horizon Sunrise Pi\nX3\nCPU"
207
  platform: "CPU"
208
 
209
+ - name: "MAIX-III AX-PI"
210
+ display_info: "MAIX-III AX-Pi\nAX620A\nCPU"
211
  platform: "CPU"
212
 
213
+ - name: "Toybrick RV1126"
214
+ display_info: "Toybrick\nRV1126\nCPU"
215
+ platform: "CPU"
216
+
217
+ - name: "StarFive VisionFive 2"
218
+ display_info: "StarFive VisionFive 2\nStarFive JH7110\nCPU"
219
  platform: "CPU"
220
 
221
  - name: "Jetson Nano B01"
 
243
  - model: "MobileNet-V2"
244
  device: "Khadas VIM3"
245
  platform: "NPU (TIMVX)"
246
+ str: "\\*"
models/face_detection_yunet/CMakeLists.txt CHANGED
@@ -1,7 +1,7 @@
1
  cmake_minimum_required(VERSION 3.24.0)
2
  project(opencv_zoo_face_detection_yunet)
3
 
4
- set(OPENCV_VERSION "4.8.0")
5
  set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
6
 
7
  # Find OpenCV
 
1
  cmake_minimum_required(VERSION 3.24.0)
2
  project(opencv_zoo_face_detection_yunet)
3
 
4
+ set(OPENCV_VERSION "4.9.0")
5
  set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
6
 
7
  # Find OpenCV
models/face_detection_yunet/demo.py CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
12
  from yunet import YuNet
13
 
14
  # Check OpenCV version
15
- assert cv.__version__ >= "4.8.0", \
16
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
17
 
18
  # Valid combinations of backends and targets
 
12
  from yunet import YuNet
13
 
14
  # Check OpenCV version
15
+ assert cv.__version__ >= "4.9.0", \
16
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
17
 
18
  # Valid combinations of backends and targets
models/face_recognition_sface/demo.py CHANGED
@@ -16,7 +16,7 @@ sys.path.append('../face_detection_yunet')
16
  from yunet import YuNet
17
 
18
  # Check OpenCV version
19
- assert cv.__version__ >= "4.8.0", \
20
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
21
 
22
  # Valid combinations of backends and targets
 
16
  from yunet import YuNet
17
 
18
  # Check OpenCV version
19
+ assert cv.__version__ >= "4.9.0", \
20
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
21
 
22
  # Valid combinations of backends and targets
models/facial_expression_recognition/demo.py CHANGED
@@ -12,7 +12,7 @@ sys.path.append('../face_detection_yunet')
12
  from yunet import YuNet
13
 
14
  # Check OpenCV version
15
- assert cv.__version__ >= "4.8.0", \
16
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
17
 
18
  # Valid combinations of backends and targets
 
12
  from yunet import YuNet
13
 
14
  # Check OpenCV version
15
+ assert cv.__version__ >= "4.9.0", \
16
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
17
 
18
  # Valid combinations of backends and targets
models/handpose_estimation_mediapipe/demo.py CHANGED
@@ -10,7 +10,7 @@ sys.path.append('../palm_detection_mediapipe')
10
  from mp_palmdet import MPPalmDet
11
 
12
  # Check OpenCV version
13
- assert cv.__version__ >= "4.8.0", \
14
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
15
 
16
  # Valid combinations of backends and targets
 
10
  from mp_palmdet import MPPalmDet
11
 
12
  # Check OpenCV version
13
+ assert cv.__version__ >= "4.9.0", \
14
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
15
 
16
  # Valid combinations of backends and targets
models/human_segmentation_pphumanseg/demo.py CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
12
  from pphumanseg import PPHumanSeg
13
 
14
  # Check OpenCV version
15
- assert cv.__version__ >= "4.8.0", \
16
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
17
 
18
  # Valid combinations of backends and targets
 
12
  from pphumanseg import PPHumanSeg
13
 
14
  # Check OpenCV version
15
+ assert cv.__version__ >= "4.9.0", \
16
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
17
 
18
  # Valid combinations of backends and targets
models/image_classification_mobilenet/CMakeLists.txt CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_image_classification_mobilenet")
3
 
4
  PROJECT (${project_name})
5
 
6
- set(OPENCV_VERSION "4.8.0")
7
  set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
8
  find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
9
  # Find OpenCV, you may need to set OpenCV_DIR variable
 
3
 
4
  PROJECT (${project_name})
5
 
6
+ set(OPENCV_VERSION "4.9.0")
7
  set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
8
  find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
9
  # Find OpenCV, you may need to set OpenCV_DIR variable
models/image_classification_mobilenet/demo.py CHANGED
@@ -6,7 +6,7 @@ import cv2 as cv
6
  from mobilenet import MobileNet
7
 
8
  # Check OpenCV version
9
- assert cv.__version__ >= "4.8.0", \
10
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
11
 
12
  # Valid combinations of backends and targets
 
6
  from mobilenet import MobileNet
7
 
8
  # Check OpenCV version
9
+ assert cv.__version__ >= "4.9.0", \
10
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
11
 
12
  # Valid combinations of backends and targets
models/image_classification_ppresnet/demo.py CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
12
  from ppresnet import PPResNet
13
 
14
  # Check OpenCV version
15
- assert cv.__version__ >= "4.8.0", \
16
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
17
 
18
  # Valid combinations of backends and targets
 
12
  from ppresnet import PPResNet
13
 
14
  # Check OpenCV version
15
+ assert cv.__version__ >= "4.9.0", \
16
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
17
 
18
  # Valid combinations of backends and targets
models/license_plate_detection_yunet/demo.py CHANGED
@@ -6,7 +6,7 @@ import cv2 as cv
6
  from lpd_yunet import LPD_YuNet
7
 
8
  # Check OpenCV version
9
- assert cv.__version__ >= "4.8.0", \
10
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
11
 
12
  # Valid combinations of backends and targets
 
6
  from lpd_yunet import LPD_YuNet
7
 
8
  # Check OpenCV version
9
+ assert cv.__version__ >= "4.9.0", \
10
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
11
 
12
  # Valid combinations of backends and targets
models/object_detection_nanodet/demo.py CHANGED
@@ -5,7 +5,7 @@ import argparse
5
  from nanodet import NanoDet
6
 
7
  # Check OpenCV version
8
- assert cv.__version__ >= "4.8.0", \
9
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
10
 
11
  # Valid combinations of backends and targets
 
5
  from nanodet import NanoDet
6
 
7
  # Check OpenCV version
8
+ assert cv.__version__ >= "4.9.0", \
9
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
10
 
11
  # Valid combinations of backends and targets
models/object_detection_yolox/CMakeLists.txt CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_object_detection_yolox")
3
 
4
  PROJECT (${project_name})
5
 
6
- set(OPENCV_VERSION "4.7.0")
7
  set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
8
  find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
9
  # Find OpenCV, you may need to set OpenCV_DIR variable
 
3
 
4
  PROJECT (${project_name})
5
 
6
+ set(OPENCV_VERSION "4.9.0")
7
  set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
8
  find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
9
  # Find OpenCV, you may need to set OpenCV_DIR variable
models/object_detection_yolox/demo.py CHANGED
@@ -5,7 +5,7 @@ import argparse
5
  from yolox import YoloX
6
 
7
  # Check OpenCV version
8
- assert cv.__version__ >= "4.8.0", \
9
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
10
 
11
  # Valid combinations of backends and targets
 
5
  from yolox import YoloX
6
 
7
  # Check OpenCV version
8
+ assert cv.__version__ >= "4.9.0", \
9
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
10
 
11
  # Valid combinations of backends and targets
models/object_tracking_vittrack/demo.py CHANGED
@@ -6,11 +6,10 @@ import argparse
6
  import numpy as np
7
  import cv2 as cv
8
 
9
-
10
  from vittrack import VitTrack
11
 
12
  # Check OpenCV version
13
- assert cv.__version__ > "4.8.0", \
14
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
15
 
16
  # Valid combinations of backends and targets
 
6
  import numpy as np
7
  import cv2 as cv
8
 
 
9
  from vittrack import VitTrack
10
 
11
  # Check OpenCV version
12
+ assert cv.__version__ > "4.9.0", \
13
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
14
 
15
  # Valid combinations of backends and targets
models/optical_flow_estimation_raft/demo.py CHANGED
@@ -5,6 +5,10 @@ import numpy as np
5
 
6
  from raft import Raft
7
 
  parser = argparse.ArgumentParser(description='RAFT (https://github.com/princeton-vl/RAFT)')
9
  parser.add_argument('--input1', '-i1', type=str,
10
  help='Usage: Set input1 path to first image, omit if using camera or video.')
 
5
 
6
  from raft import Raft
7
 
8
+ # Check OpenCV version
9
+ assert cv.__version__ > "4.9.0", \
10
+ "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
11
+
12
  parser = argparse.ArgumentParser(description='RAFT (https://github.com/princeton-vl/RAFT)')
13
  parser.add_argument('--input1', '-i1', type=str,
14
  help='Usage: Set input1 path to first image, omit if using camera or video.')
models/palm_detection_mediapipe/demo.py CHANGED
@@ -6,7 +6,7 @@ import cv2 as cv
6
  from mp_palmdet import MPPalmDet
7
 
8
  # Check OpenCV version
9
- assert cv.__version__ >= "4.8.0", \
10
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
11
 
12
  # Valid combinations of backends and targets
 
6
  from mp_palmdet import MPPalmDet
7
 
8
  # Check OpenCV version
9
+ assert cv.__version__ >= "4.9.0", \
10
  "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
11
 
12
  # Valid combinations of backends and targets
models/person_detection_mediapipe/CMakeLists.txt CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_person_detection_mediapipe")
3
 
4
  PROJECT (${project_name})
5
 
6
- set(OPENCV_VERSION "4.7.0")
7
  set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
8
  find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
9
  # Find OpenCV, you may need to set OpenCV_DIR variable
 
3
 
4
  PROJECT (${project_name})
5
 
6
+ set(OPENCV_VERSION "4.9.0")
7
  set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
8
  find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
9
  # Find OpenCV, you may need to set OpenCV_DIR variable
models/person_detection_mediapipe/demo.py CHANGED
@@ -6,7 +6,7 @@ import cv2 as cv
 from mp_persondet import MPPersonDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.8.0", \
+assert cv.__version__ >= "4.9.0", \
     "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/person_reid_youtureid/demo.py CHANGED
@@ -13,7 +13,7 @@ import cv2 as cv
 from youtureid import YoutuReID
 
 # Check OpenCV version
-assert cv.__version__ >= "4.8.0", \
+assert cv.__version__ >= "4.9.0", \
     "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/pose_estimation_mediapipe/CMakeLists.txt CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_pose_estimation_mediapipe")
 
 PROJECT (${project_name})
 
-set(OPENCV_VERSION "4.8.0")
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
 # Find OpenCV, you may need to set OpenCV_DIR variable
models/pose_estimation_mediapipe/demo.py CHANGED
@@ -10,7 +10,7 @@ sys.path.append('../person_detection_mediapipe')
 from mp_persondet import MPPersonDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.8.0", \
+assert cv.__version__ >= "4.9.0", \
     "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/qrcode_wechatqrcode/demo.py CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
 from wechatqrcode import WeChatQRCode
 
 # Check OpenCV version
-assert cv.__version__ >= "4.8.0", \
+assert cv.__version__ >= "4.9.0", \
     "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/text_detection_ppocr/CMakeLists.txt CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_text_detection_ppocr")
 
 PROJECT (${project_name})
 
-set(OPENCV_VERSION "4.8.0")
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
 # Find OpenCV, you may need to set OpenCV_DIR variable
models/text_detection_ppocr/demo.py CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
 from ppocr_det import PPOCRDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.8.0", \
+assert cv.__version__ >= "4.9.0", \
     "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/text_recognition_crnn/CMakeLists.txt CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_text_recognition_crnn")
 
 PROJECT (${project_name})
 
-set(OPENCV_VERSION "4.7.0")
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
 # Find OpenCV, you may need to set OpenCV_DIR variable
models/text_recognition_crnn/demo.py CHANGED
@@ -16,7 +16,7 @@ sys.path.append('../text_detection_ppocr')
 from ppocr_det import PPOCRDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.8.0", \
+assert cv.__version__ >= "4.9.0", \
     "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
tools/quantize/requirements.txt CHANGED
@@ -1,4 +1,4 @@
-opencv-python>=4.8.0
+opencv-python>=4.9.0
 onnx
 onnxruntime
 onnxruntime-extensions
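
Note: the demos compare `cv.__version__` against the required version as strings. Lexicographic comparison happens to work for 4.9.0, but it breaks once OpenCV reaches 4.10.0 (`"4.10.0" < "4.9.0"` as strings), and the strict `>` used in the vittrack and raft demos rejects exactly 4.9.0. Below is a minimal sketch of a numeric alternative; `opencv_version_at_least` is a hypothetical helper for illustration, not part of this commit.

```python
import cv2 as cv

def opencv_version_at_least(required: str) -> bool:
    # Hypothetical helper, not part of this commit: compare dotted
    # version strings numerically instead of lexicographically.
    # Drop any suffix such as the "-dev" in "4.9.0-dev" before splitting.
    parse = lambda v: tuple(int(x) for x in v.split("-")[0].split("."))
    return parse(cv.__version__) >= parse(required)

assert opencv_version_at_least("4.9.0"), \
    "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
```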