Commit 1047434
1 parent: e9da4a7

Bump version 4.9 (#222)
* update benchmark results on i7-12700K
* update benchmark results on edge2
* add benchmark results on Horizon Sunrise X3 PI
* add benchmark results on Jetson Nano B01 (CPU)
* add benchmark results on Raspberry Pi 4B
* add benchmark results on Jetson Nano B01 (GPU)
* add MAIX-III and StarFive benchmark results
* update benchmark results on Khadas VIM3
* update hardware setup info
* bump opencv version requirement to 4.9.0
* update benchmark results on RV1126
* regenerate table
* change * to x in input size text
* regenerate table
* rollback for '\\*'
* regenerate table
* add description for atlas 200i dk a2
* tune table
---------
Co-authored-by: Wanli <[email protected]>
- README.md +13 -5
- benchmark/README.md +437 -441
- benchmark/benchmark.py +1 -1
- benchmark/color_table.svg +0 -0
- benchmark/table_config.yaml +23 -23
- models/face_detection_yunet/CMakeLists.txt +1 -1
- models/face_detection_yunet/demo.py +1 -1
- models/face_recognition_sface/demo.py +1 -1
- models/facial_expression_recognition/demo.py +1 -1
- models/handpose_estimation_mediapipe/demo.py +1 -1
- models/human_segmentation_pphumanseg/demo.py +1 -1
- models/image_classification_mobilenet/CMakeLists.txt +1 -1
- models/image_classification_mobilenet/demo.py +1 -1
- models/image_classification_ppresnet/demo.py +1 -1
- models/license_plate_detection_yunet/demo.py +1 -1
- models/object_detection_nanodet/demo.py +1 -1
- models/object_detection_yolox/CMakeLists.txt +1 -1
- models/object_detection_yolox/demo.py +1 -1
- models/object_tracking_vittrack/demo.py +1 -2
- models/optical_flow_estimation_raft/demo.py +4 -0
- models/palm_detection_mediapipe/demo.py +1 -1
- models/person_detection_mediapipe/CMakeLists.txt +1 -1
- models/person_detection_mediapipe/demo.py +1 -1
- models/person_reid_youtureid/demo.py +1 -1
- models/pose_estimation_mediapipe/CMakeLists.txt +1 -1
- models/pose_estimation_mediapipe/demo.py +1 -1
- models/qrcode_wechatqrcode/demo.py +1 -1
- models/text_detection_ppocr/CMakeLists.txt +1 -1
- models/text_detection_ppocr/demo.py +1 -1
- models/text_recognition_crnn/CMakeLists.txt +1 -1
- models/text_recognition_crnn/demo.py +1 -1
- tools/quantize/requirements.txt +1 -1
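
Several of the `+1 -1` changes above raise the minimum OpenCV version to 4.9.0 (`tools/quantize/requirements.txt`, the demo scripts, and the `CMakeLists.txt` files). A quick check for that floor — a minimal sketch, assuming `opencv-python` is importable as `cv2`:

```python
# Verify the local OpenCV satisfies the requirement bumped by this commit.
import cv2

major, minor = (int(x) for x in cv2.__version__.split(".")[:2])
if (major, minor) < (4, 9):
    raise RuntimeError(f"OpenCV {cv2.__version__} found; this revision expects >= 4.9.0")
print(f"OpenCV {cv2.__version__} OK")
```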
README.md
CHANGED
@@ -25,16 +25,24 @@ Guidelines:
 
 Hardware Setup:
 
+x86-64:
 - [Intel Core i7-12700K](https://www.intel.com/content/www/us/en/products/sku/134594/intel-core-i712700k-processor-25m-cache-up-to-5-00-ghz/specifications.html): 8 Performance-cores (3.60 GHz, turbo up to 4.90 GHz), 4 Efficient-cores (2.70 GHz, turbo up to 3.80 GHz), 20 threads.
+
+ARM:
+- [Khadas VIM3](https://www.khadas.com/vim3): Amlogic A311D SoC with a 2.2 GHz quad-core ARM Cortex-A73 + 1.8 GHz dual-core Cortex-A53 CPU, and a 5 TOPS NPU. Benchmarks are done using **per-tensor quantized** models. Follow [this guide](https://github.com/opencv/opencv/wiki/TIM-VX-Backend-For-Running-OpenCV-On-NPU) to build OpenCV with TIM-VX backend enabled.
+- [Khadas VIM4](https://www.khadas.com/vim4): Amlogic A311D2 SoC with a 2.2 GHz quad-core ARM Cortex-A73 + 2.0 GHz quad-core Cortex-A53 CPU, and a 3.2 TOPS built-in NPU.
 - [Khadas Edge 2](https://www.khadas.com/edge2): Rockchip RK3588S SoC with a 2.25 GHz quad-core ARM Cortex-A76 + 1.8 GHz quad-core Cortex-A55 CPU, and a 6 TOPS NPU.
+- [Atlas 200 DK](https://e.huawei.com/en/products/computing/ascend/atlas-200): Ascend 310 NPU with 22 TOPS @ INT8. Follow [this guide](https://github.com/opencv/opencv/wiki/Huawei-CANN-Backend) to build OpenCV with CANN backend enabled.
+- [Atlas 200I DK A2](https://www.hiascend.com/hardware/developer-kit-a2): SoC with a 1.0 GHz quad-core CPU and an Ascend 310B NPU with 8 TOPS @ INT8.
+- [NVIDIA Jetson Nano B01](https://developer.nvidia.com/embedded/jetson-nano-developer-kit): a quad-core ARM A57 @ 1.43 GHz CPU, and a 128-core NVIDIA Maxwell GPU.
+- [NVIDIA Jetson Nano Orin](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/): a 6-core Arm® Cortex®-A78AE v8.2 64-bit CPU, and a 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores (max freq 625 MHz).
+- [Raspberry Pi 4B](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/specifications/): Broadcom BCM2711 SoC with a quad-core Cortex-A72 (ARM v8) 64-bit CPU @ 1.5 GHz.
 - [Horizon Sunrise X3](https://developer.horizon.ai/sunrise): an SoC from Horizon Robotics with a quad-core ARM Cortex-A53 1.2 GHz CPU and a 5 TOPS BPU (a.k.a. NPU).
 - [MAIX-III AXera-Pi](https://wiki.sipeed.com/hardware/en/maixIII/ax-pi/axpi.html#Hardware): Axera AX620A SoC with a quad-core ARM Cortex-A7 CPU and a 3.6 TOPS @ int8 NPU.
+- [Toybrick RV1126](https://t.rock-chips.com/en/portal.php?mod=view&aid=26): Rockchip RV1126 SoC with a quad-core ARM Cortex-A7 CPU and a 2.0 TOPS NPU.
+
+RISC-V:
 - [StarFive VisionFive 2](https://doc-en.rvspace.org/VisionFive2/Product_Brief/VisionFive_2/specification_pb.html): `StarFive JH7110` SoC with a quad-core RISC-V CPU (turbo up to 1.5 GHz) and an Imagination `IMG BXE-4-32 MC1` GPU (up to 600 MHz).
-- [NVIDIA Jetson Nano B01](https://developer.nvidia.com/embedded/jetson-nano-developer-kit): a quad-core ARM A57 @ 1.43 GHz CPU, and a 128-core NVIDIA Maxwell GPU.
-- [Khadas VIM3](https://www.khadas.com/vim3): Amlogic A311D SoC with a 2.2 GHz quad-core ARM Cortex-A73 + 1.8 GHz dual-core Cortex-A53 CPU, and a 5 TOPS NPU.
-- [Atlas 200 DK](https://e.huawei.com/en/products/computing/ascend/atlas-200): Ascend 310 NPU with 22 TOPS @ INT8.
 - [Allwinner Nezha D1](https://d1.docs.aw-ol.com/en): Allwinner D1 SoC with a 1.0 GHz single-core RISC-V [Xuantie C906 CPU](https://www.t-head.cn/product/C906?spm=a2ouz.12986968.0.0.7bfc1384auGNPZ) with RVV 0.7.1 support. YuNet is tested for now. Visit [here](https://github.com/fengyuentau/opencv_zoo_cpp) for more details.
 
 ***Important Notes***:
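
The benchmark logs below cycle through a handful of backend/target pairs. These are standard OpenCV DNN settings; the following is a sketch of how a net is steered to each device, assuming an OpenCV build with the relevant backend enabled (see the TIM-VX and CANN guides linked above) and using a zoo model file purely as an example:

```python
import cv2 as cv

net = cv.dnn.readNet("face_detection_yunet_2023mar.onnx")  # any zoo ONNX model

# Default CPU path, as in the plain `python3 benchmark.py --all` runs:
net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)

# CUDA on Jetson-class boards (needs OpenCV built with CUDA):
# net.setPreferableBackend(cv.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv.dnn.DNN_TARGET_CUDA)       # FP32
# net.setPreferableTarget(cv.dnn.DNN_TARGET_CUDA_FP16)  # FP16

# NPUs: TIM-VX (Khadas VIM3) or CANN (Atlas boards):
# net.setPreferableBackend(cv.dnn.DNN_BACKEND_TIMVX)
# net.setPreferableBackend(cv.dnn.DNN_BACKEND_CANN)
# net.setPreferableTarget(cv.dnn.DNN_TARGET_NPU)
```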
benchmark/README.md
CHANGED
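Every table below reports per-model latency in milliseconds as mean/median/min over repeated forward passes. A simplified measurement loop in the same spirit — not the actual `benchmark.py`, which also handles warmup, per-model input sizes, and config files; the MobileNet file and 224x224 input are examples taken from these tables:

```python
import time
import numpy as np
import cv2 as cv

net = cv.dnn.readNet("image_classification_mobilenetv1_2022apr.onnx")
blob = cv.dnn.blobFromImage(np.zeros((224, 224, 3), dtype=np.uint8))

times = []
for _ in range(10):  # warmup iterations omitted for brevity
    net.setInput(blob)
    start = time.time()
    net.forward()
    times.append((time.time() - start) * 1000.0)  # milliseconds

print(f"mean={np.mean(times):.2f} median={np.median(times):.2f} min={np.min(times):.2f}")
```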
@@ -72,51 +72,51 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
+0.69 0.70 0.68 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+0.79 0.80 0.68 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+5.09 5.13 4.96 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+6.50 6.79 4.96 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+1.79 1.76 1.75 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+2.92 3.11 1.75 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+2.40 2.43 2.37 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+3.11 3.15 2.37 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+5.59 5.56 5.28 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+6.07 6.22 5.28 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+3.13 3.14 3.05 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+3.04 3.02 2.92 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+3.46 3.03 2.92 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+3.84 3.77 2.92 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+19.47 19.47 19.08 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+21.52 21.86 19.08 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+5.68 5.66 5.51 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+7.41 7.36 5.51 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+41.02 40.99 40.86 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+42.23 42.30 40.86 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+78.77 79.76 77.16 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+75.69 75.58 72.57 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+4.01 3.84 3.79 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+5.35 5.41 5.22 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+6.73 6.85 5.22 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+7.65 7.65 7.55 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+15.56 15.57 15.10 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+16.67 16.57 15.10 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+6.33 6.63 6.14 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
 1.19 1.30 1.07 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
+18.76 19.59 18.48 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+18.59 19.33 18.12 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+22.05 18.60 18.12 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
+24.47 25.06 18.12 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
+10.61 10.66 10.50 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
+11.03 11.23 10.50 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+9.85 11.62 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+10.02 9.71 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
+9.53 7.83 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
+9.68 9.21 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+9.85 10.63 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+9.63 9.28 7.74 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
+```
+
+### Raspberry Pi 4B
 
 Specs: [details](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/specifications/)
 - CPU: Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz.
@@ -129,48 +129,48 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
+6.23 6.27 6.18 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+6.68 6.73 6.18 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+68.82 69.06 68.45 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+87.42 89.84 68.45 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+27.81 27.77 27.67 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+35.71 36.67 27.67 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+42.58 42.41 42.25 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+46.49 46.95 42.25 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+71.35 71.62 70.78 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+73.81 74.23 70.78 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+64.20 64.30 63.98 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+57.91 58.41 52.53 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+61.35 52.83 52.53 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+61.49 61.28 52.53 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+420.93 420.73 419.04 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+410.96 395.74 364.68 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+153.87 152.71 140.85 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+157.86 145.90 140.85 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+214.59 211.95 210.98 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+215.09 238.39 208.18 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+1614.13 1639.80 1476.58 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+1597.92 1599.12 1476.58 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+48.55 46.87 41.75 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+97.05 95.40 80.93 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+112.39 116.22 80.93 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+105.60 113.27 88.55 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+478.89 498.05 444.14 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+442.56 477.87 369.59 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+116.15 120.13 106.81 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
 5.90 5.90 5.81 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
+325.02 325.88 303.55 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+323.54 332.45 303.55 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+372.32 328.56 303.55 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
+407.90 411.97 303.55 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
+235.70 236.07 234.87 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
+240.95 241.14 234.87 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+226.09 247.02 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+229.25 224.63 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
+224.10 201.29 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
+223.58 219.82 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+225.60 243.89 200.44 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+220.97 223.16 193.91 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### Jetson Nano B01
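
One pattern to read off these CPU tables: the `_int8` variants are not reliably faster than fp32 on these cores (e.g. SFace on the Pi 4B: 68.82 ms fp32 vs 87.42 ms int8 mean). A trivial helper for that ratio, using the SFace pair above as the worked example:

```python
def int8_speedup(fp32_mean_ms: float, int8_mean_ms: float) -> float:
    """Ratio > 1.0 means the int8 model is faster than fp32."""
    return fp32_mean_ms / int8_mean_ms

# SFace on Raspberry Pi 4B, from the table above:
print(f"{int8_speedup(68.82, 87.42):.2f}x")  # ~0.79x, i.e. int8 is slower here
```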
@@ -187,48 +187,48 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
+5.62 5.54 5.52 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+6.14 6.24 5.52 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+64.80 64.95 64.60 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+78.31 79.85 64.60 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+26.54 26.61 26.37 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+33.96 34.85 26.37 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+38.45 41.45 38.20 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+42.62 43.20 38.20 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+64.95 64.85 64.73 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+72.39 73.16 64.73 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+65.72 65.98 65.59 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+56.66 57.56 49.10 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+62.09 49.27 49.10 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+62.17 62.02 49.10 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+346.78 348.06 345.53 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+371.11 373.54 345.53 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+134.36 134.33 133.45 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+140.62 140.94 133.45 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+215.67 216.76 214.69 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+216.58 216.78 214.69 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+1209.12 1213.05 1201.68 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+1240.02 1249.95 1201.68 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+48.39 47.38 45.00 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+75.30 75.25 74.96 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+83.83 84.99 74.96 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+87.65 87.59 87.37 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+356.78 357.77 355.69 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+346.84 351.10 335.96 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+75.20 79.36 73.71 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
+5.56 5.56 5.48 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
+209.80 210.04 208.84 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+209.60 212.74 208.49 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+254.56 211.17 208.49 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
+286.57 296.56 208.49 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
+252.60 252.48 252.21 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
+259.28 261.38 252.21 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+245.18 266.94 220.49 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+247.72 244.25 220.49 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
+241.63 221.43 219.06 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
+243.46 238.98 219.06 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+246.87 256.05 219.06 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+243.37 238.90 219.06 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 GPU (CUDA-FP32):
@@ -239,29 +239,27 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_CUDA
 target=cv.dnn.DNN_TARGET_CUDA
 mean median min input size model
+10.99 10.71 9.64 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+25.25 25.81 24.54 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+13.97 14.01 13.72 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+24.47 24.36 23.69 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+67.25 67.99 64.90 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+28.96 28.92 28.85 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+28.61 28.45 27.92 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+98.80 100.11 94.57 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+54.88 56.51 52.78 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+63.86 63.59 63.35 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+371.32 374.79 367.78 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+47.26 45.56 44.69 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+37.61 37.61 33.64 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+37.39 37.71 37.03 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+90.84 91.34 85.77 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+76.44 78.00 74.90 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
+112.68 112.21 110.42 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+112.48 111.86 110.04 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+43.99 43.33 41.68 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-45.15 44.15 41.87 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+44.97 44.42 41.68 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-36.82 46.54 21.75 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+36.77 46.38 21.77 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
 ```
 
 GPU (CUDA-FP16):
@@ -272,29 +270,27 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_CUDA
 target=cv.dnn.DNN_TARGET_CUDA_FP16
 mean median min input size model
+25.05 25.05 24.95 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+117.82 126.96 113.17 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+88.54 88.33 88.04 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+97.43 97.38 96.98 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+69.40 68.28 66.36 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+120.92 131.57 119.37 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+128.43 128.08 119.37 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+64.90 63.88 62.81 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+370.21 371.97 366.38 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+164.28 164.75 162.94 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+299.22 300.54 295.64 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+49.61 47.58 47.14 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+149.50 151.12 147.24 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+156.59 154.01 153.92 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+43.66 43.64 43.31 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+75.87 77.33 74.38 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
+428.97 428.99 426.11 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+428.66 427.46 425.66 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+32.41 31.90 31.68 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
-33.88 33.64 32.00 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+33.42 35.75 31.68 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
-29.32 33.70 20.69 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+29.34 36.44 21.27 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
 ```
 
 ### Khadas VIM3
@@ -304,82 +300,82 @@ Specs: [details](https://www.khadas.com/vim3)
 - NPU: 5 TOPS Performance NPU, INT8 inference up to 1536 MAC. Supports all major deep learning frameworks including TensorFlow and Caffe.
 
 CPU:
+<!-- config wechat is excluded because it needs building with opencv_contrib -->
 ```
-$ python3 benchmark.py --all
+$ python3 benchmark.py --all --cfg_exclude wechat
 Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
+4.62 4.62 4.53 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+5.24 5.29 4.53 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+55.04 54.55 53.54 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+67.34 67.96 53.54 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+29.50 45.62 26.14 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+35.59 36.22 26.14 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+35.80 35.08 34.76 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+40.32 45.32 34.76 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+71.92 66.92 62.98 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+70.68 72.31 62.98 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+59.27 53.91 52.09 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+52.17 67.58 41.23 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+55.44 47.28 41.23 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+55.83 56.80 41.23 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+335.75 329.39 325.42 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+340.42 335.78 325.42 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+128.58 127.15 124.03 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+125.85 126.47 110.14 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+179.93 170.66 166.76 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+178.61 213.72 164.61 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+1108.12 1100.93 1072.45 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+1100.58 1121.31 982.74 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+32.20 32.84 30.99 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+78.26 78.96 75.60 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+87.18 88.22 75.60 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+83.22 84.20 80.07 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+327.07 339.80 321.98 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+316.56 302.60 269.10 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+75.38 73.67 70.15 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
+211.02 213.14 199.28 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+210.19 217.15 199.28 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+242.34 225.59 199.28 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
+265.33 271.87 199.28 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
+194.77 195.13 192.69 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
+197.16 200.94 192.69 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+185.45 199.47 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+187.64 180.57 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
+182.53 166.96 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
+182.90 178.97 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+184.26 194.43 161.37 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+180.65 180.59 155.36 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 NPU (TIMVX):
+
 ```
+$ python3 benchmark.py --all --int8 --cfg_overwrite_backend_target 3
 Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_TIMVX
 target=cv.dnn.DNN_TARGET_NPU
 mean median min input size model
+5.24 7.45 4.77 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+45.96 46.10 43.21 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+30.25 30.30 28.68 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+19.75 20.18 18.19 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+28.75 28.85 28.47 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+148.80 148.85 143.45 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+143.17 141.11 136.58 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+73.19 78.57 62.89 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+32.11 30.50 29.97 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+116.32 120.72 99.40 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+408.18 418.89 374.12 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+37.34 38.57 32.03 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+41.82 39.84 37.63 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+160.70 160.90 153.15 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
+160.47 160.48 151.88 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
+239.38 237.47 231.95 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+197.61 201.16 162.69 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+196.69 164.78 162.69 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### Atlas 200 DK
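
`--cfg_overwrite_backend_target 3` in the TIM-VX run above selects a backend/target pair by index. The authoritative table lives in the zoo's Python sources; the mapping below is an assumption reconstructed from the logs in this file (index 3 must yield `DNN_BACKEND_TIMVX`/`DNN_TARGET_NPU` to match the output shown):

```python
import cv2 as cv

# Assumed index -> (backend, target) table; only index 3 is confirmed
# by the TIMVX/NPU log above, the rest follows the same convention.
backend_target_pairs = [
    (cv.dnn.DNN_BACKEND_OPENCV, cv.dnn.DNN_TARGET_CPU),        # 0
    (cv.dnn.DNN_BACKEND_CUDA,   cv.dnn.DNN_TARGET_CUDA),       # 1
    (cv.dnn.DNN_BACKEND_CUDA,   cv.dnn.DNN_TARGET_CUDA_FP16),  # 2
    (cv.dnn.DNN_BACKEND_TIMVX,  cv.dnn.DNN_TARGET_NPU),        # 3
    (cv.dnn.DNN_BACKEND_CANN,   cv.dnn.DNN_TARGET_NPU),        # 4
]

backend, target = backend_target_pairs[3]
```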
@@ -479,47 +475,47 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
+56.78 56.74 56.46 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+51.16 51.41 45.18 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+1737.74 1733.23 1723.65 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+1298.48 1336.02 920.44 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+609.51 611.79 584.89 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+500.21 517.38 399.97 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+465.12 471.89 445.36 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+389.95 385.01 318.29 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+1623.94 1607.90 1595.09 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+1109.61 1186.03 671.15 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+1567.09 1578.61 1542.75 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+1188.83 1219.46 850.92 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+996.30 884.80 689.11 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+849.51 805.93 507.78 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+11855.64 11836.80 11750.10 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+7752.60 8149.00 4429.83 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+3260.22 3251.14 3204.85 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+2287.10 2400.53 1482.04 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+2335.89 2335.93 2313.63 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+1899.16 1945.72 1529.46 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+37600.81 37558.85 37414.98 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+24185.35 25519.27 13395.47 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+411.41 448.29 397.86 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+905.77 890.22 866.06 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+780.94 817.69 653.26 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+1315.48 1321.44 1299.68 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+11143.23 11155.05 11105.11 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+7056.60 7457.76 3753.42 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+736.02 732.90 701.14 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
+4267.03 4288.42 4229.69 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+4265.58 4276.54 4222.22 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+3678.65 4265.95 2636.57 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
+3383.73 3490.66 2636.57 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
+2180.44 2197.45 2152.67 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
+2217.08 2241.77 2152.67 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+2217.15 2251.65 2152.67 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+2206.73 2219.60 2152.63 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
+2208.84 2219.14 2152.63 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
+2035.98 2185.05 1268.94 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+1927.93 2178.84 1268.94 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+1822.23 2213.30 1183.93 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### Khadas Edge2 (with RK3588)
@@ -537,47 +533,47 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
+2.30 2.29 2.26 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+2.70 2.73 2.26 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+28.94 29.00 28.60 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+37.46 38.85 28.60 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+12.44 12.40 12.36 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+17.14 17.64 12.36 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+20.22 20.36 20.08 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+23.11 23.50 20.08 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+29.63 29.78 28.61 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+35.57 35.61 28.61 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+27.45 27.46 27.25 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+22.95 23.37 19.13 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+27.50 19.40 19.13 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+28.46 29.33 19.13 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+151.10 151.79 146.96 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+181.69 184.19 146.96 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+53.83 52.64 50.24 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+60.95 60.06 50.24 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+98.03 104.53 83.47 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+106.91 110.68 83.47 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+554.30 550.32 538.99 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+591.95 599.62 538.99 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+14.02 13.89 13.56 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+45.03 44.65 43.28 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+50.87 52.24 43.28 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+42.90 42.68 42.40 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+148.01 146.42 139.56 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+159.16 155.98 139.56 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+37.06 37.43 36.39 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
+103.42 104.24 101.26 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+103.41 104.41 100.08 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+126.21 103.90 100.08 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
+142.53 147.66 100.08 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
+69.49 69.52 69.17 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
+70.63 70.69 69.17 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+67.15 72.03 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+67.74 66.72 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
+66.26 61.46 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
+67.36 65.65 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+68.52 69.93 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+68.36 65.65 61.13 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### Horizon Sunrise X3 PI
@@ -594,48 +590,48 @@ Benchmarking ...
 backend=cv.dnn.DNN_BACKEND_OPENCV
 target=cv.dnn.DNN_TARGET_CPU
 mean median min input size model
+10.56 10.69 10.46 [160, 120] YuNet with ['face_detection_yunet_2023mar.onnx']
+12.45 12.60 10.46 [160, 120] YuNet with ['face_detection_yunet_2023mar_int8.onnx']
+124.80 127.36 124.45 [150, 150] SFace with ['face_recognition_sface_2021dec.onnx']
+168.67 174.03 124.45 [150, 150] SFace with ['face_recognition_sface_2021dec_int8.onnx']
+55.12 55.38 54.91 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+76.31 79.00 54.91 [112, 112] FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+77.44 77.53 77.07 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+89.22 90.40 77.07 [224, 224] MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+132.95 133.21 132.35 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+147.40 149.99 132.35 [192, 192] PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
+119.71 120.69 119.32 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+102.57 104.40 88.49 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+114.56 88.81 88.49 [224, 224] MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+117.12 116.07 88.49 [224, 224] MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+653.39 653.85 651.99 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+706.43 712.61 651.99 [224, 224] PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+252.05 252.16 250.98 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+273.03 274.27 250.98 [320, 240] LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
+399.35 405.40 390.82 [416, 416] NanoDet with ['object_detection_nanodet_2022nov.onnx']
+413.37 410.75 390.82 [416, 416] NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+2516.91 2516.82 2506.54 [640, 640] YoloX with ['object_detection_yolox_2022nov.onnx']
+2544.65 2551.55 2506.54 [640, 640] YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+84.15 85.18 77.31 [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
+168.54 169.05 168.15 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+196.46 199.81 168.15 [192, 192] MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+172.55 172.83 171.85 [224, 224] MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+678.74 678.04 677.44 [128, 256] YoutuReID with ['person_reid_youtu_2021nov.onnx']
+653.71 655.74 631.68 [128, 256] YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+162.87 165.82 160.04 [256, 256] MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
 9.93 9.97 9.82 [100, 100] WeChatQRCode with ['detect_2021nov.prototxt', 'detect_2021nov.caffemodel', 'sr_2021nov.prototxt', 'sr_2021nov.caffemodel']
+475.98 475.34 472.72 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
+475.90 477.57 472.44 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
+585.72 475.98 472.44 [640, 480] PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
+663.34 687.10 472.44 [640, 480] PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
+446.82 445.92 444.32 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
+453.60 456.07 444.32 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+427.47 463.88 381.10 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+432.15 421.18 381.10 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
+420.61 386.28 380.35 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
+425.24 426.69 380.35 [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+431.14 447.85 380.35 [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+424.77 417.01 380.35 [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
 
 ### MAIX-III AX-PI
```
backend=cv.dnn.DNN_BACKEND_OPENCV
target=cv.dnn.DNN_TARGET_CPU
mean       median     min        input size  model
83.95      83.76      83.62      [160, 120]  YuNet with ['face_detection_yunet_2023mar.onnx']
79.35      79.92      75.47      [160, 120]  YuNet with ['face_detection_yunet_2023mar_int8.onnx']
2326.96    2326.49    2326.08    [150, 150]  SFace with ['face_recognition_sface_2021dec.onnx']
1950.83    1988.86    1648.47    [150, 150]  SFace with ['face_recognition_sface_2021dec_int8.onnx']
823.42     823.35     822.50     [112, 112]  FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
750.31     757.91     691.41     [112, 112]  FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
664.73     664.61     663.84     [224, 224]  MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
596.29     603.96     540.72     [224, 224]  MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
2175.34    2173.62    2172.91    [192, 192]  PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
1655.11    1705.43    1236.22    [192, 192]  PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
2123.08    2122.92    2122.18    [224, 224]  MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
1619.08    1672.32    1215.05    [224, 224]  MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
1470.74    1216.86    1215.05    [224, 224]  MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
1287.09    1242.01    873.92     [224, 224]  MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
15841.89   15841.20   15828.32   [224, 224]  PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
11652.03   12079.50   8299.15    [224, 224]  PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
4371.75    4396.81    4370.29    [320, 240]  LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
3428.89    3521.87    2670.46    [320, 240]  LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
3421.19    3412.22    3411.20    [416, 416]  NanoDet with ['object_detection_nanodet_2022nov.onnx']
2990.22    3034.11    2645.09    [416, 416]  NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
50633.38   50617.44   50614.78   [640, 640]  YoloX with ['object_detection_yolox_2022nov.onnx']
36260.23   37731.28   24683.40   [640, 640]  YoloX with ['object_detection_yolox_2022nov_int8.onnx']
548.36     551.97     537.90     [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
1285.54    1285.40    1284.43    [192, 192]  MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
1204.04    1211.89    1137.65    [192, 192]  MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
1849.87    1848.78    1847.80    [224, 224]  MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
14895.99   14894.27   14884.17   [128, 256]  YoutuReID with ['person_reid_youtu_2021nov.onnx']
10496.44   10931.97   6976.60    [128, 256]  YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
1045.98    1052.05    1040.56    [256, 256]  MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
5899.23    5900.08    5896.73    [640, 480]  PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
5889.39    5890.58    5878.81    [640, 480]  PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
5436.61    5884.03    4665.77    [640, 480]  PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
5185.53    5273.76    4539.47    [640, 480]  PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
3230.95    3226.14    3225.53    [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
3281.31    3295.46    3225.53    [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
3247.56    3337.52    3196.25    [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
3243.20    3276.35    3196.25    [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
3230.49    3196.80    3195.02    [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
3065.33    3217.99    2348.42    [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
2976.24    3244.75    2348.42    [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
2864.72    3219.46    2208.44    [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
```

### StarFive VisionFive 2

```
backend=cv.dnn.DNN_BACKEND_OPENCV
target=cv.dnn.DNN_TARGET_CPU
mean       median     min        input size  model
41.13      41.07      41.06      [160, 120]  YuNet with ['face_detection_yunet_2023mar.onnx']
37.43      37.83      34.35      [160, 120]  YuNet with ['face_detection_yunet_2023mar_int8.onnx']
1169.96    1169.72    1168.74    [150, 150]  SFace with ['face_recognition_sface_2021dec.onnx']
887.13     987.00     659.71     [150, 150]  SFace with ['face_recognition_sface_2021dec_int8.onnx']
423.91     423.98     423.62     [112, 112]  FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
350.89     358.26     292.27     [112, 112]  FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
319.69     319.26     318.76     [224, 224]  MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
278.74     282.75     245.22     [224, 224]  MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
1127.61    1127.36    1127.17    [192, 192]  PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
785.44     819.07     510.77     [192, 192]  PPHumanSeg with ['human_segmentation_pphumanseg_2023mar_int8.onnx']
1079.69    1079.66    1079.31    [224, 224]  MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
820.15     845.54     611.26     [224, 224]  MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
698.13     612.64     516.41     [224, 224]  MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
600.12     564.13     382.59     [224, 224]  MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
8116.21    8127.96    8113.70    [224, 224]  PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
5408.02    5677.71    3240.16    [224, 224]  PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
2267.96    2268.26    2266.59    [320, 240]  LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
1605.80    1671.91    1073.50    [320, 240]  LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar_int8.onnx']
1731.61    1733.17    1730.54    [416, 416]  NanoDet with ['object_detection_nanodet_2022nov.onnx']
1435.43    1477.52    1196.01    [416, 416]  NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
26185.41   26190.85   26168.68   [640, 640]  YoloX with ['object_detection_yolox_2022nov.onnx']
17019.14   17923.20   9673.68    [640, 640]  YoloX with ['object_detection_yolox_2022nov_int8.onnx']
288.95     290.28     260.40     [1280, 720] VitTrack with ['object_tracking_vittrack_2023sep.onnx']
628.64     628.47     628.27     [192, 192]  MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
562.90     569.91     509.93     [192, 192]  MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
910.38     910.94     909.64     [224, 224]  MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
7613.64    7626.26    7606.07    [128, 256]  YoutuReID with ['person_reid_youtu_2021nov.onnx']
4895.28    5166.85    2716.65    [128, 256]  YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
524.52     526.33     522.71     [256, 256]  MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
2988.22    2996.51    2980.17    [640, 480]  PPOCRDet with ['text_detection_cn_ppocrv3_2023may.onnx']
2981.84    2979.74    2975.80    [640, 480]  PPOCRDet with ['text_detection_en_ppocrv3_2023may.onnx']
2610.78    2979.14    1979.37    [640, 480]  PPOCRDet with ['text_detection_cn_ppocrv3_2023may_int8.onnx']
2425.29    2478.92    1979.37    [640, 480]  PPOCRDet with ['text_detection_en_ppocrv3_2023may_int8.onnx']
1404.01    1415.46    1401.36    [1280, 720] CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
1425.42    1426.51    1401.36    [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
1432.21    1450.47    1401.36    [1280, 720] CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
1425.24    1448.27    1401.36    [1280, 720] CRNN with ['text_recognition_CRNN_CH_2023feb_fp16.onnx']
1428.84    1446.76    1401.36    [1280, 720] CRNN with ['text_recognition_CRNN_EN_2023feb_fp16.onnx']
1313.68    1427.46    808.70     [1280, 720] CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
1242.07    1408.93    808.70     [1280, 720] CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
1174.32    1426.07    774.78     [1280, 720] CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
```
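
(For anyone post-processing these logs, a small parsing sketch; it assumes the five-column layout shown above and millisecond values.)

```python
# Split one benchmark row into its fields; column order follows the
# "mean median min input size model" header used in these tables.
row = "41.13  41.07  41.06  [160, 120]  YuNet with ['face_detection_yunet_2023mar.onnx']"
mean, median, minimum, rest = row.split(None, 3)
size, model = rest.split("]", 1)
print(float(mean), float(median), float(minimum), size + "]", model.strip())
```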

### Khadas VIM4

benchmark/benchmark.py
CHANGED
@@ -9,7 +9,7 @@ from models import MODELS
 from utils import METRICS, DATALOADERS
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python for benchmark: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
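
A side note on this guard (an observation, not part of the commit): Python compares these version strings lexicographically, which orders 4.9.0 after earlier 4.x releases as intended but would misorder a hypothetical 4.10.0. A quick sketch, with a numeric alternative for comparison:

```python
# Lexicographic (string) comparison vs. numeric comparison of versions.
print("4.9.0" >= "4.8.0")    # True  -- works for this bump
print("4.10.0" >= "4.9.0")   # False -- '1' < '9' as characters

# Numeric alternative (illustrative only, not what the repo uses):
def as_tuple(v):
    return tuple(int(p) for p in v.split("."))

print(as_tuple("4.10.0") >= as_tuple("4.9.0"))  # True
```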
benchmark/color_table.svg
CHANGED
benchmark/table_config.yaml
CHANGED
@@ -75,14 +75,14 @@ Models:
 
 - name: "CRNN-EN"
   task: "Text Recognition"
-  input_size: "100*32"
+  input_size: "100x32"
   folder: "text_recognition_crnn"
   acceptable_time: 2000
   keyword: "text_recognition_CRNN_EN"
 
 - name: "CRNN-CN"
   task: "Text Recognition"
-  input_size: "100*32"
+  input_size: "100x32"
   folder: "text_recognition_crnn"
   acceptable_time: 2000
   keyword: "text_recognition_CRNN_CN"
@@ -170,28 +170,24 @@ Devices:
   display_info: "Intel\n12700K\nCPU"
   platform: "CPU"
 
-- name: "
-  display_info: "
-  platform: "CPU"
-
-- name: "StarFive VisionFive 2"
-  display_info: "StarFive VisionFive 2\nStarFive JH7110\nCPU"
+- name: "Khadas VIM3"
+  display_info: "Khadas VIM3\nA311D\nCPU"
   platform: "CPU"
 
-- name: "
-  display_info: "
+- name: "Khadas VIM4"
+  display_info: "Khadas VIM4\nA311D2\nCPU"
   platform: "CPU"
 
 - name: "Khadas Edge2 (with RK3588)"
   display_info: "Khadas Edge2\nRK3588S\nCPU"
   platform: "CPU"
 
-- name: "
-  display_info: "
+- name: "Atlas 200 DK"
+  display_info: "Atlas 200 DK\nAscend 310\nCPU"
   platform: "CPU"
 
-- name: "
-  display_info: "
+- name: "Atlas 200I DK A2"
+  display_info: "Atlas 200I DK A2\nAscend 310B\nCPU"
   platform: "CPU"
 
 - name: "Jetson Nano B01"
@@ -202,20 +198,24 @@ Devices:
   display_info: "Jetson Nano\nOrin\nCPU"
   platform: "CPU"
 
-- name: "
-  display_info: "
+- name: "Raspberry Pi 4B"
+  display_info: "Raspberry Pi 4B\nBCM2711\nCPU"
   platform: "CPU"
 
-- name: "
-  display_info: "
+- name: "Horizon Sunrise X3 PI"
+  display_info: "Horizon Sunrise Pi\nX3\nCPU"
   platform: "CPU"
 
-- name: "
-  display_info: "
+- name: "MAIX-III AX-PI"
+  display_info: "MAIX-III AX-Pi\nAX620A\nCPU"
   platform: "CPU"
 
-- name: "
-  display_info: "
+- name: "Toybrick RV1126"
+  display_info: "Toybrick\nRV1126\nCPU"
+  platform: "CPU"
+
+- name: "StarFive VisionFive 2"
+  display_info: "StarFive VisionFive 2\nStarFive JH7110\nCPU"
   platform: "CPU"
 
 - name: "Jetson Nano B01"
@@ -243,4 +243,4 @@ Suffixes:
 - model: "MobileNet-V2"
   device: "Khadas VIM3"
   platform: "NPU (TIMVX)"
-  str: "\\*"
+  str: "\\*"
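
For readers editing this config, a minimal reading sketch; it assumes PyYAML and uses only keys visible in the hunks above (`Devices`, `name`, `platform`):

```python
# List the benchmark devices defined in table_config.yaml in table order.
# Assumes PyYAML (pip install pyyaml); key names come from the diff above.
import yaml

with open("benchmark/table_config.yaml") as f:
    cfg = yaml.safe_load(f)

for dev in cfg["Devices"]:
    print("%-30s %s" % (dev["name"], dev["platform"]))
```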
models/face_detection_yunet/CMakeLists.txt
CHANGED
@@ -1,7 +1,7 @@
 cmake_minimum_required(VERSION 3.24.0)
 project(opencv_zoo_face_detection_yunet)
 
-set(OPENCV_VERSION "4.
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 
 # Find OpenCV
models/face_detection_yunet/demo.py
CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
 from yunet import YuNet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
|
models/face_recognition_sface/demo.py
CHANGED
@@ -16,7 +16,7 @@ sys.path.append('../face_detection_yunet')
|
|
16 |
from yunet import YuNet
|
17 |
|
18 |
# Check OpenCV version
|
19 |
-
assert cv.__version__ >= "4.
|
20 |
"Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
|
21 |
|
22 |
# Valid combinations of backends and targets
|
|
|
16 |
from yunet import YuNet
|
17 |
|
18 |
# Check OpenCV version
|
19 |
+
assert cv.__version__ >= "4.9.0", \
|
20 |
"Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
|
21 |
|
22 |
# Valid combinations of backends and targets
|
models/facial_expression_recognition/demo.py
CHANGED
@@ -12,7 +12,7 @@ sys.path.append('../face_detection_yunet')
 from yunet import YuNet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/handpose_estimation_mediapipe/demo.py
CHANGED
@@ -10,7 +10,7 @@ sys.path.append('../palm_detection_mediapipe')
 from mp_palmdet import MPPalmDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/human_segmentation_pphumanseg/demo.py
CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
 from pphumanseg import PPHumanSeg
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/image_classification_mobilenet/CMakeLists.txt
CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_image_classification_mobilenet")
 
 PROJECT (${project_name})
 
-set(OPENCV_VERSION "4.
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
 # Find OpenCV, you may need to set OpenCV_DIR variable
models/image_classification_mobilenet/demo.py
CHANGED
@@ -6,7 +6,7 @@ import cv2 as cv
 from mobilenet import MobileNet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/image_classification_ppresnet/demo.py
CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
 from ppresnet import PPResNet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/license_plate_detection_yunet/demo.py
CHANGED
@@ -6,7 +6,7 @@ import cv2 as cv
 from lpd_yunet import LPD_YuNet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/object_detection_nanodet/demo.py
CHANGED
@@ -5,7 +5,7 @@ import argparse
 from nanodet import NanoDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/object_detection_yolox/CMakeLists.txt
CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_object_detection_yolox")
 
 PROJECT (${project_name})
 
-set(OPENCV_VERSION "4.
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
 # Find OpenCV, you may need to set OpenCV_DIR variable
models/object_detection_yolox/demo.py
CHANGED
@@ -5,7 +5,7 @@ import argparse
 from yolox import YoloX
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/object_tracking_vittrack/demo.py
CHANGED
@@ -6,11 +6,10 @@ import argparse
 import numpy as np
 import cv2 as cv
 
-
 from vittrack import VitTrack
 
 # Check OpenCV version
-assert cv.__version__ > "4.
+assert cv.__version__ > "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/optical_flow_estimation_raft/demo.py
CHANGED
@@ -5,6 +5,10 @@ import numpy as np
 
 from raft import Raft
 
+# Check OpenCV version
+assert cv.__version__ > "4.9.0", \
+"Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
+
 parser = argparse.ArgumentParser(description='RAFT (https://github.com/princeton-vl/RAFT)')
 parser.add_argument('--input1', '-i1', type=str,
 help='Usage: Set input1 path to first image, omit if using camera or video.')
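
Worth noting: this RAFT guard and the VitTrack one above use a strict `>` rather than the `>=` used elsewhere, so a stock 4.9.0 build is rejected by these two demos. A quick illustration of what the string comparison enforces:

```python
print("4.9.0" > "4.9.0")    # False -- vanilla 4.9.0 fails the check
print("5.0.0" > "4.9.0")    # True
print("4.10.0" > "4.9.0")   # False -- lexicographic, not numeric
```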
models/palm_detection_mediapipe/demo.py
CHANGED
@@ -6,7 +6,7 @@ import cv2 as cv
 from mp_palmdet import MPPalmDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/person_detection_mediapipe/CMakeLists.txt
CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_person_detection_mediapipe")
 
 PROJECT (${project_name})
 
-set(OPENCV_VERSION "4.
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
 # Find OpenCV, you may need to set OpenCV_DIR variable
models/person_detection_mediapipe/demo.py
CHANGED
@@ -6,7 +6,7 @@ import cv2 as cv
 from mp_persondet import MPPersonDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/person_reid_youtureid/demo.py
CHANGED
@@ -13,7 +13,7 @@ import cv2 as cv
 from youtureid import YoutuReID
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/pose_estimation_mediapipe/CMakeLists.txt
CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_pose_estimation_mediapipe")
 
 PROJECT (${project_name})
 
-set(OPENCV_VERSION "4.
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
 # Find OpenCV, you may need to set OpenCV_DIR variable
models/pose_estimation_mediapipe/demo.py
CHANGED
@@ -10,7 +10,7 @@ sys.path.append('../person_detection_mediapipe')
 from mp_persondet import MPPersonDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/qrcode_wechatqrcode/demo.py
CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
 from wechatqrcode import WeChatQRCode
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/text_detection_ppocr/CMakeLists.txt
CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_text_detection_ppocr")
 
 PROJECT (${project_name})
 
-set(OPENCV_VERSION "4.
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
 # Find OpenCV, you may need to set OpenCV_DIR variable
models/text_detection_ppocr/demo.py
CHANGED
@@ -12,7 +12,7 @@ import cv2 as cv
 from ppocr_det import PPOCRDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
models/text_recognition_crnn/CMakeLists.txt
CHANGED
@@ -3,7 +3,7 @@ set(project_name "opencv_zoo_text_recognition_crnn")
 
 PROJECT (${project_name})
 
-set(OPENCV_VERSION "4.
+set(OPENCV_VERSION "4.9.0")
 set(OPENCV_INSTALLATION_PATH "" CACHE PATH "Where to look for OpenCV installation")
 find_package(OpenCV ${OPENCV_VERSION} REQUIRED HINTS ${OPENCV_INSTALLATION_PATH})
 # Find OpenCV, you may need to set OpenCV_DIR variable
models/text_recognition_crnn/demo.py
CHANGED
@@ -16,7 +16,7 @@ sys.path.append('../text_detection_ppocr')
 from ppocr_det import PPOCRDet
 
 # Check OpenCV version
-assert cv.__version__ >= "4.
+assert cv.__version__ >= "4.9.0", \
 "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"
 
 # Valid combinations of backends and targets
tools/quantize/requirements.txt
CHANGED
@@ -1,4 +1,4 @@
-opencv-python>=4.
+opencv-python>=4.9.0
 onnx
 onnxruntime
 onnxruntime-extensions