Wanli committed on
Commit
57699b7
·
1 Parent(s): 4b236af

Add hand pose estimation model from Mediapipe (#83)

README.md CHANGED
@@ -14,23 +14,24 @@ Guidelines:
14
 
15
  ## Models & Benchmark Results
16
 
17
- | Model | Task | Input Size | INTEL-CPU (ms) | RPI-CPU (ms) | JETSON-GPU (ms) | KV3-NPU (ms) | D1-CPU (ms) |
18
- |-------|------|----------|----------------|--------------|-----------------|----------|-------------|
19
- | [YuNet](./models/face_detection_yunet) | Face Detection | 160x120 | 1.45 | 5.21 | 12.18 | 4.04 | 86.69 |
20
- | [SFace](./models/face_recognition_sface) | Face Recognition | 112x112 | 8.65 | 76.95 | 24.88 | 46.25 | --- |
21
- | [LPD-YuNet](./models/license_plate_detection_yunet/) | License Plate Detection | 320x240 | --- | 134.02 | 56.12 | 154.20\* | |
22
- | [DB-IC15](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2456.49 | 208.41 | --- | --- |
23
- | [DB-TD500](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2572.10 | 210.51 | --- | --- |
24
- | [CRNN-EN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 50.21 | 230.50 | 196.15 | 125.30 | --- |
25
- | [CRNN-CN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 73.52 | 309.60 | 239.76 | 166.79 | --- |
26
- | [PP-ResNet](./models/image_classification_ppresnet) | Image Classification | 224x224 | 56.05 | 440.90 | 98.64 | 75.45 | --- |
27
- | [MobileNet-V1](./models/image_classification_mobilenet) | Image Classification | 224x224 | 9.04 | 67.97 | 33.18 | 145.66\* | --- |
28
- | [MobileNet-V2](./models/image_classification_mobilenet) | Image Classification | 224x224 | 8.86 | 51.64 | 31.92 | 146.31\* | --- |
29
- | [PP-HumanSeg](./models/human_segmentation_pphumanseg) | Human Segmentation | 192x192 | 19.92 | 94.40 | 67.97 | 74.77 | --- |
30
- | [WeChatQRCode](./models/qrcode_wechatqrcode) | QR Code Detection and Parsing | 100x100 | 7.04 | 36.20 | --- | --- | --- |
31
- | [DaSiamRPN](./models/object_tracking_dasiamrpn) | Object Tracking | 1280x720 | 36.15 | 683.90 | 76.82 | --- | --- |
32
- | [YoutuReID](./models/person_reid_youtureid) | Person Re-Identification | 128x256 | 35.81 | 481.54 | 90.07 | 44.61 | --- |
33
- | [MPPalmDet](./models/palm_detection_mediapipe) | Palm Detection | 256x256 | 15.57 | 168.37 | 50.64 | 145.56\* | --- |
 
34
 
35
 \*: Models quantized in per-channel mode run slower than per-tensor quantized models on NPU.
36
 
@@ -69,7 +70,11 @@ Some examples are listed below. You can find more in the directory of each model
69
 
70
  ### Palm Detection with [MP-PalmDet](./models/palm_detection_mediapipe/)
71
 
72
- ![palm det](./models/palm_detection_mediapipe//examples/mppalmdet_demo.gif)
 
 
 
 
73
 
74
  ### QR Code Detection and Parsing with [WeChatQRCode](./models/qrcode_wechatqrcode/)
75
 
 
14
 
15
  ## Models & Benchmark Results
16
 
17
+ | Model | Task | Input Size | INTEL-CPU (ms) | RPI-CPU (ms) | JETSON-GPU (ms) | KV3-NPU (ms) | D1-CPU (ms) |
18
+ |---------------------------------------------------------|-------------------------------|------------|----------------|--------------|-----------------|--------------|-------------|
19
+ | [YuNet](./models/face_detection_yunet) | Face Detection | 160x120 | 1.45 | 6.22 | 12.18 | 4.04 | 86.69 |
20
+ | [SFace](./models/face_recognition_sface) | Face Recognition | 112x112 | 8.65 | 99.20 | 24.88 | 46.25 | --- |
21
+ | [LPD-YuNet](./models/license_plate_detection_yunet/) | License Plate Detection | 320x240 | --- | 168.03 | 56.12 | 154.20\* | |
22
+ | [DB-IC15](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2835.91 | 208.41 | --- | --- |
23
+ | [DB-TD500](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2841.71 | 210.51 | --- | --- |
24
+ | [CRNN-EN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 50.21 | 234.32 | 196.15 | 125.30 | --- |
25
+ | [CRNN-CN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 73.52 | 322.16 | 239.76 | 166.79 | --- |
26
+ | [PP-ResNet](./models/image_classification_ppresnet) | Image Classification | 224x224 | 56.05 | 602.58 | 98.64 | 75.45 | --- |
27
+ | [MobileNet-V1](./models/image_classification_mobilenet) | Image Classification | 224x224 | 9.04 | 92.25 | 33.18 | 145.66\* | --- |
28
+ | [MobileNet-V2](./models/image_classification_mobilenet) | Image Classification | 224x224 | 8.86 | 74.03 | 31.92 | 146.31\* | --- |
29
+ | [PP-HumanSeg](./models/human_segmentation_pphumanseg) | Human Segmentation | 192x192 | 19.92 | 105.32 | 67.97 | 74.77 | --- |
30
+ | [WeChatQRCode](./models/qrcode_wechatqrcode) | QR Code Detection and Parsing | 100x100 | 7.04 | 37.68 | --- | --- | --- |
31
+ | [DaSiamRPN](./models/object_tracking_dasiamrpn) | Object Tracking | 1280x720 | 36.15 | 705.48 | 76.82 | --- | --- |
32
+ | [YoutuReID](./models/person_reid_youtureid) | Person Re-Identification | 128x256 | 35.81 | 521.98 | 90.07 | 44.61 | --- |
33
+ | [MP-PalmDet](./models/palm_detection_mediapipe) | Palm Detection | 256x256 | 15.57 | 168.37 | 50.64 | 145.56\* | --- |
34
+ | [MP-HandPose](./models/handpose_estimation_mediapipe) | Hand Pose Estimation | 256x256 | 20.16 | 148.24 | 156.30 | 663.77\* | --- |
35
 
36
 \*: Models quantized in per-channel mode run slower than per-tensor quantized models on NPU.
37
 
 
70
 
71
  ### Palm Detection with [MP-PalmDet](./models/palm_detection_mediapipe/)
72
 
73
+ ![palm det](./models/palm_detection_mediapipe/examples/mppalmdet_demo.gif)
74
+
75
+ ### Hand Pose Estimation with [MP-HandPose](models/handpose_estimation_mediapipe/)
76
+
77
+ ![handpose estimation](models/handpose_estimation_mediapipe/examples/mphandpose_demo.gif)
78
 
79
  ### QR Code Detection and Parsing with [WeChatQRCode](./models/qrcode_wechatqrcode/)
80
 
benchmark/config/handpose_estimation_mediapipe.yaml ADDED
@@ -0,0 +1,19 @@
1
+ Benchmark:
2
+ name: "Hand Pose Estimation Benchmark"
3
+ type: "Recognition"
4
+ data:
5
+ path: "benchmark/data/palm_detection"
6
+ files: ["palm1.jpg", "palm2.jpg", "palm3.jpg"]
7
+ sizes: # [[w1, h1], ...], Omit to run at original scale
8
+ - [256, 256]
9
+ metric:
10
+ warmup: 30
11
+ repeat: 10
12
+ reduction: "median"
13
+ backend: "default"
14
+ target: "cpu"
15
+
16
+ Model:
17
+ name: "MPHandPose"
18
+ modelPath: "models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2022may.onnx"
19
+ confThreshold: 0.9
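A rough sketch of how the `Model` block above maps onto the model class, assuming the config is read from the repo root with PyYAML (the benchmark runner does its own parsing; this is only illustrative):

```python
import yaml  # assumes PyYAML is available

with open('benchmark/config/handpose_estimation_mediapipe.yaml') as f:
    cfg = yaml.safe_load(f)

# the Model section carries the constructor arguments for MPHandPose
model_cfg = cfg['Model']
print(model_cfg['name'], model_cfg['modelPath'], model_cfg['confThreshold'])
```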
benchmark/download_data.py CHANGED
@@ -198,9 +198,9 @@ data_downloaders = dict(
198
  sha='5b741fbf34c1fbcf59cad8f2a65327a5899e66f1',
199
  filename='person_reid.zip'),
200
  palm_detection=Downloader(name='palm_detection',
201
- url='https://drive.google.com/u/0/uc?id=1qScOzehV8OIzJJLuD_LMvZq15YcWd_VV&export=download',
202
- sha='c0d4f811d38c6f833364b9196a719307598213a1',
203
- filename='palm_detection.zip'),
204
  license_plate_detection=Downloader(name='license_plate_detection',
205
  url='https://drive.google.com/u/0/uc?id=1cf9MEyUqMMy8lLeDGd1any6tM_SsSmny&export=download',
206
  sha='997acb143ddc4531e6e41365fb7ad4722064564c',
 
198
  sha='5b741fbf34c1fbcf59cad8f2a65327a5899e66f1',
199
  filename='person_reid.zip'),
200
  palm_detection=Downloader(name='palm_detection',
201
+ url='https://drive.google.com/u/0/uc?id=1zYnOsXxYXn-hFIdyIws9louzqjpt8byQ&export=download',
202
+ sha='78ed095b685a9bacdd643782716127afe936f366',
203
+ filename='palm_detection_20220826.zip'),
204
  license_plate_detection=Downloader(name='license_plate_detection',
205
  url='https://drive.google.com/u/0/uc?id=1cf9MEyUqMMy8lLeDGd1any6tM_SsSmny&export=download',
206
  sha='997acb143ddc4531e6e41365fb7ad4722064564c',
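The `sha` field is a 40-character SHA-1 digest of the archive. A quick way to verify a locally downloaded copy against it (a hypothetical helper, not part of the repo's tooling):

```python
import hashlib

def sha1_of(path, chunk_size=1 << 20):
    # stream the file so large archives do not need to fit in memory
    h = hashlib.sha1()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

print(sha1_of('palm_detection_20220826.zip') == '78ed095b685a9bacdd643782716127afe936f366')
```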
models/__init__.py CHANGED
@@ -10,6 +10,7 @@ from .person_reid_youtureid.youtureid import YoutuReID
10
  from .image_classification_mobilenet.mobilenet_v1 import MobileNetV1
11
  from .image_classification_mobilenet.mobilenet_v2 import MobileNetV2
12
  from .palm_detection_mediapipe.mp_palmdet import MPPalmDet
 
13
  from .license_plate_detection_yunet.lpd_yunet import LPD_YuNet
14
 
15
  class Registery:
@@ -36,4 +37,5 @@ MODELS.register(YoutuReID)
36
  MODELS.register(MobileNetV1)
37
  MODELS.register(MobileNetV2)
38
  MODELS.register(MPPalmDet)
 
39
  MODELS.register(LPD_YuNet)
 
10
  from .image_classification_mobilenet.mobilenet_v1 import MobileNetV1
11
  from .image_classification_mobilenet.mobilenet_v2 import MobileNetV2
12
  from .palm_detection_mediapipe.mp_palmdet import MPPalmDet
13
+ from .handpose_estimation_mediapipe.mp_handpose import MPHandPose
14
  from .license_plate_detection_yunet.lpd_yunet import LPD_YuNet
15
 
16
  class Registery:
 
37
  MODELS.register(MobileNetV1)
38
  MODELS.register(MobileNetV2)
39
  MODELS.register(MPPalmDet)
40
+ MODELS.register(MPHandPose)
41
  MODELS.register(LPD_YuNet)
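With the import and registration above in place, the new model can be used like the existing ones; a minimal sketch, assuming it is run from the repo root with the ONNX file already downloaded:

```python
from models import MPHandPose  # registered via MODELS.register(MPHandPose) above

handpose_detector = MPHandPose(
    modelPath='models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2022may.onnx',
    confThreshold=0.9)
print(handpose_detector.name)  # prints 'MPHandPose'
```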
models/handpose_estimation_mediapipe/LICENSE ADDED
@@ -0,0 +1,202 @@
1
+
2
+ Apache License
3
+ Version 2.0, January 2004
4
+ http://www.apache.org/licenses/
5
+
6
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
7
+
8
+ 1. Definitions.
9
+
10
+ "License" shall mean the terms and conditions for use, reproduction,
11
+ and distribution as defined by Sections 1 through 9 of this document.
12
+
13
+ "Licensor" shall mean the copyright owner or entity authorized by
14
+ the copyright owner that is granting the License.
15
+
16
+ "Legal Entity" shall mean the union of the acting entity and all
17
+ other entities that control, are controlled by, or are under common
18
+ control with that entity. For the purposes of this definition,
19
+ "control" means (i) the power, direct or indirect, to cause the
20
+ direction or management of such entity, whether by contract or
21
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
22
+ outstanding shares, or (iii) beneficial ownership of such entity.
23
+
24
+ "You" (or "Your") shall mean an individual or Legal Entity
25
+ exercising permissions granted by this License.
26
+
27
+ "Source" form shall mean the preferred form for making modifications,
28
+ including but not limited to software source code, documentation
29
+ source, and configuration files.
30
+
31
+ "Object" form shall mean any form resulting from mechanical
32
+ transformation or translation of a Source form, including but
33
+ not limited to compiled object code, generated documentation,
34
+ and conversions to other media types.
35
+
36
+ "Work" shall mean the work of authorship, whether in Source or
37
+ Object form, made available under the License, as indicated by a
38
+ copyright notice that is included in or attached to the work
39
+ (an example is provided in the Appendix below).
40
+
41
+ "Derivative Works" shall mean any work, whether in Source or Object
42
+ form, that is based on (or derived from) the Work and for which the
43
+ editorial revisions, annotations, elaborations, or other modifications
44
+ represent, as a whole, an original work of authorship. For the purposes
45
+ of this License, Derivative Works shall not include works that remain
46
+ separable from, or merely link (or bind by name) to the interfaces of,
47
+ the Work and Derivative Works thereof.
48
+
49
+ "Contribution" shall mean any work of authorship, including
50
+ the original version of the Work and any modifications or additions
51
+ to that Work or Derivative Works thereof, that is intentionally
52
+ submitted to Licensor for inclusion in the Work by the copyright owner
53
+ or by an individual or Legal Entity authorized to submit on behalf of
54
+ the copyright owner. For the purposes of this definition, "submitted"
55
+ means any form of electronic, verbal, or written communication sent
56
+ to the Licensor or its representatives, including but not limited to
57
+ communication on electronic mailing lists, source code control systems,
58
+ and issue tracking systems that are managed by, or on behalf of, the
59
+ Licensor for the purpose of discussing and improving the Work, but
60
+ excluding communication that is conspicuously marked or otherwise
61
+ designated in writing by the copyright owner as "Not a Contribution."
62
+
63
+ "Contributor" shall mean Licensor and any individual or Legal Entity
64
+ on behalf of whom a Contribution has been received by Licensor and
65
+ subsequently incorporated within the Work.
66
+
67
+ 2. Grant of Copyright License. Subject to the terms and conditions of
68
+ this License, each Contributor hereby grants to You a perpetual,
69
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
70
+ copyright license to reproduce, prepare Derivative Works of,
71
+ publicly display, publicly perform, sublicense, and distribute the
72
+ Work and such Derivative Works in Source or Object form.
73
+
74
+ 3. Grant of Patent License. Subject to the terms and conditions of
75
+ this License, each Contributor hereby grants to You a perpetual,
76
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
77
+ (except as stated in this section) patent license to make, have made,
78
+ use, offer to sell, sell, import, and otherwise transfer the Work,
79
+ where such license applies only to those patent claims licensable
80
+ by such Contributor that are necessarily infringed by their
81
+ Contribution(s) alone or by combination of their Contribution(s)
82
+ with the Work to which such Contribution(s) was submitted. If You
83
+ institute patent litigation against any entity (including a
84
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
85
+ or a Contribution incorporated within the Work constitutes direct
86
+ or contributory patent infringement, then any patent licenses
87
+ granted to You under this License for that Work shall terminate
88
+ as of the date such litigation is filed.
89
+
90
+ 4. Redistribution. You may reproduce and distribute copies of the
91
+ Work or Derivative Works thereof in any medium, with or without
92
+ modifications, and in Source or Object form, provided that You
93
+ meet the following conditions:
94
+
95
+ (a) You must give any other recipients of the Work or
96
+ Derivative Works a copy of this License; and
97
+
98
+ (b) You must cause any modified files to carry prominent notices
99
+ stating that You changed the files; and
100
+
101
+ (c) You must retain, in the Source form of any Derivative Works
102
+ that You distribute, all copyright, patent, trademark, and
103
+ attribution notices from the Source form of the Work,
104
+ excluding those notices that do not pertain to any part of
105
+ the Derivative Works; and
106
+
107
+ (d) If the Work includes a "NOTICE" text file as part of its
108
+ distribution, then any Derivative Works that You distribute must
109
+ include a readable copy of the attribution notices contained
110
+ within such NOTICE file, excluding those notices that do not
111
+ pertain to any part of the Derivative Works, in at least one
112
+ of the following places: within a NOTICE text file distributed
113
+ as part of the Derivative Works; within the Source form or
114
+ documentation, if provided along with the Derivative Works; or,
115
+ within a display generated by the Derivative Works, if and
116
+ wherever such third-party notices normally appear. The contents
117
+ of the NOTICE file are for informational purposes only and
118
+ do not modify the License. You may add Your own attribution
119
+ notices within Derivative Works that You distribute, alongside
120
+ or as an addendum to the NOTICE text from the Work, provided
121
+ that such additional attribution notices cannot be construed
122
+ as modifying the License.
123
+
124
+ You may add Your own copyright statement to Your modifications and
125
+ may provide additional or different license terms and conditions
126
+ for use, reproduction, or distribution of Your modifications, or
127
+ for any such Derivative Works as a whole, provided Your use,
128
+ reproduction, and distribution of the Work otherwise complies with
129
+ the conditions stated in this License.
130
+
131
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
132
+ any Contribution intentionally submitted for inclusion in the Work
133
+ by You to the Licensor shall be under the terms and conditions of
134
+ this License, without any additional terms or conditions.
135
+ Notwithstanding the above, nothing herein shall supersede or modify
136
+ the terms of any separate license agreement you may have executed
137
+ with Licensor regarding such Contributions.
138
+
139
+ 6. Trademarks. This License does not grant permission to use the trade
140
+ names, trademarks, service marks, or product names of the Licensor,
141
+ except as required for reasonable and customary use in describing the
142
+ origin of the Work and reproducing the content of the NOTICE file.
143
+
144
+ 7. Disclaimer of Warranty. Unless required by applicable law or
145
+ agreed to in writing, Licensor provides the Work (and each
146
+ Contributor provides its Contributions) on an "AS IS" BASIS,
147
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
148
+ implied, including, without limitation, any warranties or conditions
149
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
150
+ PARTICULAR PURPOSE. You are solely responsible for determining the
151
+ appropriateness of using or redistributing the Work and assume any
152
+ risks associated with Your exercise of permissions under this License.
153
+
154
+ 8. Limitation of Liability. In no event and under no legal theory,
155
+ whether in tort (including negligence), contract, or otherwise,
156
+ unless required by applicable law (such as deliberate and grossly
157
+ negligent acts) or agreed to in writing, shall any Contributor be
158
+ liable to You for damages, including any direct, indirect, special,
159
+ incidental, or consequential damages of any character arising as a
160
+ result of this License or out of the use or inability to use the
161
+ Work (including but not limited to damages for loss of goodwill,
162
+ work stoppage, computer failure or malfunction, or any and all
163
+ other commercial damages or losses), even if such Contributor
164
+ has been advised of the possibility of such damages.
165
+
166
+ 9. Accepting Warranty or Additional Liability. While redistributing
167
+ the Work or Derivative Works thereof, You may choose to offer,
168
+ and charge a fee for, acceptance of support, warranty, indemnity,
169
+ or other liability obligations and/or rights consistent with this
170
+ License. However, in accepting such obligations, You may act only
171
+ on Your own behalf and on Your sole responsibility, not on behalf
172
+ of any other Contributor, and only if You agree to indemnify,
173
+ defend, and hold each Contributor harmless for any liability
174
+ incurred by, or claims asserted against, such Contributor by reason
175
+ of your accepting any such warranty or additional liability.
176
+
177
+ END OF TERMS AND CONDITIONS
178
+
179
+ APPENDIX: How to apply the Apache License to your work.
180
+
181
+ To apply the Apache License to your work, attach the following
182
+ boilerplate notice, with the fields enclosed by brackets "[]"
183
+ replaced with your own identifying information. (Don't include
184
+ the brackets!) The text should be enclosed in the appropriate
185
+ comment syntax for the file format. We also recommend that a
186
+ file or class name and description of purpose be included on the
187
+ same "printed page" as the copyright notice for easier
188
+ identification within third-party archives.
189
+
190
+ Copyright [yyyy] [name of copyright owner]
191
+
192
+ Licensed under the Apache License, Version 2.0 (the "License");
193
+ you may not use this file except in compliance with the License.
194
+ You may obtain a copy of the License at
195
+
196
+ http://www.apache.org/licenses/LICENSE-2.0
197
+
198
+ Unless required by applicable law or agreed to in writing, software
199
+ distributed under the License is distributed on an "AS IS" BASIS,
200
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
201
+ See the License for the specific language governing permissions and
202
+ limitations under the License.
models/handpose_estimation_mediapipe/README.md ADDED
@@ -0,0 +1,34 @@
1
+ # Hand pose estimation from MediaPipe Handpose
2
+
3
+ This model estimates 21 hand keypoints for each hand detected by the [palm detector](../palm_detection_mediapipe). (The image below is referenced from [MediaPipe Hands Keypoints](https://github.com/tensorflow/tfjs-models/tree/master/hand-pose-detection#mediapipe-hands-keypoints-used-in-mediapipe-hands).)
4
+
5
+ ![MediaPipe Hands Keypoints](./examples/hand_keypoints.png)
6
+
7
+ This model is converted from TensorFlow.js to ONNX using the following tools:
8
+ - tfjs to tf_saved_model: https://github.com/patlevin/tfjs-to-tf/
9
+ - tf_saved_model to ONNX: https://github.com/onnx/tensorflow-onnx
10
+ - simplified by [onnx-simplifier](https://github.com/daquexian/onnx-simplifier)
11
+
12
+ Also note that the model is quantized in per-channel mode with [Intel's neural compressor](https://github.com/intel/neural-compressor), which gives better accuracy but may run slower.
13
+
14
+ ## Demo
15
+
16
+ Run the following commands to try the demo:
17
+ ```bash
18
+ # detect on camera input
19
+ python demo.py
20
+ # detect on an image
21
+ python demo.py -i /path/to/image
22
+ ```
23
+
24
+ ### Example outputs
25
+
26
+ ![webcam demo](./examples/mphandpose_demo.gif)
27
+
28
+ ## License
29
+
30
+ All files in this directory are licensed under [Apache 2.0 License](./LICENSE).
31
+
32
+ ## Reference
33
+
34
+ - MediaPipe Handpose: https://github.com/tensorflow/tfjs-models/tree/master/handpose
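For calling the model from your own code rather than through the demo script, here is a condensed sketch of the same palm-detection-then-handpose pipeline as demo.py below (paths assume the repo layout and that both ONNX files have been downloaded):

```python
import sys
import numpy as np
import cv2 as cv

# make both model wrappers importable when running from the repo root
sys.path.append('models/palm_detection_mediapipe')
sys.path.append('models/handpose_estimation_mediapipe')
from mp_palmdet import MPPalmDet
from mp_handpose import MPHandPose

palm_detector = MPPalmDet(modelPath='models/palm_detection_mediapipe/palm_detection_mediapipe_2022may.onnx',
                          nmsThreshold=0.3, scoreThreshold=0.8)
handpose_detector = MPHandPose(modelPath='models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2022may.onnx',
                               confThreshold=0.8)

image = cv.imread('/path/to/image')
palms = palm_detector.infer(image)           # one row per detected palm
hands = np.empty(shape=(0, 47))
for palm in palms:
    handpose = handpose_detector.infer(image, palm)
    if handpose is not None:                 # filtered out below confThreshold
        hands = np.vstack((hands, handpose))
print('{} hand(s) found'.format(len(hands)))
```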
models/handpose_estimation_mediapipe/demo.py ADDED
@@ -0,0 +1,167 @@
1
+ import sys
2
+ import argparse
3
+
4
+ import numpy as np
5
+ import cv2 as cv
6
+
7
+ from mp_handpose import MPHandPose
8
+
9
+ sys.path.append('../palm_detection_mediapipe')
10
+ from mp_palmdet import MPPalmDet
11
+
12
+ def str2bool(v):
13
+ if v.lower() in ['on', 'yes', 'true', 'y', 't']:
14
+ return True
15
+ elif v.lower() in ['off', 'no', 'false', 'n', 'f']:
16
+ return False
17
+ else:
18
+ raise NotImplementedError
19
+
20
+ backends = [cv.dnn.DNN_BACKEND_OPENCV, cv.dnn.DNN_BACKEND_CUDA]
21
+ targets = [cv.dnn.DNN_TARGET_CPU, cv.dnn.DNN_TARGET_CUDA, cv.dnn.DNN_TARGET_CUDA_FP16]
22
+ help_msg_backends = "Choose one of the computation backends: {:d}: OpenCV implementation (default); {:d}: CUDA"
23
+ help_msg_targets = "Chose one of the target computation devices: {:d}: CPU (default); {:d}: CUDA; {:d}: CUDA fp16"
24
+ try:
25
+ backends += [cv.dnn.DNN_BACKEND_TIMVX]
26
+ targets += [cv.dnn.DNN_TARGET_NPU]
27
+ help_msg_backends += "; {:d}: TIMVX"
28
+ help_msg_targets += "; {:d}: NPU"
29
+ except:
30
+ print('This version of OpenCV does not support TIM-VX and NPU. Visit https://gist.github.com/fengyuentau/5a7a5ba36328f2b763aea026c43fa45f for more information.')
31
+
32
+ parser = argparse.ArgumentParser(description='Hand Pose Estimation from MediaPipe')
33
+ parser.add_argument('--input', '-i', type=str, help='Path to the input image. Omit for using default camera.')
34
+ parser.add_argument('--model', '-m', type=str, default='./handpose_estimation_mediapipe_2022may.onnx', help='Path to the model.')
35
+ parser.add_argument('--backend', '-b', type=int, default=backends[0], help=help_msg_backends.format(*backends))
36
+ parser.add_argument('--target', '-t', type=int, default=targets[0], help=help_msg_targets.format(*targets))
37
+ parser.add_argument('--conf_threshold', type=float, default=0.8, help='Filter out hands of confidence < conf_threshold.')
38
+ parser.add_argument('--save', '-s', type=str2bool, default=False, help='Set true to save results. This flag is invalid when using camera.')
39
+ parser.add_argument('--vis', '-v', type=str2bool, default=True, help='Set true to open a window for result visualization. This flag is invalid when using camera.')
40
+ args = parser.parse_args()
41
+
42
+
43
+ def visualize(image, hands, print_result=False):
44
+ output = image.copy()
45
+
46
+ for idx, handpose in enumerate(hands):
47
+ conf = handpose[-1]
48
+ bbox = handpose[0:4].astype(np.int32)
49
+ landmarks = handpose[4:-1].reshape(21, 2).astype(np.int32)
50
+
51
+ # Print results
52
+ if print_result:
53
+ print('-----------hand {}-----------'.format(idx + 1))
54
+ print('conf: {:.2f}'.format(conf))
55
+ print('hand box: {}'.format(bbox))
56
+ print('hand landmarks: ')
57
+ for l in landmarks:
58
+ print('\t{}'.format(l))
59
+
60
+ # Draw line between each key points
61
+ cv.line(output, landmarks[0], landmarks[1], (255, 255, 255), 2)
62
+ cv.line(output, landmarks[1], landmarks[2], (255, 255, 255), 2)
63
+ cv.line(output, landmarks[2], landmarks[3], (255, 255, 255), 2)
64
+ cv.line(output, landmarks[3], landmarks[4], (255, 255, 255), 2)
65
+
66
+ cv.line(output, landmarks[0], landmarks[5], (255, 255, 255), 2)
67
+ cv.line(output, landmarks[5], landmarks[6], (255, 255, 255), 2)
68
+ cv.line(output, landmarks[6], landmarks[7], (255, 255, 255), 2)
69
+ cv.line(output, landmarks[7], landmarks[8], (255, 255, 255), 2)
70
+
71
+ cv.line(output, landmarks[0], landmarks[9], (255, 255, 255), 2)
72
+ cv.line(output, landmarks[9], landmarks[10], (255, 255, 255), 2)
73
+ cv.line(output, landmarks[10], landmarks[11], (255, 255, 255), 2)
74
+ cv.line(output, landmarks[11], landmarks[12], (255, 255, 255), 2)
75
+
76
+ cv.line(output, landmarks[0], landmarks[13], (255, 255, 255), 2)
77
+ cv.line(output, landmarks[13], landmarks[14], (255, 255, 255), 2)
78
+ cv.line(output, landmarks[14], landmarks[15], (255, 255, 255), 2)
79
+ cv.line(output, landmarks[15], landmarks[16], (255, 255, 255), 2)
80
+
81
+ cv.line(output, landmarks[0], landmarks[17], (255, 255, 255), 2)
82
+ cv.line(output, landmarks[17], landmarks[18], (255, 255, 255), 2)
83
+ cv.line(output, landmarks[18], landmarks[19], (255, 255, 255), 2)
84
+ cv.line(output, landmarks[19], landmarks[20], (255, 255, 255), 2)
85
+
86
+ for p in landmarks:
87
+ cv.circle(output, p, 2, (0, 0, 255), 2)
88
+
89
+ return output
90
+
91
+
92
+ if __name__ == '__main__':
93
+ # palm detector
94
+ palm_detector = MPPalmDet(modelPath='../palm_detection_mediapipe/palm_detection_mediapipe_2022may.onnx',
95
+ nmsThreshold=0.3,
96
+ scoreThreshold=0.8,
97
+ backendId=args.backend,
98
+ targetId=args.target)
99
+ # handpose detector
100
+ handpose_detector = MPHandPose(modelPath=args.model,
101
+ confThreshold=args.conf_threshold,
102
+ backendId=args.backend,
103
+ targetId=args.target)
104
+
105
+ # If input is an image
106
+ if args.input is not None:
107
+ image = cv.imread(args.input)
108
+
109
+ # Palm detector inference
110
+ palms = palm_detector.infer(image)
111
+ hands = np.empty(shape=(0, 47))
112
+
113
+ # Estimate the pose of each hand
114
+ for palm in palms:
115
+ # Handpose detector inference
116
+ handpose = handpose_detector.infer(image, palm)
117
+ if handpose is not None:
118
+ hands = np.vstack((hands, handpose))
119
+ # Draw results on the input image
120
+ image = visualize(image, hands, True)
121
+
122
+ if len(palms) == 0:
123
+ print('No palm detected!')
124
+
125
+ # Save results
126
+ if args.save:
127
+ cv.imwrite('result.jpg', image)
128
+ print('Results saved to result.jpg\n')
129
+
130
+ # Visualize results in a new window
131
+ if args.vis:
132
+ cv.namedWindow(args.input, cv.WINDOW_AUTOSIZE)
133
+ cv.imshow(args.input, image)
134
+ cv.waitKey(0)
135
+ else: # Omit input to call default camera
136
+ deviceId = 0
137
+ cap = cv.VideoCapture(deviceId)
138
+
139
+ tm = cv.TickMeter()
140
+ while cv.waitKey(1) < 0:
141
+ hasFrame, frame = cap.read()
142
+ if not hasFrame:
143
+ print('No frames grabbed!')
144
+ break
145
+
146
+ # Palm detector inference
147
+ palms = palm_detector.infer(frame)
148
+ hands = np.empty(shape=(0, 47))
149
+
150
+ tm.start()
151
+ # Estimate the pose of each hand
152
+ for palm in palms:
153
+ # Handpose detector inference
154
+ handpose = handpose_detector.infer(frame, palm)
155
+ if handpose is not None:
156
+ hands = np.vstack((hands, handpose))
157
+ tm.stop()
158
+ # Draw results on the input image
159
+ frame = visualize(frame, hands)
160
+
161
+ if len(palms) == 0:
162
+ print('No palm detected!')
163
+ else:
164
+ cv.putText(frame, 'FPS: {:.2f}'.format(tm.getFPS()), (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255))
165
+
166
+ cv.imshow('MediaPipe Handpose Detection Demo', frame)
167
+ tm.reset()
models/handpose_estimation_mediapipe/mp_handpose.py ADDED
@@ -0,0 +1,165 @@
1
+ import numpy as np
2
+ import cv2 as cv
3
+
4
+
5
+ class MPHandPose:
6
+ def __init__(self, modelPath, confThreshold=0.8, backendId=0, targetId=0):
7
+ self.model_path = modelPath
8
+ self.conf_threshold = confThreshold
9
+ self.backend_id = backendId
10
+ self.target_id = targetId
11
+
12
+ self.input_size = np.array([256, 256]) # wh
13
+ self.PALM_LANDMARK_IDS = [0, 5, 9, 13, 17, 1, 2]
14
+ self.PALM_LANDMARKS_INDEX_OF_PALM_BASE = 0
15
+ self.PALM_LANDMARKS_INDEX_OF_MIDDLE_FINGER_BASE = 2
16
+ self.PALM_BOX_SHIFT_VECTOR = [0, -0.4]
17
+ self.PALM_BOX_ENLARGE_FACTOR = 3
18
+ self.HAND_BOX_SHIFT_VECTOR = [0, -0.1]
19
+ self.HAND_BOX_ENLARGE_FACTOR = 1.65
20
+
21
+ self.model = cv.dnn.readNet(self.model_path)
22
+ self.model.setPreferableBackend(self.backend_id)
23
+ self.model.setPreferableTarget(self.target_id)
24
+
25
+ @property
26
+ def name(self):
27
+ return self.__class__.__name__
28
+
29
+ def setBackend(self, backendId):
30
+ self.backend_id = backendId
31
+ self.model.setPreferableBackend(self.backend_id)
32
+
33
+ def setTarget(self, targetId):
34
+ self.target_id = targetId
35
+ self.model.setPreferableTarget(self.target_id)
36
+
37
+ def _preprocess(self, image, palm):
38
+ '''
39
+ Rotate input for inference.
40
+ Parameters:
41
+ image - input image of BGR channel order
42
+ palm - a single palm detection from MPPalmDet; palm[0:4] is the palm bounding box [x1, y1, x2, y2] (top-left and bottom-right points)
43
+ and palm[4:18] holds 7 palm landmarks (5 finger base points, 2 palm base points), reshaped to [7, 2]
44
+ Returns:
45
+ blob - rotated and cropped hand image as a (1, 256, 256, 3) blob ready for inference
46
+ rotated_palm_bbox, angle, rotation_matrix - rotation info used to map landmarks back to the input image in _postprocess
47
+ '''
48
+ # Rotate input to have vertically oriented hand image
49
+ # compute rotation
50
+ palm_bbox = palm[0:4].reshape(2, 2)
51
+ palm_landmarks = palm[4:18].reshape(7, 2)
52
+ image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
53
+
54
+ p1 = palm_landmarks[self.PALM_LANDMARKS_INDEX_OF_PALM_BASE]
55
+ p2 = palm_landmarks[self.PALM_LANDMARKS_INDEX_OF_MIDDLE_FINGER_BASE]
56
+ radians = np.pi / 2 - np.arctan2(-(p2[1] - p1[1]), p2[0] - p1[0])
57
+ radians = radians - 2 * np.pi * np.floor((radians + np.pi) / (2 * np.pi))
58
+ angle = np.rad2deg(radians)
59
+ # get bbox center
60
+ center_palm_bbox = np.sum(palm_bbox, axis=0) / 2
61
+ # get rotation matrix
62
+ rotation_matrix = cv.getRotationMatrix2D(center_palm_bbox, angle, 1.0)
63
+ # get rotated image
64
+ rotated_image = cv.warpAffine(image, rotation_matrix, (image.shape[1], image.shape[0]))
65
+ # get bounding boxes from rotated palm landmarks
66
+ homogeneous_coord = np.c_[palm_landmarks, np.ones(palm_landmarks.shape[0])]
67
+ rotated_palm_landmarks = np.array([
68
+ np.dot(homogeneous_coord, rotation_matrix[0]),
69
+ np.dot(homogeneous_coord, rotation_matrix[1])])
70
+ # get landmark bounding box
71
+ rotated_palm_bbox = np.array([
72
+ np.amin(rotated_palm_landmarks, axis=1),
73
+ np.amax(rotated_palm_landmarks, axis=1)]) # [top-left, bottom-right]
74
+
75
+ # shift bounding box
76
+ wh_rotated_palm_bbox = rotated_palm_bbox[1] - rotated_palm_bbox[0]
77
+ shift_vector = self.PALM_BOX_SHIFT_VECTOR * wh_rotated_palm_bbox
78
+ rotated_palm_bbox = rotated_palm_bbox + shift_vector
79
+ # squarify bounding box
80
+ center_rotated_plam_bbox = np.sum(rotated_palm_bbox, axis=0) / 2
81
+ wh_rotated_palm_bbox = rotated_palm_bbox[1] - rotated_palm_bbox[0]
82
+ new_half_size = np.amax(wh_rotated_palm_bbox) / 2
83
+ rotated_palm_bbox = np.array([
84
+ center_rotated_plam_bbox - new_half_size,
85
+ center_rotated_plam_bbox + new_half_size])
86
+
87
+ # enlarge bounding box
88
+ center_rotated_plam_bbox = np.sum(rotated_palm_bbox, axis=0) / 2
89
+ wh_rotated_palm_bbox = rotated_palm_bbox[1] - rotated_palm_bbox[0]
90
+ new_half_size = wh_rotated_palm_bbox * self.PALM_BOX_ENLARGE_FACTOR / 2
91
+ rotated_palm_bbox = np.array([
92
+ center_rotated_plam_bbox - new_half_size,
93
+ center_rotated_plam_bbox + new_half_size])
94
+
95
+ # Crop and resize the rotated image by the bounding box
96
+ [[x1, y1], [x2, y2]] = rotated_palm_bbox.astype(np.int32)
97
+ diff = np.maximum([-x1, -y1, x2 - rotated_image.shape[1], y2 - rotated_image.shape[0]], 0)
98
+ [x1, y1, x2, y2] = [x1, y1, x2, y2] + diff
99
+ crop = rotated_image[y1:y2, x1:x2, :]
100
+ crop = cv.copyMakeBorder(crop, diff[1], diff[3], diff[0], diff[2], cv.BORDER_CONSTANT, value=(0, 0, 0))
101
+ blob = cv.resize(crop, dsize=self.input_size, interpolation=cv.INTER_AREA).astype(np.float32) / 255.0
102
+
103
+ return blob[np.newaxis, :, :, :], rotated_palm_bbox, angle, rotation_matrix
104
+
105
+ def infer(self, image, palm):
106
+ # Preprocess
107
+ input_blob, rotated_palm_bbox, angle, rotation_matrix = self._preprocess(image, palm)
108
+
109
+ # Forward
110
+ self.model.setInput(input_blob)
111
+ output_blob = self.model.forward(self.model.getUnconnectedOutLayersNames())
112
+
113
+ # Postprocess
114
+ results = self._postprocess(output_blob, rotated_palm_bbox, angle, rotation_matrix)
115
+ return results # [bbox_coords, landmarks_coords, conf]
116
+
117
+ def _postprocess(self, blob, rotated_palm_bbox, angle, rotation_matrix):
118
+ landmarks, conf = blob
119
+
120
+ if conf < self.conf_threshold:
121
+ return None
122
+
123
+ landmarks = landmarks.reshape(-1, 3) # shape: (1, 63) -> (21, 3)
124
+
125
+ # transform coords back to the input coords
126
+ wh_rotated_palm_bbox = rotated_palm_bbox[1] - rotated_palm_bbox[0]
127
+ scale_factor = wh_rotated_palm_bbox / self.input_size
128
+ landmarks[:, :2] = (landmarks[:, :2] - self.input_size / 2) * scale_factor
129
+ coords_rotation_matrix = cv.getRotationMatrix2D((0, 0), angle, 1.0)
130
+ rotated_landmarks = np.dot(landmarks[:, :2], coords_rotation_matrix[:, :2])
131
+ rotated_landmarks = np.c_[rotated_landmarks, landmarks[:, 2]]
132
+ # invert rotation
133
+ rotation_component = np.array([
134
+ [rotation_matrix[0][0], rotation_matrix[1][0]],
135
+ [rotation_matrix[0][1], rotation_matrix[1][1]]])
136
+ translation_component = np.array([
137
+ rotation_matrix[0][2], rotation_matrix[1][2]])
138
+ inverted_translation = np.array([
139
+ -np.dot(rotation_component[0], translation_component),
140
+ -np.dot(rotation_component[1], translation_component)])
141
+ inverse_rotation_matrix = np.c_[rotation_component, inverted_translation]
142
+ # get box center
143
+ center = np.append(np.sum(rotated_palm_bbox, axis=0) / 2, 1)
144
+ original_center = np.array([
145
+ np.dot(center, inverse_rotation_matrix[0]),
146
+ np.dot(center, inverse_rotation_matrix[1])])
147
+ landmarks = rotated_landmarks[:, :2] + original_center
148
+
149
+ # get bounding box from rotated_landmarks
150
+ bbox = np.array([
151
+ np.amin(landmarks, axis=0),
152
+ np.amax(landmarks, axis=0)]) # [top-left, bottom-right]
153
+ # shift bounding box
154
+ wh_bbox = bbox[1] - bbox[0]
155
+ shift_vector = self.HAND_BOX_SHIFT_VECTOR * wh_bbox
156
+ bbox = bbox + shift_vector
157
+ # enlarge bounding box
158
+ center_bbox = np.sum(bbox, axis=0) / 2
159
+ wh_bbox = bbox[1] - bbox[0]
160
+ new_half_size = wh_bbox * self.HAND_BOX_ENLARGE_FACTOR / 2
161
+ bbox = np.array([
162
+ center_bbox - new_half_size,
163
+ center_bbox + new_half_size])
164
+
165
+ return np.r_[bbox.reshape(-1), landmarks.reshape(-1), conf[0]]  # 47 values: [x1, y1, x2, y2], 21 (x, y) landmarks, conf
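For reference, each row returned by `infer()` has 47 elements, laid out exactly as `visualize()` in demo.py consumes them; a small illustrative sketch (the zero-filled array is only a placeholder, not real model output):

```python
import numpy as np

result = np.zeros(47, dtype=np.float32)    # placeholder for one row from MPHandPose.infer()
bbox = result[0:4].astype(np.int32)        # hand bounding box [x1, y1, x2, y2]
landmarks = result[4:-1].reshape(21, 2)    # 21 hand keypoints as (x, y) pairs
conf = float(result[-1])                   # confidence score
```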
tools/quantize/inc_configs/mp_handpose.yaml ADDED
@@ -0,0 +1,40 @@
1
+ #
2
+ # Copyright (c) 2021 Intel Corporation
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ version: 1.0
17
+
18
+ model: # mandatory. used to specify model specific information.
19
+ name: mp_handpose
20
+ framework: onnxrt_qlinearops # mandatory. supported values are tensorflow, pytorch, pytorch_ipex, onnxrt_integer, onnxrt_qlinear or mxnet; allow new framework backend extension.
21
+
22
+ quantization: # optional. tuning constraints on model-wise for advance user to reduce tuning space.
23
+ approach: post_training_static_quant # optional. default value is post_training_static_quant.
24
+ calibration:
25
+ dataloader:
26
+ batch_size: 1
27
+ dataset:
28
+ dummy:
29
+ shape: [1, 256, 256, 3]
30
+ low: -1.0
31
+ high: 1.0
32
+ dtype: float32
33
+ label: True
34
+
35
+ tuning:
36
+ accuracy_criterion:
37
+ relative: 0.02 # optional. default value is relative, other value is absolute. this example allows relative accuracy loss: 2%.
38
+ exit_policy:
39
+ timeout: 0 # optional. tuning timeout (seconds). default value is 0 which means early stop. combine with max_trials field to decide when to exit.
40
+ random_seed: 9527 # optional. random seed for deterministic tuning.
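The dummy calibration shape `[1, 256, 256, 3]` matches the NHWC blob that `MPHandPose._preprocess` feeds to the network. An optional, quick way to confirm the exported model's input layout, assuming onnxruntime is installed:

```python
import onnxruntime as ort

sess = ort.InferenceSession('models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2022may.onnx')
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # expected to show a 1 x 256 x 256 x 3 input
```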
tools/quantize/quantize-inc.py CHANGED
@@ -78,6 +78,9 @@ models=dict(
78
  mp_palmdet=Quantize(model_path='../../models/palm_detection_mediapipe/palm_detection_mediapipe_2022may.onnx',
79
  config_path='./inc_configs/mp_palmdet.yaml',
80
  custom_dataset=Dataset(root='../../benchmark/data/palm_detection', dim='hwc', swapRB=True, mean=127.5, std=127.5, toFP32=True)),
 
 
 
81
  lpd_yunet=Quantize(model_path='../../models/license_plate_detection_yunet/license_plate_detection_lpd_yunet_2022may.onnx',
82
  config_path='./inc_configs/lpd_yunet.yaml',
83
  custom_dataset=Dataset(root='../../benchmark/data/license_plate_detection', size=(320, 240), dim='chw', toFP32=True)),
 
78
  mp_palmdet=Quantize(model_path='../../models/palm_detection_mediapipe/palm_detection_mediapipe_2022may.onnx',
79
  config_path='./inc_configs/mp_palmdet.yaml',
80
  custom_dataset=Dataset(root='../../benchmark/data/palm_detection', dim='hwc', swapRB=True, mean=127.5, std=127.5, toFP32=True)),
81
+ mp_handpose=Quantize(model_path='../../models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2022may.onnx',
82
+ config_path='./inc_configs/mp_handpose.yaml',
83
+ custom_dataset=Dataset(root='../../benchmark/data/palm_detection', dim='hwc', swapRB=True, mean=127.5, std=127.5, toFP32=True)),
84
  lpd_yunet=Quantize(model_path='../../models/license_plate_detection_yunet/license_plate_detection_lpd_yunet_2022may.onnx',
85
  config_path='./inc_configs/lpd_yunet.yaml',
86
  custom_dataset=Dataset(root='../../benchmark/data/license_plate_detection', size=(320, 240), dim='chw', toFP32=True)),
tools/quantize/quantize-ort.py CHANGED
@@ -6,7 +6,7 @@
6
 
7
  import os
8
  import sys
9
- import numpy as ny
10
  import cv2 as cv
11
 
12
  import onnx
 
6
 
7
  import os
8
  import sys
9
+ import numpy as np
10
  import cv2 as cv
11
 
12
  import onnx
tools/quantize/requirements.txt CHANGED
@@ -2,4 +2,3 @@ opencv-python>=4.5.4.58
2
  onnx
3
  onnxruntime
4
  neural-compressor
5
-
 
2
  onnx
3
  onnxruntime
4
  neural-compressor