Wanli committed
Commit ebeb80f · 1 parent: aa30eea

Update FER quantized model and fix document (#129)

* update document and fix bugs
* re-quantized model to per_tensor mode
* update KV3-NPU benchmark result
README.md CHANGED

@@ -19,7 +19,7 @@ Guidelines:
 | ------------------------------------------------------- | ----------------------------- | ---------- | -------------- | ------------ | --------------- | ------------ | ----------- |
 | [YuNet](./models/face_detection_yunet) | Face Detection | 160x120 | 1.45 | 6.22 | 12.18 | 4.04 | 86.69 |
 | [SFace](./models/face_recognition_sface) | Face Recognition | 112x112 | 8.65 | 99.20 | 24.88 | 46.25 | --- |
-| [FER](./models/facial_expression_recognition/) | Facial Expression Recognition | 112x112 | 4.43 | 49.86 | 31.07 | --- | --- |
+| [FER](./models/facial_expression_recognition/) | Facial Expression Recognition | 112x112 | 4.43 | 49.86 | 31.07 | 29.80 | --- |
 | [LPD-YuNet](./models/license_plate_detection_yunet/) | License Plate Detection | 320x240 | --- | 168.03 | 56.12 | 29.53 | --- |
 | [YOLOX](./models/object_detection_yolox/) | Object Detection | 640x640 | 176.68 | 1496.70 | 388.95 | 420.98 | --- |
 | [NanoDet](./models/object_detection_nanodet/) | Object Detection | 416x416 | 157.91 | 220.36 | 64.94 | 116.64 | --- |
@@ -63,7 +63,7 @@ Some examples are listed below. You can find more in the directory of each model.

-### Facial Expression Recognition with Progressive Teacher(./models/facial_expression_recognition/)
+### Facial Expression Recognition with [Progressive Teacher](./models/facial_expression_recognition/)
models/facial_expression_recognition/README.md CHANGED

@@ -6,7 +6,7 @@ Progressive Teacher: [Boosting Facial Expression Recognition by A Semi-Supervised Progressive Teacher]
 Note:
 - Progressive Teacher is contributed by [Jing Jiang](https://scholar.google.com/citations?user=OCwcfAwAAAAJ&hl=zh-CN).
 - [MobileFaceNet](https://link.springer.com/chapter/10.1007/978-3-319-97909-0_46) is used as the backbone and the model is able to classify seven basic facial expressions (angry, disgust, fearful, happy, neutral, sad, surprised).
-- [facial_expression_recognition_mobilefacenet_2022july.onnx](https://github.com/opencv/opencv_zoo/raw/master/models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july.onnx) is implemented thanks to [Chengrui Wang](https://github.com/
+- [facial_expression_recognition_mobilefacenet_2022july.onnx](https://github.com/opencv/opencv_zoo/raw/master/models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july.onnx) is implemented thanks to [Chengrui Wang](https://github.com/crywang).

 Results of accuracy evaluation on [RAF-DB](http://whdeng.cn/RAF/model1.html).
models/facial_expression_recognition/demo.py CHANGED

@@ -35,7 +35,7 @@ except:

 parser = argparse.ArgumentParser(description='Facial Expression Recognition')
 parser.add_argument('--input', '-i', type=str, help='Path to the input image. Omit for using default camera.')
-parser.add_argument('--model', '-
+parser.add_argument('--model', '-m', type=str, default='./facial_expression_recognition_mobilefacenet_2022july.onnx', help='Path to the facial expression recognition model.')
 parser.add_argument('--backend', '-b', type=int, default=backends[0], help=help_msg_backends.format(*backends))
 parser.add_argument('--target', '-t', type=int, default=targets[0], help=help_msg_targets.format(*targets))
 parser.add_argument('--save', '-s', type=str, default=False, help='Set true to save results. This flag is invalid when using camera.')
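For reference, a minimal invocation of the updated demo — a sketch assuming the model file sits next to demo.py, with `image.jpg` standing in for any input picture:

```shell
# Run FER on one image with an explicit model path (same value as the new default).
python demo.py --model ./facial_expression_recognition_mobilefacenet_2022july.onnx --input image.jpg
```

Omitting `--input` falls back to the default camera, as the help text above states.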
tools/quantize/inc_configs/fer.yaml CHANGED

@@ -17,9 +17,21 @@ quantization: # optional. tuning constraints on model-wise for advanced users to reduce tuning space.
           dtype: float32
           label: True

+  model_wise: # optional. tuning constraints on model-wise for advanced users to reduce tuning space.
+    weight:
+      granularity: per_tensor
+      scheme: asym
+      dtype: int8
+      algorithm: minmax
+    activation:
+      granularity: per_tensor
+      scheme: asym
+      dtype: int8
+      algorithm: minmax
+
 tuning:
   accuracy_criterion:
-    relative: 0.
+    relative: 0.02 # optional. default value is relative, other value is absolute. this example allows relative accuracy loss: 2%.
   exit_policy:
     timeout: 0 # optional. tuning timeout (seconds). default value is 0 which means early stop. combine with max_trials field to decide when to exit.
     max_trials: 50 # optional. max tune times. default value is 100. combine with timeout field to decide when to exit.
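For context, this yaml drives Intel Neural Compressor: the new model_wise block constrains both weights and activations to a single scale/zero-point per tensor (per_tensor), which NPU runtimes such as the KV3's typically require, in exchange for the looser 2% relative accuracy budget set above. A minimal sketch of how such a config is consumed with the INC 1.x experimental API of that era; the paths and output filename here are illustrative assumptions, not part of this commit:

```python
# Sketch: post-training quantization driven by the fer.yaml config above,
# using the Intel Neural Compressor 1.x "experimental" API.
from neural_compressor.experimental import Quantization, common

# The yaml supplies the calibration dataset, the per_tensor model-wise
# constraints, and the 2% relative accuracy-loss tuning criterion.
quantizer = Quantization("tools/quantize/inc_configs/fer.yaml")

# FP32 ONNX model to quantize (filename from the FER README above).
quantizer.model = common.Model(
    "models/facial_expression_recognition/"
    "facial_expression_recognition_mobilefacenet_2022july.onnx")

# Run calibration and tuning; INC stops once the accuracy criterion is met
# or after max_trials (50) attempts.
q_model = quantizer()

# Output filename is illustrative.
q_model.save("facial_expression_recognition_mobilefacenet_2022july_int8.onnx")
```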