Add back example output images as LFS-tracked files (#166)
* lfs track images now
* add back example images
* modify paths
* modify paths part 2
* correct paths in each model's readme
- .gitattributes +7 -1
- README.md +15 -15
- models/face_detection_yunet/README.md +2 -2
- models/facial_expression_recognition/README.md +1 -1
- models/handpose_estimation_mediapipe/README.md +2 -2
- models/human_segmentation_pphumanseg/README.md +2 -2
- models/license_plate_detection_yunet/README.md +1 -1
- models/object_detection_nanodet/README.md +3 -3
- models/object_detection_yolox/README.md +3 -3
- models/object_tracking_dasiamrpn/README.md +1 -1
- models/palm_detection_mediapipe/README.md +1 -1
- models/person_detection_mediapipe/README.md +1 -1
- models/pose_estimation_mediapipe/README.md +1 -1
- models/qrcode_wechatqrcode/README.md +1 -1
- models/text_detection_db/README.md +2 -2
- models/text_recognition_crnn/README.md +2 -2
.gitattributes CHANGED

@@ -13,4 +13,10 @@
 *.weights filter=lfs diff=lfs merge=lfs -text
 
 # ONNX
-*.onnx filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+
+# Images
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.gif filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.webp filter=lfs diff=lfs merge=lfs -text
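These filter lines are exactly what `git lfs track` writes into `.gitattributes`. A minimal sketch of reproducing the tracking commit locally (it assumes the git-lfs client is installed; the patterns and commit message are taken from this PR):

```sh
# One-time setup: install the LFS hooks for this clone.
git lfs install

# Append one "filter=lfs diff=lfs merge=lfs -text" rule per pattern
# to .gitattributes, matching the hunk above.
git lfs track "*.jpg" "*.gif" "*.png" "*.webp"

# The rules only take effect once .gitattributes itself is committed.
git add .gitattributes
git commit -m "lfs track images now"
```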
README.md CHANGED

@@ -50,61 +50,61 @@ Some examples are listed below. You can find more in the directory of each model
 
 ### Face Detection with [YuNet](./models/face_detection_yunet/)
 
+
 
 ### Facial Expression Recognition with [Progressive Teacher](./models/facial_expression_recognition/)
 
+
 
 ### Human Segmentation with [PP-HumanSeg](./models/human_segmentation_pphumanseg/)
 
+
 
 ### License Plate Detection with [LPD_YuNet](./models/license_plate_detection_yunet/)
 
+
 
 ### Object Detection with [NanoDet](./models/object_detection_nanodet/) & [YOLOX](./models/object_detection_yolox/)
 
+
 
+
 
 ### Object Tracking with [DaSiamRPN](./models/object_tracking_dasiamrpn/)
 
+
 
 ### Palm Detection with [MP-PalmDet](./models/palm_detection_mediapipe/)
 
+
 
 ### Hand Pose Estimation with [MP-HandPose](models/handpose_estimation_mediapipe/)
 
+
 
 ### Person Detection with [MP-PersonDet](./models/person_detection_mediapipe)
 
+
 
 ### Pose Estimation with [MP-Pose](models/pose_estimation_mediapipe)
 
+
 
 ### QR Code Detection and Parsing with [WeChatQRCode](./models/qrcode_wechatqrcode/)
 
+
 
 ### Chinese Text detection [DB](./models/text_detection_db/)
 
+
 
 ### English Text detection [DB](./models/text_detection_db/)
 
+
 
 ### Text Detection with [CRNN](./models/text_recognition_crnn/)
 
+
 
 ## License
 
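Note that `.gitattributes` rules only apply to files staged after the rules exist; images already in the index would stay as plain blobs. A hedged sketch of re-staging them as LFS pointers — the PR itself does not show the exact commands used, only the resulting commits:

```sh
# Re-run the clean/smudge filters over all tracked files so matching
# images are rewritten as LFS pointer files in the index (git 2.16+).
git add --renormalize .

# Newly added images under the example_outputs/ directories referenced
# by the READMEs pick up the new rules automatically.
git add models/*/example_outputs/
git commit -m "add back example images"
```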
models/face_detection_yunet/README.md CHANGED

@@ -53,9 +53,9 @@ cmake --build build
 
 ### Example outputs
 
+
 
+
 
 ## License
 
models/facial_expression_recognition/README.md CHANGED

@@ -29,7 +29,7 @@ python demo.py --input /path/to/image -v
 
 Note: Zoom in to to see the recognized facial expression in the top-left corner of each face boxes.
 
+
 
 ## License
 
models/handpose_estimation_mediapipe/README.md CHANGED

@@ -2,7 +2,7 @@
 
 This model estimates 21 hand keypoints per detected hand from [palm detector](../palm_detection_mediapipe). (The image below is referenced from [MediaPipe Hands Keypoints](https://github.com/tensorflow/tfjs-models/tree/master/hand-pose-detection#mediapipe-hands-keypoints-used-in-mediapipe-hands))
 
+
 
 This model is converted from TFlite to ONNX using following tools:
 - TFLite model to ONNX: https://github.com/onnx/tensorflow-onnx
@@ -24,7 +24,7 @@
 
 ### Example outputs
 
+
 
 ## License
 
models/human_segmentation_pphumanseg/README.md CHANGED

@@ -18,9 +18,9 @@ python demo.py --help
 
 ### Example outputs
 
+
 
+
 
 ---
 Results of accuracy evaluation with [tools/eval](../../tools/eval).
models/license_plate_detection_yunet/README.md CHANGED

@@ -19,7 +19,7 @@ python demo.py --help
 
 ### Example outputs
 
+
 
 ## License
 
models/object_detection_nanodet/README.md CHANGED

@@ -22,13 +22,13 @@ Note:
 
 Here are some of the sample results that were observed using the model,
 
+
 
 Check [benchmark/download_data.py](../../benchmark/download_data.py) for the original images.
 
 Video inference result,
+
 
 ## Model metrics:
 
models/object_detection_yolox/README.md CHANGED

@@ -29,9 +29,9 @@ Note:
 
 Here are some of the sample results that were observed using the model (**yolox_s.onnx**),
 
+
+
+
 
 Check [benchmark/download_data.py](../../benchmark/download_data.py) for the original images.
 
models/object_tracking_dasiamrpn/README.md CHANGED

@@ -23,7 +23,7 @@ python demo.py --help
 
 ### Example outputs
 
+
 
 ## License
 
models/palm_detection_mediapipe/README.md CHANGED

@@ -26,7 +26,7 @@ python demo.py --help
 
 ### Example outputs
 
+
 
 ## License
 
models/person_detection_mediapipe/README.md CHANGED

@@ -23,7 +23,7 @@ python demo.py --help
 
 ### Example outputs
 
+
 
 ## License
 
models/pose_estimation_mediapipe/README.md CHANGED

@@ -22,7 +22,7 @@ python demo.py -i /path/to/image -v
 
 ### Example outputs
 
+
 
 ## License
 
models/qrcode_wechatqrcode/README.md CHANGED

@@ -23,7 +23,7 @@ python demo.py --help
 
 ### Example outputs
 
+
 
 ## License
 
models/text_detection_db/README.md CHANGED

@@ -25,9 +25,9 @@ python demo.py --help
 
 ### Example outputs
 
+
 
+
 
 ## License
 
models/text_recognition_crnn/README.md CHANGED

@@ -62,9 +62,9 @@ python demo.py --help
 
 ### Examples
 
+
 
+
 
 ## License
 
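After a change like this, the READMEs only render their images if the LFS objects are actually present. A quick way to verify that the re-added files are stored as pointers, and to fetch their content in an existing clone:

```sh
# List every path currently stored as an LFS object.
git lfs ls-files

# Download and check out the real image content (needed in clones
# made before git-lfs was installed).
git lfs pull
```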