ytfeng committed
Commit 69ad792 · Parent: ab8d410

Add back example output images as lfs tracked files (#166)


* lfs track images now

* add back example images

* modify paths

* modify paths part 2

* correct paths in each model's readme

.gitattributes CHANGED
@@ -13,4 +13,10 @@
 *.weights filter=lfs diff=lfs merge=lfs -text
 
 # ONNX
-*.onnx filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+
+# Images
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.gif filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.webp filter=lfs diff=lfs merge=lfs -text
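
For reference, entries like the ones added above are what `git lfs track` writes into `.gitattributes`. A minimal sketch of the equivalent commands (assuming Git LFS is installed and the commands are run from the repository root):

```bash
# Each command appends a matching "filter=lfs diff=lfs merge=lfs -text" line to .gitattributes.
git lfs track "*.jpg"
git lfs track "*.gif"
git lfs track "*.png"
git lfs track "*.webp"

# Stage the updated .gitattributes so the new patterns apply to files added afterwards.
git add .gitattributes
```
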
README.md CHANGED
@@ -50,61 +50,61 @@ Some examples are listed below. You can find more in the directory of each model
 
 ### Face Detection with [YuNet](./models/face_detection_yunet/)
 
-![largest selfie](./models/face_detection_yunet/examples/largest_selfie.jpg)
+![largest selfie](./models/face_detection_yunet/example_outputs/largest_selfie.jpg)
 
 ### Facial Expression Recognition with [Progressive Teacher](./models/facial_expression_recognition/)
 
-![fer demo](./models/facial_expression_recognition/examples/selfie.jpg)
+![fer demo](./models/facial_expression_recognition/example_outputs/selfie.jpg)
 
 ### Human Segmentation with [PP-HumanSeg](./models/human_segmentation_pphumanseg/)
 
-![messi](./models/human_segmentation_pphumanseg/examples/messi.jpg)
+![messi](./models/human_segmentation_pphumanseg/example_outputs/messi.jpg)
 
 ### License Plate Detection with [LPD_YuNet](./models/license_plate_detection_yunet/)
 
-![license plate detection](./models/license_plate_detection_yunet/examples/lpd_yunet_demo.gif)
+![license plate detection](./models/license_plate_detection_yunet/example_outputs/lpd_yunet_demo.gif)
 
 ### Object Detection with [NanoDet](./models/object_detection_nanodet/) & [YOLOX](./models/object_detection_yolox/)
 
-![nanodet demo](./models/object_detection_nanodet/samples/1_res.jpg)
+![nanodet demo](./models/object_detection_nanodet/example_outputs/1_res.jpg)
 
-![yolox demo](./models/object_detection_yolox/samples/3_res.jpg)
+![yolox demo](./models/object_detection_yolox/example_outputs/3_res.jpg)
 
 ### Object Tracking with [DaSiamRPN](./models/object_tracking_dasiamrpn/)
 
-![webcam demo](./models/object_tracking_dasiamrpn/examples/dasiamrpn_demo.gif)
+![webcam demo](./models/object_tracking_dasiamrpn/example_outputs/dasiamrpn_demo.gif)
 
 ### Palm Detection with [MP-PalmDet](./models/palm_detection_mediapipe/)
 
-![palm det](./models/palm_detection_mediapipe/examples/mppalmdet_demo.gif)
+![palm det](./models/palm_detection_mediapipe/example_outputs/mppalmdet_demo.gif)
 
 ### Hand Pose Estimation with [MP-HandPose](models/handpose_estimation_mediapipe/)
 
-![handpose estimation](models/handpose_estimation_mediapipe/examples/mphandpose_demo.webp)
+![handpose estimation](models/handpose_estimation_mediapipe/example_outputs/mphandpose_demo.webp)
 
 ### Person Detection with [MP-PersonDet](./models/person_detection_mediapipe)
 
-![person det](./models/person_detection_mediapipe/examples/mppersondet_demo.webp)
+![person det](./models/person_detection_mediapipe/example_outputs/mppersondet_demo.webp)
 
 ### Pose Estimation with [MP-Pose](models/pose_estimation_mediapipe)
 
-![pose_estimation](models/pose_estimation_mediapipe/examples/mpposeest_demo.webp)
+![pose_estimation](models/pose_estimation_mediapipe/example_outputs/mpposeest_demo.webp)
 
 ### QR Code Detection and Parsing with [WeChatQRCode](./models/qrcode_wechatqrcode/)
 
-![qrcode](./models/qrcode_wechatqrcode/examples/wechat_qrcode_demo.gif)
+![qrcode](./models/qrcode_wechatqrcode/example_outputs/wechat_qrcode_demo.gif)
 
 ### Chinese Text detection [DB](./models/text_detection_db/)
 
-![mask](./models/text_detection_db/examples/mask.jpg)
+![mask](./models/text_detection_db/example_outputs/mask.jpg)
 
 ### English Text detection [DB](./models/text_detection_db/)
 
-![gsoc](./models/text_detection_db/examples/gsoc.jpg)
+![gsoc](./models/text_detection_db/example_outputs/gsoc.jpg)
 
 ### Text Detection with [CRNN](./models/text_recognition_crnn/)
 
-![crnn_demo](./models/text_recognition_crnn/examples/CRNNCTC.gif)
+![crnn_demo](./models/text_recognition_crnn/example_outputs/CRNNCTC.gif)
 
 ## License
 
models/face_detection_yunet/README.md CHANGED
@@ -53,9 +53,9 @@ cmake --build build
 
 ### Example outputs
 
-![webcam demo](./examples/yunet_demo.gif)
+![webcam demo](./example_outputs/yunet_demo.gif)
 
-![largest selfie](./examples/largest_selfie.jpg)
+![largest selfie](./example_outputs/largest_selfie.jpg)
 
 ## License
 
models/facial_expression_recognition/README.md CHANGED
@@ -29,7 +29,7 @@ python demo.py --input /path/to/image -v
 
 Note: Zoom in to see the recognized facial expression in the top-left corner of each face box.
 
-![fer demo](./examples/selfie.jpg)
+![fer demo](./example_outputs/selfie.jpg)
 
 ## License
 
models/handpose_estimation_mediapipe/README.md CHANGED
@@ -2,7 +2,7 @@
 
 This model estimates 21 hand keypoints per detected hand from [palm detector](../palm_detection_mediapipe). (The image below is referenced from [MediaPipe Hands Keypoints](https://github.com/tensorflow/tfjs-models/tree/master/hand-pose-detection#mediapipe-hands-keypoints-used-in-mediapipe-hands))
 
-![MediaPipe Hands Keypoints](./examples/hand_keypoints.png)
+![MediaPipe Hands Keypoints](./example_outputs/hand_keypoints.png)
 
 This model is converted from TFLite to ONNX using the following tools:
 - TFLite model to ONNX: https://github.com/onnx/tensorflow-onnx
@@ -24,7 +24,7 @@ python demo.py -i /path/to/image -v
 
 ### Example outputs
 
-![webcam demo](./examples/mphandpose_demo.webp)
+![webcam demo](./example_outputs/mphandpose_demo.webp)
 
 ## License
 
models/human_segmentation_pphumanseg/README.md CHANGED
@@ -18,9 +18,9 @@ python demo.py --help
 
 ### Example outputs
 
-![webcam demo](./examples/pphumanseg_demo.gif)
+![webcam demo](./example_outputs/pphumanseg_demo.gif)
 
-![messi](./examples/messi.jpg)
+![messi](./example_outputs/messi.jpg)
 
 ---
 Results of accuracy evaluation with [tools/eval](../../tools/eval).
models/license_plate_detection_yunet/README.md CHANGED
@@ -19,7 +19,7 @@ python demo.py --help
 
 ### Example outputs
 
-![lpd](./examples/lpd_yunet_demo.gif)
+![lpd](./example_outputs/lpd_yunet_demo.gif)
 
 ## License
 
models/object_detection_nanodet/README.md CHANGED
@@ -22,13 +22,13 @@ Note:
 
 Here are some of the sample results that were observed using the model,
 
-![test1_res.jpg](./samples/1_res.jpg)
-![test2_res.jpg](./samples/2_res.jpg)
+![test1_res.jpg](./example_outputs/1_res.jpg)
+![test2_res.jpg](./example_outputs/2_res.jpg)
 
 Check [benchmark/download_data.py](../../benchmark/download_data.py) for the original images.
 
 Video inference result,
-![WebCamR.gif](./samples/WebCamR.gif)
+![WebCamR.gif](./example_outputs/WebCamR.gif)
 
 ## Model metrics:
 
models/object_detection_yolox/README.md CHANGED
@@ -29,9 +29,9 @@ Note:
 
 Here are some of the sample results that were observed using the model (**yolox_s.onnx**),
 
-![1_res.jpg](./samples/1_res.jpg)
-![2_res.jpg](./samples/2_res.jpg)
-![3_res.jpg](./samples/3_res.jpg)
+![1_res.jpg](./example_outputs/1_res.jpg)
+![2_res.jpg](./example_outputs/2_res.jpg)
+![3_res.jpg](./example_outputs/3_res.jpg)
 
 Check [benchmark/download_data.py](../../benchmark/download_data.py) for the original images.
 
models/object_tracking_dasiamrpn/README.md CHANGED
@@ -23,7 +23,7 @@ python demo.py --help
 
 ### Example outputs
 
-![webcam demo](./examples/dasiamrpn_demo.gif)
+![webcam demo](./example_outputs/dasiamrpn_demo.gif)
 
 ## License
 
models/palm_detection_mediapipe/README.md CHANGED
@@ -26,7 +26,7 @@ python demo.py --help
 
 ### Example outputs
 
-![webcam demo](./examples/mppalmdet_demo.gif)
+![webcam demo](./example_outputs/mppalmdet_demo.gif)
 
 ## License
 
models/person_detection_mediapipe/README.md CHANGED
@@ -23,7 +23,7 @@ python demo.py --help
 
 ### Example outputs
 
-![webcam demo](examples/mppersondet_demo.webp)
+![webcam demo](./example_outputs/mppersondet_demo.webp)
 
 ## License
 
models/pose_estimation_mediapipe/README.md CHANGED
@@ -22,7 +22,7 @@ python demo.py -i /path/to/image -v
 
 ### Example outputs
 
-![webcam demo](examples/mpposeest_demo.webp)
+![webcam demo](./example_outputs/mpposeest_demo.webp)
 
 ## License
 
models/qrcode_wechatqrcode/README.md CHANGED
@@ -23,7 +23,7 @@ python demo.py --help
 
 ### Example outputs
 
-![webcam demo](./examples/wechat_qrcode_demo.gif)
+![webcam demo](./example_outputs/wechat_qrcode_demo.gif)
 
 ## License
 
models/text_detection_db/README.md CHANGED
@@ -25,9 +25,9 @@ python demo.py --help
 
 ### Example outputs
 
-![mask](./examples/mask.jpg)
+![mask](./example_outputs/mask.jpg)
 
-![gsoc](./examples/gsoc.jpg)
+![gsoc](./example_outputs/gsoc.jpg)
 
 ## License
 
models/text_recognition_crnn/README.md CHANGED
@@ -62,9 +62,9 @@ python demo.py --help
 
 ### Examples
 
-![CRNNCTC](./examples/CRNNCTC.gif)
+![CRNNCTC](./example_outputs/CRNNCTC.gif)
 
-![demo](./examples/demo.jpg)
+![demo](./example_outputs/demo.jpg)
 
 ## License
 
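
Once the images are re-added under the new LFS rules, it is easy to confirm they are stored as LFS pointers rather than regular blobs. A quick check, assuming Git LFS is installed locally:

```bash
# List every file currently stored as an LFS object; the re-added images should appear here.
git lfs ls-files

# Confirm that an image path from this commit resolves to the lfs filter.
git check-attr filter -- models/face_detection_yunet/example_outputs/largest_selfie.jpg
# Expected output ends with: "filter: lfs"
```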