ONNX
Satyam Goyal committed on
Commit
850fa1e
·
1 Parent(s): 52e3f44

Merge pull request #95 from Satgoy152:adding-doc

Improved help messages for demo programs (#95)
- Added Demo Documentation
- Updated help messages
- Changed exception link

Files changed (2)
  1. README.md +10 -7
  2. demo.py +4 -4
README.md CHANGED

@@ -6,23 +6,27 @@ MobileNetV2: Inverted Residuals and Linear Bottlenecks
 
 Results of accuracy evaluation with [tools/eval](../../tools/eval).
 
-| Models | Top-1 Accuracy | Top-5 Accuracy |
-| ------ | -------------- | -------------- |
-| MobileNet V1 | 67.64 | 87.97 |
-| MobileNet V1 quant | 55.53 | 78.74 |
-| MobileNet V2 | 69.44 | 89.23 |
-| MobileNet V2 quant | 68.37 | 88.56 |
+| Models             | Top-1 Accuracy | Top-5 Accuracy |
+| ------------------ | -------------- | -------------- |
+| MobileNet V1       | 67.64          | 87.97          |
+| MobileNet V1 quant | 55.53          | 78.74          |
+| MobileNet V2       | 69.44          | 89.23          |
+| MobileNet V2 quant | 68.37          | 88.56          |
 
 \*: 'quant' stands for 'quantized'.
 
 ## Demo
 
 Run the following command to try the demo:
+
 ```shell
 # MobileNet V1
 python demo.py --input /path/to/image
 # MobileNet V2
 python demo.py --input /path/to/image --model v2
+
+# get help regarding various parameters
+python demo.py --help
 ```
 
 ## License
@@ -35,4 +39,3 @@ All files in this directory are licensed under [Apache 2.0 License](./LICENSE).
 - MobileNet V2: https://arxiv.org/abs/1801.04381
 - MobileNet V1 weight and scripts for training: https://github.com/wjc852456/pytorch-mobilenet-v1
 - MobileNet V2 weight: https://github.com/onnx/models/tree/main/vision/classification/mobilenet
-
demo.py CHANGED

@@ -24,14 +24,14 @@ try:
     help_msg_backends += "; {:d}: TIMVX"
     help_msg_targets += "; {:d}: NPU"
 except:
-    print('This version of OpenCV does not support TIM-VX and NPU. Visit https://gist.github.com/fengyuentau/5a7a5ba36328f2b763aea026c43fa45f for more information.')
+    print('This version of OpenCV does not support TIM-VX and NPU. Visit https://github.com/opencv/opencv/wiki/TIM-VX-Backend-For-Running-OpenCV-On-NPU for more information.')
 
 parser = argparse.ArgumentParser(description='Demo for MobileNet V1 & V2.')
-parser.add_argument('--input', '-i', type=str, help='Path to the input image.')
-parser.add_argument('--model', '-m', type=str, choices=['v1', 'v2', 'v1-q', 'v2-q'], default='v1', help='Which model to use, either v1 or v2.')
+parser.add_argument('--input', '-i', type=str, help='Usage: Set input path to a certain image, omit if using camera.')
+parser.add_argument('--model', '-m', type=str, choices=['v1', 'v2', 'v1-q', 'v2-q'], default='v1', help='Usage: Set model type, defaults to image_classification_mobilenetv1_2022apr.onnx (v1).')
 parser.add_argument('--backend', '-b', type=int, default=backends[0], help=help_msg_backends.format(*backends))
 parser.add_argument('--target', '-t', type=int, default=targets[0], help=help_msg_targets.format(*targets))
-parser.add_argument('--label', '-l', type=str, default='./imagenet_labels.txt', help='Path to the dataset labels.')
+parser.add_argument('--label', '-l', type=str, default='./imagenet_labels.txt', help='Usage: Set path to the different labels that will be used during the detection. Default list found in imagenet_labels.txt')
 args = parser.parse_args()
 
 if __name__ == '__main__':
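For reference, the updated `--input`, `--model`, and `--label` definitions can be exercised in isolation with a minimal sketch. The `--backend` and `--target` options are omitted here because their values depend on the installed OpenCV build; the sample command line (`image.jpg`) is illustrative only.

```python
import argparse

# Minimal sketch of the parser as updated in this PR; --backend/--target
# are left out since their defaults come from the OpenCV build in use.
parser = argparse.ArgumentParser(description='Demo for MobileNet V1 & V2.')
parser.add_argument('--input', '-i', type=str,
                    help='Usage: Set input path to a certain image, omit if using camera.')
parser.add_argument('--model', '-m', type=str,
                    choices=['v1', 'v2', 'v1-q', 'v2-q'], default='v1',
                    help='Usage: Set model type, defaults to image_classification_mobilenetv1_2022apr.onnx (v1).')
parser.add_argument('--label', '-l', type=str, default='./imagenet_labels.txt',
                    help='Usage: Set path to the different labels that will be used during the detection. Default list found in imagenet_labels.txt')

# Parse a sample command line instead of sys.argv so the sketch is self-contained.
args = parser.parse_args(['--input', 'image.jpg', '--model', 'v2'])
print(args.model)   # -> v2
print(args.label)   # -> ./imagenet_labels.txt
```

Because `choices` is set on `--model`, an unknown value such as `--model v3` makes `parse_args` exit with an error, and `--help` prints the new `Usage:` strings shown in the diff.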