# Accuracy evaluation of models in OpenCV Zoo
Make sure you have the following packages installed:
```shell
pip install tqdm
pip install scikit-learn
pip install scipy==1.8.1
```
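To verify the environment before running any evaluation, a quick import check can be used (a minimal sketch; the `scipy==1.8.1` pin comes from the requirements above):
```python
# Sanity check: confirm the required packages import and report their versions.
import tqdm
import sklearn
import scipy

print("tqdm:", tqdm.__version__)
print("scikit-learn:", sklearn.__version__)
print("scipy:", scipy.__version__)  # expected: 1.8.1, per the pin above
```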
Generally speaking, evaluation can be done with the following command:
```shell
python eval.py -m model_name -d dataset_name -dr dataset_root_dir
```
Supported datasets:
- [ImageNet](#imagenet)
- [WIDERFace](#widerface)
- [LFW](#lfw)
- [ICDAR](#icdar2003)
- [IIIT5K](#iiit5k)
- [Mini Supervisely](#mini-supervisely)
## ImageNet
### Prepare data
Please visit https://image-net.org/ to download the ImageNet dataset (only the images in `ILSVRC/Data/CLS-LOC/val` are needed) and [the labels from caffe](http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz). Organize the files as follows:
```shell
$ tree -L 2 /path/to/imagenet
.
├── caffe_ilsvrc12
│   ├── det_synset_words.txt
│   ├── imagenet.bet.pickle
│   ├── imagenet_mean.binaryproto
│   ├── synsets.txt
│   ├── synset_words.txt
│   ├── test.txt
│   ├── train.txt
│   └── val.txt
├── caffe_ilsvrc12.tar.gz
├── ILSVRC
│   ├── Annotations
│   ├── Data
│   └── ImageSets
├── imagenet_object_localization_patched2019.tar.gz
├── LOC_sample_submission.csv
├── LOC_synset_mapping.txt
├── LOC_train_solution.csv
└── LOC_val_solution.csv
```
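For reference, `caffe_ilsvrc12/val.txt` maps each validation image to its class index, one `filename label` pair per line (an assumption based on the standard caffe_ilsvrc12 release); a minimal loader sketch:
```python
# Minimal sketch: load the caffe-style validation labels.
# Assumes each line of val.txt is "<image_name> <class_index>",
# as in the standard caffe_ilsvrc12 release.
from pathlib import Path

def load_val_labels(imagenet_root):
    labels = {}
    with open(Path(imagenet_root) / "caffe_ilsvrc12" / "val.txt") as f:
        for line in f:
            name, idx = line.split()
            labels[name] = int(idx)
    return labels

labels = load_val_labels("/path/to/imagenet")
print(len(labels), "validation labels")
```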
### Evaluation
Run evaluation with the following command:
```shell
python eval.py -m mobilenet -d imagenet -dr /path/to/imagenet
```
## WIDERFace
The evaluation script is adapted from [WiderFace-Evaluation](https://github.com/wondervictor/WiderFace-Evaluation).
### Prepare data
Please visit http://shuoyang1213.me/WIDERFACE to download the WIDERFace dataset: [Validation Images](https://huggingface.co/datasets/wider_face/resolve/main/data/WIDER_val.zip), [Face annotations](http://shuoyang1213.me/WIDERFACE/support/bbx_annotation/wider_face_split.zip) and [eval_tools](http://shuoyang1213.me/WIDERFACE/support/eval_script/eval_tools.zip). Organize the files as follows:
```shell
$ tree -L 2 /path/to/widerface
.
├── eval_tools
│   ├── boxoverlap.m
│   ├── evaluation.m
│   ├── ground_truth
│   ├── nms.m
│   ├── norm_score.m
│   ├── plot
│   ├── read_pred.m
│   └── wider_eval.m
├── wider_face_split
│   ├── readme.txt
│   ├── wider_face_test_filelist.txt
│   ├── wider_face_test.mat
│   ├── wider_face_train_bbx_gt.txt
│   ├── wider_face_train.mat
│   ├── wider_face_val_bbx_gt.txt
│   └── wider_face_val.mat
└── WIDER_val
    └── images
```
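The ground truth in `wider_face_split/wider_face_val_bbx_gt.txt` interleaves an image path, a face count, and one line per face (the per-face attribute columns are an assumption based on the published WIDERFace annotation format); a minimal parser sketch:
```python
# Minimal sketch: parse wider_face_val_bbx_gt.txt.
# Assumed layout per entry (standard WIDERFace annotation format):
#   <relative image path>
#   <number of faces>
#   x y w h blur expression illumination invalid occlusion pose  (one line per face)
def parse_wider_gt(path):
    entries = {}
    with open(path) as f:
        lines = iter(f)
        for image_path in lines:
            image_path = image_path.strip()
            if not image_path:
                continue
            n_faces = int(next(lines))
            boxes = []
            # Entries with 0 faces still carry one placeholder line.
            for _ in range(max(n_faces, 1)):
                values = next(lines).split()
                boxes.append([int(v) for v in values[:4]])  # keep x, y, w, h
            entries[image_path] = boxes if n_faces > 0 else []
    return entries

gt = parse_wider_gt("/path/to/widerface/wider_face_split/wider_face_val_bbx_gt.txt")
print(len(gt), "annotated images")
```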
### Evaluation
Run evaluation with the following command:
```shell
python eval.py -m yunet -d widerface -dr /path/to/widerface
```
## LFW
The evaluation script is adapted from the [LFW evaluation in InsightFace](https://github.com/deepinsight/insightface/blob/f92bf1e48470fdd567e003f196f8ff70461f7a20/src/eval/lfw.py).
This evaluation uses [YuNet](../../models/face_detection_yunet) as the face detector. The structure of the face bounding boxes saved in [lfw_face_bboxes.npy](../eval/datasets/lfw_face_bboxes.npy) is shown below; each row holds the bounding box of the main face used in each image.
```shell
[
[x, y, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt, x_rcm, y_rcm, x_lcm, y_lcm],
...
[x, y, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt, x_rcm, y_rcm, x_lcm, y_lcm]
]
```
`x, y, w, h` are the top-left coordinates, width and height of the face bounding box; `{x, y}_{re, le, nt, rcm, lcm}` stand for the coordinates of the right eye, left eye, nose tip, and the right and left corners of the mouth, respectively. The data type of this NumPy array is `np.float32`.
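To inspect these precomputed boxes directly (a minimal sketch; the field order follows the description above, and the path is assumed relative to the repository root):
```python
# Minimal sketch: load the precomputed LFW face boxes and unpack one row.
import numpy as np

bboxes = np.load("tools/eval/datasets/lfw_face_bboxes.npy")  # path assumed
print(bboxes.shape, bboxes.dtype)  # expected: (N, 14) float32

x, y, w, h = bboxes[0, :4]               # top-left corner, width, height
landmarks = bboxes[0, 4:].reshape(5, 2)  # right eye, left eye, nose tip,
                                         # right and left mouth corners
print(x, y, w, h)
print(landmarks)
```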
### Prepare data
Please visit http://vis-www.cs.umass.edu/lfw to download [all LFW images](http://vis-www.cs.umass.edu/lfw/lfw.tgz) (decompress the archive) and [pairs.txt](http://vis-www.cs.umass.edu/lfw/pairs.txt) (place it in the `view2` folder). Organize the files as follows:
```shell
$ tree -L 2 /path/to/lfw
.
├── lfw
│   ├── Aaron_Eckhart
│   ├── ...
│   └── Zydrunas_Ilgauskas
└── view2
    └── pairs.txt
```
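`pairs.txt` lists matched pairs as `name idx1 idx2` and mismatched pairs as `name1 idx1 name2 idx2` (an assumption based on the published LFW View 2 protocol); a minimal parser sketch:
```python
# Minimal sketch: parse LFW View 2 pairs.txt.
# Assumed format (standard LFW protocol):
#   header line:     <number of folds> <pairs per fold>
#   matched pair:    <name> <img_idx1> <img_idx2>
#   mismatched pair: <name1> <img_idx1> <name2> <img_idx2>
def parse_pairs(path):
    matched, mismatched = [], []
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            parts = line.split()
            if len(parts) == 3:
                matched.append((parts[0], int(parts[1]), int(parts[2])))
            elif len(parts) == 4:
                mismatched.append((parts[0], int(parts[1]), parts[2], int(parts[3])))
    return matched, mismatched

matched, mismatched = parse_pairs("/path/to/lfw/view2/pairs.txt")
print(len(matched), "matched,", len(mismatched), "mismatched")
```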
### Evaluation
Run evaluation with the following command:
```shell
python eval.py -m sface -d lfw -dr /path/to/lfw
```
## ICDAR2003
### Prepare data
Please visit http://iapr-tc11.org/mediawiki/index.php/ICDAR_2003_Robust_Reading_Competitions to download the ICDAR2003 dataset and its labels. Only the Robust Word Recognition [TrialTrain Set](http://www.iapr-tc11.org/dataset/ICDAR2003_RobustReading/TrialTrain/word.zip) is needed. Organize the files as follows:
```shell
$ tree -L 2 /path/to/icdar
.
├── word
│   ├── 1
│   │   ├── self
│   │   ├── ...
│   │   └── willcooks
│   ├── ...
│   └── 12
└── word.xml
```
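`word.xml` pairs each cropped word image with its ground-truth text via `<image file="..." tag="..."/>` entries (an assumption based on the ICDAR2003 word-recognition release); a minimal parser sketch:
```python
# Minimal sketch: read ICDAR2003 word-recognition labels from word.xml.
# Assumes entries of the form <image file="1/self/001.jpg" tag="Self" />.
import xml.etree.ElementTree as ET

def load_icdar_labels(xml_path):
    root = ET.parse(xml_path).getroot()
    return {img.get("file"): img.get("tag") for img in root.iter("image")}

labels = load_icdar_labels("/path/to/icdar/word.xml")
print(len(labels), "word images")
```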
### Evaluation
Run evaluation with the following command:
```shell
python eval.py -m crnn -d icdar -dr /path/to/icdar
```
### Example
```shell
# Download the zip file
wget http://www.iapr-tc11.org/dataset/ICDAR2003_RobustReading/TrialTrain/word.zip
# Unzip it to the dataset root
unzip word.zip -d /path/to/icdar
# Run the evaluation
python eval.py -m crnn -d icdar -dr /path/to/icdar
```
## IIIT5K
### Prepare data
Please visit https://github.com/cv-small-snails/Text-Recognition-Material to download the IIIT5K dataset and the labels.
### Evaluation
Any dataset in LMDB format can be evaluated with this script.<br>
Run evaluation with the following command:
```shell
python eval.py -m crnn -d iiit5k -dr /path/to/iiit5k
```
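For reference, text-recognition LMDB datasets conventionally store samples under `image-%09d` and `label-%09d` keys with a `num-samples` count (an assumption based on the common CRNN-style convention; requires the `lmdb` package); a minimal reader sketch:
```python
# Minimal sketch: iterate a CRNN-style LMDB text-recognition dataset.
# Assumes the common key convention: b"num-samples", b"image-%09d", b"label-%09d".
import lmdb

env = lmdb.open("/path/to/iiit5k", readonly=True, lock=False)
with env.begin() as txn:
    n = int(txn.get(b"num-samples"))
    print(n, "samples")
    for i in range(1, min(n, 3) + 1):             # indices are 1-based by convention
        label = txn.get(b"label-%09d" % i).decode()
        image_bytes = txn.get(b"image-%09d" % i)  # encoded image (e.g. JPEG) bytes
        print(i, label, len(image_bytes), "bytes")
```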
## Mini Supervisely
### Prepare data
Please download the Mini Supervisely data, which includes the validation dataset, from [here](https://paddleseg.bj.bcebos.com/humanseg/data/mini_supervisely.zip) and unzip it. Organize the files as follows:
```shell
$ tree -L 2 /path/to/mini_supervisely
.
├── Annotations
│   ├── ache-adult-depression-expression-41253.png
│   └── ...
├── Images
│   ├── ache-adult-depression-expression-41253.jpg
│   └── ...
├── test.txt
├── train.txt
└── val.txt
```
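`val.txt` lists one `image annotation` path pair per line, relative to the dataset root (an assumption based on the common PaddleSeg split-file layout); a minimal loader sketch:
```python
# Minimal sketch: read the validation split file.
# Assumes each line is "<image path> <annotation path>", relative to the root,
# e.g. "Images/xxx.jpg Annotations/xxx.png".
from pathlib import Path

root = Path("/path/to/mini_supervisely")
pairs = []
with open(root / "val.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) != 2:
            continue
        image_rel, mask_rel = parts
        pairs.append((root / image_rel, root / mask_rel))

print(len(pairs), "validation image/mask pairs")
```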
### Evaluation
Run evaluation with the following command:
```shell
python eval.py -m pphumanseg -d mini_supervisely -dr /path/to/mini_supervisely
```
Run evaluation of the quantized model with the following command:
```shell
python eval.py -m pphumanseg_q -d mini_supervisely -dr /path/to/mini_supervisely
``` |