UniversalAlgorithmic committed · verified
Commit 72e75ab · Parent: 96edf87

Upload README.md

Files changed (1): README.md (+56, -36)

# SPG: Sequential Policy Gradient for Adaptive Hyperparameter Optimization

> 🚀 If you're using Jupyter or Colab, you can follow the demo and run it on a single GPU:
> - Colab Notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https%3A//huggingface.co/UniversalAlgorithmic/SPG/blob/main/demo_nas.ipynb)

## Model Zoo: Adaptive Hyperparameter Optimization (HPO) via SPG Algorithm

| Model | SPG | # Params | Acc@1 (%) | Acc@5 (%) | Weights | Command to reproduce |
|-------|------|----------|-----------|-----------|---------|----------------------|
| MobileNet-V2 | ❌ | 3.5 M | 71.878 | 90.286 | <a href='https://download.pytorch.org/models/mobilenet_v2-b0353104.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#mobilenetv2'>Recipe</a> |
| MobileNet-V2 | ✅ | 3.5 M | 72.104 | 90.316 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/mobilenetv2/model_32.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/mobilenet_v2-yellow'></a> | [examples/image-classification/run.sh](#retrain-model-on-imagenet-1k) |
| ResNet-50 | ❌ | 25.6 M | 76.130 | 92.862 | <a href='https://download.pytorch.org/models/resnet50-0676ba61.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
| ResNet-50 | ✅ | 25.6 M | 77.234 | 93.322 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/resnet50/model_35.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet50-yellow'></a> | [examples/image-classification/run.sh](#retrain-model-on-imagenet-1k) |
| EfficientNet-V2-M | ❌ | 54.1 M | 85.112 | 97.156 | <a href='https://download.pytorch.org/models/efficientnet_v2_m-dc08266a.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#efficientnet-v2'>Recipe</a> |
| EfficientNet-V2-M | ✅ | 54.1 M | 85.218 | 97.208 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/efficientnet_v2_m/model_7.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/efficientnet_v2_m-yellow'></a> | [examples/image-classification/run.sh](#retrain-model-on-imagenet-1k) |
| ViT-B16 | ❌ | 86.6 M | 81.072 | 95.318 | <a href='https://download.pytorch.org/models/vit_b_16-c867db91.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#vit_b_16'>Recipe</a> |
| ViT-B16 | ✅ | 86.6 M | 81.092 | 95.304 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/vit_b_16/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/vit_b_16-yellow'></a> | [examples/image-classification/run.sh](#retrain-model-on-imagenet-1k) |

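The SPG checkpoints above can be pulled straight from the Hub. Below is a minimal sketch using `huggingface_hub` and `torchvision`; the checkpoint layout (a dict with a `"model"` state dict, as saved by the torchvision reference scripts) is an assumption, not a documented contract.

```python
# Minimal sketch: fetch an SPG checkpoint and load it into a torchvision model.
# Assumptions: the file path matches the badge link above, and the checkpoint
# stores the weights under the "model" key (torchvision reference-script format).
import torch
import torchvision
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="UniversalAlgorithmic/SPG",
    filename="examples/image-classification/mobilenetv2/model_32.pth",
)
ckpt = torch.load(ckpt_path, map_location="cpu")
model = torchvision.models.mobilenet_v2()
model.load_state_dict(ckpt["model"])  # assumed checkpoint layout
model.eval()
```
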
`Table 2: Performance of pre-trained vs. SPG-retrained models. All models are evaluated on a subset of COCO val2017, covering the 21 categories (including "background") that are present in the Pascal VOC dataset.`

> ⚠️ `All models reported by TorchVision (with the COCO_WITH_VOC_LABELS_V1 weights) were benchmarked using only 20 categories. Researchers should first download the pre-trained models from TorchVision and re-evaluate them under the 21-category framework.`

| Model | SPG | # Params | mIoU (%) | Pixelwise Acc (%) | Weights | Command to reproduce |
|---------------------|-----|----------|------------|---------------------|---------|----------------------|
| FCN-ResNet50 | ❌ | 35.3 M | 58.9 | 90.9 | <a href='https://download.pytorch.org/models/fcn_resnet50_coco-1167a1af.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/segmentation#fcn_resnet50'>Recipe</a> |
| FCN-ResNet50 | ✅ | 35.3 M | 59.4 | 90.9 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/semantic-segmentation/fcn_resnet50/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/fcn_resnet50-yellow'></a> | [examples/semantic-segmentation/run.sh](#retrain-model-on-ms-coco-2017) |
| FCN-ResNet101 | ❌ | 54.3 M | 62.2 | 91.1 | <a href='https://download.pytorch.org/models/fcn_resnet101_coco-7ecb50ca.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/segmentation#fcn_resnet101'>Recipe</a> |
| FCN-ResNet101 | ✅ | 54.3 M | 62.4 | 91.1 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/semantic-segmentation/fcn_resnet101/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/fcn_resnet101-yellow'></a> | [examples/semantic-segmentation/run.sh](#retrain-model-on-ms-coco-2017) |
| DeepLabV3-ResNet50 | ❌ | 42.0 M | 63.8 | 91.5 | <a href='https://download.pytorch.org/models/deeplabv3_resnet50_coco-cd0a2569.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/segmentation#deeplabv3_resnet50'>Recipe</a> |
| DeepLabV3-ResNet50 | ✅ | 42.0 M | 64.2 | 91.6 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/semantic-segmentation/deeplabv3_resnet50/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/deeplabv3_resnet50-yellow'></a> | [examples/semantic-segmentation/run.sh](#retrain-model-on-ms-coco-2017) |
| DeepLabV3-ResNet101 | ❌ | 61.0 M | 65.3 | 91.7 | <a href='https://download.pytorch.org/models/deeplabv3_resnet101_coco-586e9e4e.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/segmentation#deeplabv3_resnet101'>Recipe</a> |
| DeepLabV3-ResNet101 | ✅ | 61.0 M | 65.7 | 91.8 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/semantic-segmentation/deeplabv3_resnet101/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/deeplabv3_resnet101-yellow'></a> | [examples/semantic-segmentation/run.sh](#retrain-model-on-ms-coco-2017) |

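As the warning above suggests, the TorchVision baselines can be reloaded and re-scored under the 21-category protocol. A minimal sketch of the model-loading step (the evaluation loop itself is the `--test-only` command shown later):

```python
# Minimal sketch: reload a TorchVision segmentation baseline for re-evaluation.
# The weights enum is TorchVision's; the 21-class head (20 VOC classes plus
# "background") comes with COCO_WITH_VOC_LABELS_V1.
import torchvision

weights = torchvision.models.segmentation.FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1
model = torchvision.models.segmentation.fcn_resnet50(weights=weights, aux_loss=True)
model.eval()
print(model.classifier[-1].out_channels)  # 21 categories, including "background"
```
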
`Table 3: Performance comparison of fine-tuned vs. SPG-retrained models across NLP and speech benchmarks.`

- GLUE (Text classification: BERT on the CoLA, SST-2, MRPC, QQP, QNLI, and RTE tasks)
- SQuAD (Question answering: BERT)
- SUPERB (Speech classification: Wav2Vec2 for Audio Classification (AC))

| Task | SPG | Metric Type | Performance (%) | Weights | Command to reproduce |
|-------|------|-------------------|-----------------|---------|----------------------|
| CoLA | ❌ | Matthews corr. | 56.53 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| CoLA | ✅ | Matthews corr. | 62.13 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/cola'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/CoLA-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| SST-2 | ❌ | Accuracy | 92.32 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| SST-2 | ✅ | Accuracy | 92.54 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/sst2'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/SST2-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| MRPC | ❌ | F1/Accuracy | 88.85/84.09 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| MRPC | ✅ | F1/Accuracy | 91.10/87.25 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/mrpc'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/MRPC-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| QQP | ❌ | F1/Accuracy | 87.49/90.71 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| QQP | ✅ | F1/Accuracy | 89.72/90.88 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/qqp'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/QQP-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| QNLI | ❌ | Accuracy | 90.66 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| QNLI | ✅ | Accuracy | 91.10 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/qnli'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/QNLI-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| RTE | ❌ | Accuracy | 65.70 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| RTE | ✅ | Accuracy | 72.56 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/rte'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/RTE-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| Q/A* | ❌ | F1/Exact match | 88.52/81.22 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-question_answering-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering#fine-tuning-bert-on-squad10'>Recipe</a> |
| Q/A* | ✅ | F1/Exact match | 88.67/81.51 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/question-answering/qa'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/QA-yellow'></a> | [examples/question-answering/run.sh](#transfer-learning-on-squad) |
| AC† | ❌ | Accuracy | 98.26 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-audio_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification#single-gpu'>Recipe</a> |
| AC† | ✅ | Accuracy | 98.31 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/audio-classification/ac'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/AC-yellow'></a> | [examples/audio-classification/run.sh](#transfer-learning-on-superb) |

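The transfer-learning checkpoints live as subfolders of the SPG repo (see the badge links above). A minimal sketch of scoring the CoLA checkpoint with the Matthews correlation, assuming each subfolder holds a standard `transformers` sequence-classification checkpoint:

```python
# Minimal sketch: score the SPG CoLA checkpoint on the GLUE validation split.
# Assumption: the subfolder contains a complete transformers checkpoint
# (config + tokenizer + weights), per the badge links above.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import evaluate

repo, sub = "UniversalAlgorithmic/SPG", "examples/text-classification/cola"
tok = AutoTokenizer.from_pretrained(repo, subfolder=sub)
model = AutoModelForSequenceClassification.from_pretrained(repo, subfolder=sub).eval()

metric = evaluate.load("matthews_correlation")
for batch in load_dataset("glue", "cola", split="validation").iter(batch_size=32):
    enc = tok(batch["sentence"], padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1)
    metric.add_batch(predictions=preds, references=batch["label"])
print(metric.compute())  # {"matthews_correlation": ...}
```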
 
## Model Zoo: Neural Architecture Search (NAS) via SPG Algorithm

`Table 4: Performance of pre-trained vs. SPG-retrained models on ImageNet-1K.`

Depending on the base model, we explore the following architectures:
- ResNet-18: ResNet-18, ResNet-27, ResNet-36, ResNet-45
- ResNet-34: ResNet-34, ResNet-40, ResNet-46, ResNet-52
- ResNet-50: ResNet-50, ResNet-53, ResNet-56, ResNet-59

> ⚠️ `Our SPG differs from most NAS algorithms, which typically use a gating network for architecture selection. We employ neither a gating network nor a proxy network. Instead, after policy optimization, we keep only the base architecture (ResNet-18, ResNet-34, or ResNet-50) and remove all the others (ResNet-27/36/45, ResNet-40/46/52, and ResNet-53/56/59).`

| Model | SPG | # Params | Acc@1 (%) | Acc@5 (%) | Weights | Command to reproduce |
|-------|------|----------|-----------|-----------|---------|----------------------|
| ResNet-18 | ❌ | 11.7 M | 69.758 | 89.078 | <a href='https://download.pytorch.org/models/resnet18-f37072fd.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
| ResNet-18 | ✅ | 11.7 M | 70.092 | 89.314 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/neural-archicture-search/resnet18/model_3.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet18-yellow'></a> | [examples/neural-architecture-search/run.sh](#neural-architecture-search-for-resnet-on-imagenet-1k) |
| ResNet-34 | ❌ | 21.8 M | 73.314 | 91.420 | <a href='https://download.pytorch.org/models/resnet34-b627a593.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
| ResNet-34 | ✅ | 21.8 M | 73.900 | 93.536 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/neural-archicture-search/resnet34/model_8.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet34-yellow'></a> | [examples/neural-architecture-search/run.sh](#neural-architecture-search-for-resnet-on-imagenet-1k) |
| ResNet-50 | ❌ | 25.6 M | 76.130 | 92.862 | <a href='https://download.pytorch.org/models/resnet50-0676ba61.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
| ResNet-50 | ✅ | 25.6 M | 77.234 | 93.322 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/neural-archicture-search/resnet50/model_9.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet50-yellow'></a> | [examples/neural-architecture-search/run.sh](#neural-architecture-search-for-resnet-on-imagenet-1k) |

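Because the search retains only the base architecture, the released NAS checkpoints load into stock torchvision models. A quick sketch to confirm that the parameter counts match the baseline column in Table 4:

```python
# Minimal sketch: the NAS-retained networks are plain torchvision ResNets,
# so their parameter counts should match the "# Params" column in Table 4.
import torchvision

for name in ("resnet18", "resnet34", "resnet50"):
    model = getattr(torchvision.models, name)()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f} M")  # ~11.7 M / 21.8 M / 25.6 M
```
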
## Requirements

1. Install `torch>=2.0.0+cu118`.
2. Install the remaining pip packages:
```setup
cd examples
pip install -r requirements.txt
```
  3. Prepare the [ImageNet](http://image-net.org/) dataset manually and place it in `/path/to/imagenet`. For image classification examples, pass the argument `--data-path=/path/to/imagenet` to the training script. The extracted dataset directory should follow this structure:
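A sketch of the expected layout, assuming the standard torchvision `ImageFolder` arrangement (class subfolders named by WordNet ID under `train/` and `val/`):

```
/path/to/imagenet/
├── train/
│   ├── n01440764/
│   │   ├── n01440764_10026.JPEG
│   │   └── ...
│   └── ...
└── val/
    ├── n01440764/
    └── ...
```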
 
## Training

### Retrain model on ImageNet-1K
We use training recipes similar to those in [PyTorch Vision's classification reference](https://github.com/pytorch/vision/blob/main/references/classification/README.md) to retrain MobileNet-V2, ResNet, EfficientNet-V2, and ViT with our SPG on ImageNet-1K. The following command can be used:

```bash
torchrun --nproc_per_node=4 train.py\
 ...\
 --lr-warmup-method constant --lr-warmup-epochs 1 --lr-warmup-decay 0.\
 --apply-trp --trp-depths 2 2 2 --trp-planes 256 --trp-lambdas 0.4 0.2 0.1 --print-freq 100

# During Neural Architecture Search (NAS), we explore ResNet-50, ResNet-53, ResNet-56, and ResNet-59. After retraining with the SPG algorithm, we retain only ResNet-50 and discard the others.
torchrun --nproc_per_node=4 train.py\
 --data-path /home/cs/Documents/datasets/imagenet\
 --model resnet50 --output-dir resnet50 --weights ResNet50_Weights.IMAGENET1K_V1\
 ...
```
 
To evaluate our models on ImageNet, run:

```bash
cd examples/image-classification

# Required: Download our MobileNet-V2 weights to examples/image-classification/mobilenet_v2
torchrun --nproc_per_node=4 train.py\
 --data-path /path/to/imagenet/\
 --model mobilenet_v2 --resume mobilenet_v2/model_32.pth --test-only

# Required: Download our ResNet-50 weights to examples/image-classification/resnet50
torchrun --nproc_per_node=4 train.py\
 --data-path /path/to/imagenet/\
 --model resnet50 --resume resnet50/model_35.pth --test-only

# Required: Download our EfficientNet-V2-M weights to examples/image-classification/efficientnet_v2_m
torchrun --nproc_per_node=4 train.py\
 --data-path /path/to/imagenet/\
 --model efficientnet_v2_m --resume efficientnet_v2_m/model_7.pth --test-only\
 --val-crop-size 480 --val-resize-size 480

# Required: Download our ViT-B-16 weights to examples/image-classification/vit_b_16
torchrun --nproc_per_node=4 train.py\
 --data-path /path/to/imagenet/\
 --model vit_b_16 --resume vit_b_16/model_4.pth --test-only
```

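If you prefer a plain single-process check over `torchrun`, a minimal sketch of the Acc@1 computation is below; the `{"model": ...}` checkpoint layout and the standard 224-px evaluation transforms are assumptions carried over from the torchvision reference scripts.

```python
# Minimal sketch: single-process Acc@1 check for an SPG checkpoint.
# Assumptions: torchvision reference checkpoint layout ({"model": state_dict, ...})
# and the standard ImageNet eval transforms (resize 256, center-crop 224).
import torch
import torchvision
from torchvision import transforms

tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
val = torchvision.datasets.ImageFolder("/path/to/imagenet/val", transform=tf)
loader = torch.utils.data.DataLoader(val, batch_size=64, num_workers=4)

model = torchvision.models.mobilenet_v2()
ckpt = torch.load("mobilenet_v2/model_32.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])  # assumed checkpoint key
model.eval()

correct = total = 0
with torch.no_grad():
    for images, targets in loader:
        correct += (model(images).argmax(dim=1) == targets).sum().item()
        total += targets.numel()
print(f"Acc@1: {100.0 * correct / total:.3f}%")
```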
 
To evaluate our models on COCO, run:

```bash
cd examples/semantic-segmentation

# eval baselines
torchrun --nproc_per_node=4 train.py\
 ...

  # eval our models
# Required: Download our FCN-ResNet50 weights to examples/semantic-segmentation/fcn_resnet50
torchrun --nproc_per_node=4 train.py\
 --workers 4 --dataset coco --data-path /path/to/coco/\
 --model fcn_resnet50 --aux-loss --resume fcn_resnet50/model_4.pth\
 --test-only

# Required: Download our FCN-ResNet101 weights to examples/semantic-segmentation/fcn_resnet101
torchrun --nproc_per_node=4 train.py\
 --workers 4 --dataset coco --data-path /path/to/coco/\
 --model fcn_resnet101 --aux-loss --resume fcn_resnet101/model_4.pth\
 --test-only

# Required: Download our DeepLabV3-ResNet50 weights to examples/semantic-segmentation/deeplabv3_resnet50
torchrun --nproc_per_node=4 train.py\
 --workers 4 --dataset coco --data-path /path/to/coco/\
 --model deeplabv3_resnet50 --aux-loss --resume deeplabv3_resnet50/model_4.pth\
 --test-only

# Required: Download our DeepLabV3-ResNet101 weights to examples/semantic-segmentation/deeplabv3_resnet101
torchrun --nproc_per_node=4 train.py\
 --workers 4 --dataset coco --data-path /path/to/coco/\
 --model deeplabv3_resnet101 --aux-loss --resume deeplabv3_resnet101/model_4.pth\
 --test-only
  ```

To evaluate our models on GLUE, SQuAD, and SUPERB, re-run the transfer-learning commands given earlier; they are used not only for training but also for evaluation.

For Neural Architecture Search, run the following commands to evaluate our SPG-trained ResNet models:

```bash
cd ./examples/neural-architecture-search

# Required: Download our ResNet-18 weights to examples/neural-architecture-search/resnet18
torchrun --nproc_per_node=4 train.py\
 --data-path /home/cs/Documents/datasets/imagenet\
 --model resnet18 --resume resnet18/model_3.pth --test-only

# Required: Download our ResNet-34 weights to examples/neural-architecture-search/resnet34
torchrun --nproc_per_node=4 train.py\
 --data-path /home/cs/Documents/datasets/imagenet\
 --model resnet34 --resume resnet34/model_8.pth --test-only

# Required: Download our ResNet-50 weights to examples/neural-architecture-search/resnet50
torchrun --nproc_per_node=4 train.py\
 --data-path /home/cs/Documents/datasets/imagenet\
 --model resnet50 --resume resnet50/model_9.pth --test-only
```