---
language: en
license: apache-2.0
datasets:
- nyu-mll/glue
---
# LoNAS Model Card: lonas-bert-base-glue
Super-networks fine-tuned from BERT-base on the [GLUE benchmark](https://gluebenchmark.com/) using LoNAS.
## Model Details
### Information
- **Model name:** lonas-bert-base-glue
- **Base model:** [bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
- **Subnetwork version:** Super-network
- **NNCF Configurations:** [nncf_config/glue](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS/nncf_config/glue)
### Adapter Configuration
- **LoRA rank:** 8
- **LoRA alpha:** 16
- **LoRA target modules:** query, value
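The configuration above corresponds to the standard LoRA update `W' = W + (alpha/r) * B @ A` applied to the query and value projections. A minimal NumPy sketch (the matrices below are random placeholders sized for bert-base's 768-dim hidden state, not this model's trained weights):

```python
import numpy as np

hidden = 768        # bert-base hidden size
r, alpha = 8, 16    # rank and alpha from the configuration above
scaling = alpha / r # LoRA scales the low-rank update by alpha / r

rng = np.random.default_rng(0)
W = rng.standard_normal((hidden, hidden))    # frozen base weight (e.g. the query projection)
A = rng.standard_normal((r, hidden)) * 0.01  # trainable down-projection
B = np.zeros((hidden, r))                    # trainable up-projection, zero-initialized

# Merged weight: with B zero-initialized, the adapter starts as a no-op.
W_adapted = W + scaling * (B @ A)
assert np.allclose(W_adapted, W)

# Only A and B are trained: 2 * r * hidden parameters per adapted matrix,
# versus hidden * hidden for the frozen base weight.
trainable = A.size + B.size
```

At rank 8 this is 12,288 trainable parameters per adapted matrix against 589,824 frozen ones, which is where the small trainable-parameter ratio in the results table comes from.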
### Training and Evaluation
Trained and evaluated on the [GLUE benchmark](https://gluebenchmark.com/).
### Training Hyperparameters
| Task | RTE | MRPC | STS-B | CoLA | SST-2 | QNLI | QQP | MNLI |
|------------|------|------|-------|------|-------|------|------|------|
| Epoch | 80 | 35 | 60 | 80 | 60 | 80 | 60 | 40 |
| Batch size | 32 | 32 | 64 | 64 | 64 | 64 | 64 | 64 |
| Learning rate | 3e-4 | 5e-4 | 5e-4 | 3e-4 | 3e-4 | 4e-4 | 3e-4 | 4e-4 |
| Max length | 128 | 128 | 128 | 128 | 128 | 256 | 128 | 128 |
## How to use
Refer to [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS/running_commands](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS/running_commands):
```bash
CUDA_VISIBLE_DEVICES=${DEVICES} python run_glue.py \
--task_name ${TASK} \
--model_name_or_path bert-base-uncased \
--do_eval \
--do_search \
--per_device_eval_batch_size 64 \
--max_seq_length ${MAX_LENGTH} \
--lora \
--lora_weights lonas-bert-base-glue/lonas-bert-base-${TASK} \
--nncf_config nncf_config/glue/nncf_lonas_bert_base_${TASK}.json \
--output_dir lonas-bert-base-glue/lonas-bert-base-${TASK}/results
```
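The `${...}` placeholders must be set before running the command. An illustrative setup for RTE (the device index is arbitrary; the max length comes from the hyperparameter table above, and the lowercase task names follow the usual `run_glue.py` convention, so check the repo's `running_commands` for the exact values):

```shell
DEVICES=0        # GPU index to use (illustrative)
TASK=rte         # e.g. rte, mrpc, stsb, cola, sst2, qnli, qqp, mnli
MAX_LENGTH=128   # per-task max sequence length (256 for QNLI)
```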
## Evaluation Results
Results of the optimal sub-network discovered from the super-network:
| Method | Trainable Parameter Ratio | GFLOPs | RTE | MRPC | STS-B | CoLA | SST-2 | QNLI | QQP | MNLI | AVG |
|-------------|---------------------------|------------|-------|-------|-------|-------|-------|-------|-------|-------|-----------|
| LoRA | 0.27% | 11.2 | 65.85 | 84.46 | 88.73 | 57.58 | 92.06 | 90.62 | 89.41 | 83.00 | 81.46 |
| **LoNAS** | 0.27% | **8.0** | 70.76 | 88.97 | 88.28 | 61.12 | 93.23 | 91.21 | 88.55 | 82.00 | **83.02** |
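As a quick arithmetic check, the AVG column is the unweighted mean of the eight per-task scores, rounded to two decimals (a small Python sketch using the numbers from the table):

```python
scores = {
    "LoRA":  [65.85, 84.46, 88.73, 57.58, 92.06, 90.62, 89.41, 83.00],
    "LoNAS": [70.76, 88.97, 88.28, 61.12, 93.23, 91.21, 88.55, 82.00],
}
reported = {"LoRA": 81.46, "LoNAS": 83.02}

for name, xs in scores.items():
    avg = sum(xs) / len(xs)
    # The reported AVG matches the mean to within two-decimal rounding.
    assert abs(avg - reported[name]) < 0.01, (name, avg)
```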
## Model Sources
**Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS)
**Paper:**
- [LoNAS: Elastic Low-Rank Adapters for Efficient Large Language Models](https://aclanthology.org/2024.lrec-main.940)
- [Low-Rank Adapters Meet Neural Architecture Search for LLM Compression](https://arxiv.org/abs/2501.16372)
## Citation
```bibtex
@inproceedings{munoz-etal-2024-lonas,
title = "{L}o{NAS}: Elastic Low-Rank Adapters for Efficient Large Language Models",
author = "Munoz, Juan Pablo and
Yuan, Jinjie and
Zheng, Yi and
Jain, Nilesh",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.940",
pages = "10760--10776",
}
```
## License
Apache-2.0