Update README.md

README.md CHANGED

@@ -15,7 +15,7 @@ This is the official repo for paper [Supervised Fine-tuning *in turn* Improves V
* [2024/01/19] We open-source [ViSFT](), including training scripts and weights. Evaluation code will be released soon.

## Introduction

-Image-text training like CLIP has dominated the pretraining of vision foundation models in recent years. Subsequent efforts have been made to introduce region-level visual learning into CLIP’s pretraining but face scalability challenges due to the lack of large-scale region-level datasets. Drawing inspiration from supervised fine-tuning (SFT) in natural language processing such as instruction tuning, we explore the potential of fine-grained SFT in enhancing the generation of vision foundation models after their pretraining. Thus a two-stage method **ViSFT** (**Vi**sion **SFT**) is proposed to unleash the fine-grained knowledge of vision
+Image-text training like CLIP has dominated the pretraining of vision foundation models in recent years. Subsequent efforts have been made to introduce region-level visual learning into CLIP’s pretraining but face scalability challenges due to the lack of large-scale region-level datasets. Drawing inspiration from supervised fine-tuning (SFT) in natural language processing such as instruction tuning, we explore the potential of fine-grained SFT in enhancing the generation of vision foundation models after their pretraining. Thus a two-stage method **ViSFT** (**Vi**sion **SFT**) is proposed to unleash the fine-grained knowledge of vision foundation models. In ViSFT, the vision foundation model is enhanced by performing visual joint learning on some in-domain tasks and then tested on out-of-domain benchmarks. With updating using ViSFT on 8 V100 GPUs in less than 2 days, a vision transformer with over 4.4B parameters shows improvements across various out-of-domain benchmarks including vision and vision-linguistic scenarios.

## Installation

@@ -210,6 +210,16 @@ Or use the LoRA weights we provide:

The code of ViSFT is based on the official implementations of [mmf](https://github.com/facebookresearch/mmf), [EVA](https://github.com/baaivision/EVA/tree/master) and [LAVIS](https://github.com/salesforce/LAVIS/tree/main).

## Citation

-
+If you found our work valuable, please cite:
+```
+@misc{jiang2024supervised,
+      title={Supervised Fine-tuning in turn Improves Visual Foundation Models},
+      author={Xiaohu Jiang and Yixiao Ge and Yuying Ge and Chun Yuan and Ying Shan},
+      year={2024},
+      eprint={2401.10222},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV}
+}
+```