Update README.md
README.md
CHANGED
@@ -15,7 +15,7 @@ library_name: transformers

 **VL-Rethinker-7B** achieves SoTA results on various multimodal reasoning benchmarks.

-It is trained using the **GRPO-SSR and Forced Rethinking** techniques.
+It is trained using the **GRPO-SSR and Forced Rethinking** techniques, using meticulously curated training queries from [ViRL39K](https://huggingface.co/datasets/TIGER-Lab/ViRL39K).

 For details of our approach and performance comparison, please see our [paper](https://github.com/TIGER-AI-Lab/VL-Rethinker/blob/main/paper.pdf).

@@ -23,15 +23,16 @@ For details of training and evaluation, please see our [code repo](https://githu

 Explore further via the following links:

-| [**🚀Project Page**](https://tiger-ai-lab.github.io/VL-Rethinker/) | [**📖Paper**](https://arxiv.org/abs/2504.08837) | [**🔗Github**](https://github.com/TIGER-AI-Lab/VL-Rethinker/) | [**🤗Data** (
+| [**🚀Project Page**](https://tiger-ai-lab.github.io/VL-Rethinker/) | [**📖Paper**](https://arxiv.org/abs/2504.08837) | [**🔗Github**](https://github.com/TIGER-AI-Lab/VL-Rethinker/) | [**🤗Data** (ViRL39K)](https://huggingface.co/datasets/TIGER-Lab/ViRL39K) |

 ## Citation

 If you feel this model useful, please give us a free cite:
-```
+```bibtex
 @article{vl-rethinker,
 title={VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning},
 author = {Wang, Haozhe and Qu, Chao and Huang, Zuming and Chu, Wei and Lin,Fangzhen and Chen, Wenhu},
 journal={arXiv preprint arXiv:2504.08837},
 year={2025}
-}
+}
+```
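The updated card points readers to the ViRL39K training queries. As a quick way to inspect them, here is a minimal sketch using the `datasets` library; the only identifier it relies on is the TIGER-Lab/ViRL39K repository id from the dataset link above, and split and column names are discovered at runtime rather than assumed.

```python
from datasets import load_dataset

# Download the ViRL39K query set referenced in the card.
# The repository id comes from the dataset link above; nothing else is assumed.
virl39k = load_dataset("TIGER-Lab/ViRL39K")

print(virl39k)                      # lists the available splits and their sizes
first_split = next(iter(virl39k))   # pick whichever split the dataset ships with
print(virl39k[first_split][0])      # show one curated training query record
```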