---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
language:
- en
license: apache-2.0
tags:
- transformers
- multimodal
pipeline_tag: visual-question-answering
---

# VL-Rethinker-7B

**VL-Rethinker-7B** achieves state-of-the-art results on a wide range of multimodal reasoning benchmarks.

It is trained with the **GRPO-SSR** and **Forced Rethinking** techniques.

For details of our approach and a performance comparison, please see our [paper](https://github.com/TIGER-AI-Lab/VL-Rethinker/blob/main/paper.pdf).

For details of training and evaluation, please see our [code repo](https://github.com/TIGER-AI-Lab/VL-Rethinker/).

Explore further via the following links:

| [**🚀Project Page**](https://tiger-ai-lab.github.io/VL-Rethinker/) | [**📖Paper**](https://github.com/TIGER-AI-Lab/VL-Rethinker/blob/main/paper.pdf) | [**🔗Github**](https://github.com/TIGER-AI-Lab/VL-Rethinker/) | [**🤗Data** (Coming Soon)]() |
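
## Quick Start

Below is a minimal inference sketch. It assumes the checkpoint keeps the Qwen2.5-VL architecture of its base model and therefore loads with the standard `Qwen2_5_VLForConditionalGeneration` and `AutoProcessor` classes from 🤗 Transformers; the repo id `TIGER-Lab/VL-Rethinker-7B`, the image URL, and the prompt are illustrative placeholders.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

MODEL_ID = "TIGER-Lab/VL-Rethinker-7B"  # assumed repo id; adjust if it differs

# Load the model and processor (Qwen2.5-VL architecture, per `base_model` above).
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# Placeholder image and question.
image = Image.open(requests.get("https://example.com/figure.png", stream=True).raw)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What does this figure show? Explain your reasoning step by step."},
        ],
    }
]

# Build the chat prompt, bundle it with the image, and generate an answer.
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)

# Drop the prompt tokens before decoding so only the model's answer is printed.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

The generation budget is set generously on the assumption that a model trained to self-reflect tends to produce longer reasoning traces before its final answer.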

## Citation

If you find this model useful, please cite our work:
```bibtex
@article{VLRethinker,
  title={VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning},
  author={Wang, Haozhe and Qu, Chao and Huang, Zuming and Chu, Wei and Lin, Fangzhen and Chen, Wenhu},
  journal={ArXiv},
  year={2025},
}
```