---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
datasets: simone-papicchio/bird
library_name: transformers
tags:
- generated_from_trainer
- open-r1
- Text2SQL
- Reasoning
license: apache-2.0
---
# Model Information
This model is the reasoning model for the Text2SQL task introduced in [Think2SQL: Reinforce LLM Reasoning Capabilities for Text2SQL](https://arxiv.org/abs/2504.15077).
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the [simone-papicchio/bird](https://huggingface.co/datasets/simone-papicchio/bird) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
The best performance is obtained with the system and user prompts shown below.
The model expects three inputs: the question, the evidence, and the database schema.
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your Transformers installation via `pip install --upgrade transformers`.
```python
import torch
import transformers

model_id = "simone-papicchio/Think2SQL-7B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

system_message = (
    "You are a helpful AI Assistant that provides well-reasoned and detailed responses. "
    "You first think about the reasoning process as an internal monologue and then provide the user with the answer. "
    "Respond in the following format: <think>\n...\n</think>\n<answer>\n...\n</answer>"
).strip()

user_message = (
    "Answer the following question with the SQL code. Use the piece of evidence and base your answer on the database schema. "
    "Given the question, the evidence and the database schema, return in the <answer> tags only the SQL script that addresses the question.\n"
    "Question:\n{question}\n\n"
    "Evidence:\n{evidence}\n\n"
    "Database Schema:\n{schema}\n\n"
    "Return only the SQL script enclosed in <answer> tags."
).strip()

# Fill the prompt template with your own data before sending it to the model.
question = "..."  # natural-language question
evidence = "..."  # external knowledge supporting the question
schema = "..."    # schema of the target database (e.g. CREATE TABLE statements)

messages = [
    {"role": "system", "content": system_message},
    {
        "role": "user",
        "content": user_message.format(question=question, evidence=evidence, schema=schema),
    },
]

outputs = pipeline(
    messages,
    max_new_tokens=30_000,
    temperature=0.7,
    top_p=0.95,
)
print(outputs[0]["generated_text"][-1])
```
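The model emits its chain of thought inside `<think>` tags and the final SQL inside `<answer>` tags, so the script can be recovered with a regular expression. A minimal sketch, assuming the `outputs` object from the snippet above:

```python
import re

# The assistant reply is the last message appended by the pipeline.
assistant_reply = outputs[0]["generated_text"][-1]["content"]

# Keep only what is inside the <answer> tags; fall back to the full reply.
match = re.search(r"<answer>\s*(.*?)\s*</answer>", assistant_reply, re.DOTALL)
sql = match.group(1) if match else assistant_reply
print(sql)
```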
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/spapicchio-politecnico-di-torino/deep-thinking/runs/d93m41pq)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
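Below is a minimal, hypothetical sketch of GRPO training with TRL's `GRPOTrainer`. The reward function here is a toy format check standing in for the Text2SQL rewards described in the paper, and the dataset split/column layout is an assumption.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Assumption: the dataset has a "train" split and exposes the
# "prompt" column that GRPOTrainer expects.
dataset = load_dataset("simone-papicchio/bird", split="train")

def format_reward(completions, **kwargs):
    # Toy reward: 1.0 when the completion follows the <think>/<answer>
    # format. The actual rewards used for Think2SQL are described in
    # the paper; completions are plain strings for standard prompts.
    return [
        1.0 if "<think>" in c and "<answer>" in c else 0.0
        for c in completions
    ]

training_args = GRPOConfig(output_dir="Think2SQL-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Coder-7B-Instruct",
    reward_funcs=format_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```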
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
```bibtex
@misc{papicchio2025think2sqlreinforcellmreasoning,
  title={Think2SQL: Reinforce LLM Reasoning Capabilities for Text2SQL},
  author={Simone Papicchio and Simone Rossi and Luca Cagliero and Paolo Papotti},
  year={2025},
  eprint={2504.15077},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2504.15077},
}
```
```bibtex
@inproceedings{papicchio2023qatch,
  title={QATCH: benchmarking SQL-centric tasks with table representation learning models on your data},
  author={Papicchio, Simone and Papotti, Paolo and Cagliero, Luca},
  booktitle={Proceedings of the 37th International Conference on Neural Information Processing Systems},
  pages={30898--30917},
  year={2023}
}
```