From Real to Synthetic: Synthesizing Millions of Diversified and Complicated User Instructions with Attributed Grounding
Abstract
The paper presents a method for generating diverse and complex instruction data for large language models using attributed grounding, achieving top performance on benchmarks with a large synthesized dataset.
The pursuit of diverse, complex, and large-scale instruction data is crucial for automatically aligning large language models (LLMs). While there are methods capable of generating synthetic instructions at scale, they either suffer from limited grounding sources, leading to a narrow distribution, or rely on trivial extensions that fail to produce meaningful gains in complexity. In contrast, instructions that enable efficient alignment are typically crafted with cognitive insight and grounded in real-world use cases. In this paper, we synthesize such instructions using attributed grounding, which involves 1) a top-down attribution process that grounds a selective set of real instructions to situated users, and 2) a bottom-up synthesis process that leverages web documents to first generate a situation, then a meaningful instruction. This framework allows us to harvest diverse and complex instructions at scale, utilizing the vast range of web documents. Specifically, we construct a dataset of 1 million instructions, called SynthQuestions, and demonstrate that models trained on it achieve leading performance on several common benchmarks, with improvements that continue to scale with more web corpora. Data, models, and code will be available at https://github.com/Ignoramus0817/SynthQuestions.
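The abstract describes a two-stage pipeline: top-down attribution of real instructions to situated users, and bottom-up synthesis that turns a web document into a situation and then an instruction. The sketch below illustrates how such a pipeline might be wired up with an LLM API; the prompts, the `chat` helper, and the model name are assumptions for illustration only, not the authors' actual implementation (see the linked repository for that).

```python
# Minimal sketch of an attributed-grounding pipeline, assuming an
# OpenAI-compatible chat API. Prompts and model name are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment


def chat(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Single-turn helper around a chat-completion endpoint."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return resp.choices[0].message.content


def attribute(real_instruction: str) -> str:
    """Top-down attribution: infer the situated user behind a real instruction."""
    return chat(
        "Given this real user instruction, describe the user who likely wrote it "
        "and the real-world situation they are in:\n\n" + real_instruction
    )


def synthesize(web_document: str) -> str:
    """Bottom-up synthesis: web document -> situation -> grounded instruction."""
    situation = chat(
        "Read the following web document and describe a plausible real-world "
        "situation in which someone would need it:\n\n" + web_document
    )
    return chat(
        "Given this situation, write the complex, natural instruction that the "
        "user would pose to an AI assistant:\n\n" + situation
    )


if __name__ == "__main__":
    doc = "A step-by-step guide to filing a small-business tax extension ..."
    print(synthesize(doc))
```

Run over a large web corpus, the `synthesize` step yields instructions grounded in realistic situations rather than trivial rewrites, which is the property the paper attributes to its scaling gains.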
Community
TLDR: We propose a novel instruction-data synthesis framework that generates high-quality grounded instructions from web corpora. Using this framework, we synthesize 1M instruction examples, and models trained on them show strong performance.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Towards Efficient and Effective Alignment of Large Language Models (2025)
- Instructing Large Language Models for Low-Resource Languages: A Systematic Study for Basque (2025)
- Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models (2025)
- Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models (2025)
- Scaling Computer-Use Grounding via User Interface Decomposition and Synthesis (2025)
- LIFEBench: Evaluating Length Instruction Following in Large Language Models (2025)
- From Templates to Natural Language: Generalization Challenges in Instruction-Tuned LLMs for Spatial Reasoning (2025)