Paper2Rebuttal: A Multi-Agent Framework for Transparent Author Response Assistance
Abstract
RebuttalAgent is a multi-agent framework that reframes rebuttal generation as an evidence-centric planning task, improving coverage, faithfulness, and strategic coherence in academic peer review.
Writing effective rebuttals is a high-stakes task that demands more than linguistic fluency: it requires precise alignment between reviewer intent and manuscript details. Current solutions typically treat this as a direct-to-text generation problem and suffer from hallucination, overlooked critiques, and a lack of verifiable grounding. To address these limitations, we introduce RebuttalAgent, the first multi-agent framework that reframes rebuttal generation as an evidence-centric planning task. Our system decomposes complex feedback into atomic concerns and dynamically constructs hybrid contexts by synthesizing compressed summaries with high-fidelity text, while integrating an autonomous, on-demand external search module to resolve concerns that require outside literature. By generating an inspectable response plan before drafting, RebuttalAgent ensures that every argument is explicitly anchored in internal or external evidence. We validate our approach on the proposed RebuttalBench and demonstrate that our pipeline outperforms strong baselines in coverage, faithfulness, and strategic coherence, offering a transparent and controllable assistant for the peer review process. Code will be released.
Community
RebuttalAgent is an AI-powered multi-agent system that helps researchers craft high-quality rebuttals for academic paper reviews. The system analyzes reviewer comments, searches relevant literature, generates rebuttal strategies, and produces formal rebuttal letters, all through an interactive human-in-the-loop workflow.
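The workflow described above (decompose the review into atomic concerns, gather anchoring evidence, then build an inspectable plan before drafting) can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual implementation: all class and function names (`Concern`, `decompose`, `gather_evidence`, `plan`) are hypothetical, and the keyword-overlap evidence matcher stands in for the paper's LLM-driven decomposition and hybrid-context construction.

```python
from dataclasses import dataclass, field

@dataclass
class Concern:
    text: str                                   # one atomic reviewer concern
    evidence: list[str] = field(default_factory=list)

def decompose(review: str) -> list[Concern]:
    # Naive line-based stand-in for the system's LLM-driven
    # decomposition of a review into atomic concerns.
    return [Concern(line.strip()) for line in review.splitlines() if line.strip()]

def gather_evidence(concern: Concern, manuscript: str) -> None:
    # Attach manuscript sentences sharing content words with the concern.
    # The real system mixes compressed summaries, high-fidelity text,
    # and on-demand external search.
    for sent in manuscript.split("."):
        if any(w.lower() in sent.lower() for w in concern.text.split() if len(w) > 4):
            concern.evidence.append(sent.strip())

def plan(concerns: list[Concern]) -> list[dict]:
    # The inspectable response plan: every argument is anchored to
    # evidence before any drafting happens.
    return [{"concern": c.text, "evidence": c.evidence} for c in concerns]

review = "The ablation experiments are missing.\nRuntime cost is unclear."
manuscript = "We report ablation results in Table 3. Runtime is 2x baseline."

concerns = decompose(review)
for c in concerns:
    gather_evidence(c, manuscript)
response_plan = plan(concerns)
```

In this toy run, each concern ends up paired with the manuscript sentence that addresses it, so a human can audit the plan (and flag unanswered concerns) before a rebuttal letter is drafted.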
Paper: https://arxiv.org/abs/2601.14171
Project Page: https://mqleet.github.io/Paper2Rebuttal_ProjectPage/
Code: https://github.com/AutoLab-SAI-SJTU/Paper2Rebuttal
Huggingface Space: https://huggingface.co/spaces/Mqleet/RebuttalAgent
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- DeepSynth-Eval: Objectively Evaluating Information Consolidation in Deep Survey Writing (2026)
- DeepResearchEval: An Automated Framework for Deep Research Task Construction and Agentic Evaluation (2026)
- LimAgents: Multi-Agent LLMs for Generating Research Limitations (2025)
- Mind2Report: A Cognitive Deep Research Agent for Expert-Level Commercial Report Synthesis (2026)
- IDRBench: Interactive Deep Research Benchmark (2026)
- RhinoInsight: Improving Deep Research through Control Mechanisms for Model Behavior and Context (2025)
- Agent-as-a-Judge (2026)