Towards Comprehensive Stage-wise Benchmarking of Large Language Models in Fact-Checking
Abstract
FactArena is an automated evaluation framework that comprehensively assesses large language models across all stages of the fact-checking pipeline, revealing gaps between claim verification accuracy and overall fact-checking capability.
Large Language Models (LLMs) are increasingly deployed in real-world fact-checking systems, yet existing evaluations focus predominantly on claim verification and overlook the broader fact-checking workflow, including claim extraction and evidence retrieval. This narrow focus prevents current benchmarks from revealing systematic reasoning failures, factual blind spots, and robustness limitations of modern LLMs. To bridge this gap, we present FactArena, a fully automated arena-style evaluation framework that conducts comprehensive, stage-wise benchmarking of LLMs across the complete fact-checking pipeline. FactArena integrates three key components: (i) an LLM-driven fact-checking process that standardizes claim decomposition, evidence retrieval via tool-augmented interactions, and justification-based verdict prediction; (ii) an arena-style judgment mechanism guided by consolidated reference guidelines to ensure unbiased and consistent pairwise comparisons across heterogeneous judge agents; and (iii) an arena-driven claim-evolution module that adaptively generates more challenging and semantically controlled claims to probe LLMs' factual robustness beyond fixed seed data. Across 16 state-of-the-art LLMs spanning seven model families, FactArena produces stable and interpretable rankings. Our analyses further reveal significant discrepancies between static claim-verification accuracy and end-to-end fact-checking competence, highlighting the necessity of holistic evaluation. The proposed framework offers a scalable and trustworthy paradigm for diagnosing LLMs' factual reasoning, guiding future model development, and advancing the reliable deployment of LLMs in safety-critical fact-checking applications.
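To make the stage-wise pipeline concrete, below is a minimal Python sketch of the three stages the abstract names (claim decomposition, tool-augmented evidence retrieval, justification-based verdict prediction). It is not FactArena's implementation: the prompts, function names, and the `ask`/`search` callables are illustrative assumptions standing in for any LLM backend and search tool.

```python
# Minimal sketch of a stage-wise fact-checking pipeline (not the authors' code).
# `ask` is any prompt -> response LLM callable; `search` is any query -> snippets tool.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Ask = Callable[[str], str]
Search = Callable[[str], List[str]]

@dataclass
class FactCheckResult:
    claim: str
    sub_claims: List[str] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)
    verdict: str = ""
    justification: str = ""

def decompose(ask: Ask, claim: str) -> List[str]:
    """Stage 1: split a complex claim into atomic, checkable sub-claims."""
    out = ask(f"Decompose into atomic sub-claims, one per line:\n{claim}")
    return [line.strip("- ").strip() for line in out.splitlines() if line.strip()]

def retrieve(search: Search, sub_claims: List[str]) -> List[str]:
    """Stage 2: tool-augmented evidence retrieval, a few snippets per sub-claim."""
    evidence: List[str] = []
    for sc in sub_claims:
        evidence.extend(search(sc)[:3])
    return evidence

def predict_verdict(ask: Ask, claim: str, evidence: List[str]) -> Tuple[str, str]:
    """Stage 3: justification-based verdict prediction over the gathered evidence."""
    prompt = (
        f"Claim: {claim}\nEvidence:\n" + "\n".join(evidence) +
        "\nAnswer SUPPORTED / REFUTED / NOT ENOUGH INFO on the first line, "
        "then a short justification."
    )
    out = ask(prompt)
    label, _, justification = out.partition("\n")
    return label.strip(), justification.strip()

def fact_check(ask: Ask, search: Search, claim: str) -> FactCheckResult:
    """Run the full pipeline so each stage's output can be scored separately."""
    result = FactCheckResult(claim=claim)
    result.sub_claims = decompose(ask, claim)
    result.evidence = retrieve(search, result.sub_claims)
    result.verdict, result.justification = predict_verdict(ask, claim, result.evidence)
    return result
```

With stub `ask` and `search` functions plugged in, `fact_check` returns a `FactCheckResult` whose intermediate fields (sub-claims, evidence, justification) are exactly the per-stage artifacts a stage-wise benchmark can compare across models, rather than only the final verdict.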
Community
Current automatic LLM fact-checking evaluations are too narrow: they only check whether a model can verify a claim, ignoring the hard parts such as finding evidence and decomposing check-worthy claims. FactArena is built to evaluate the full fact-checking pipeline. Testing 16 state-of-the-art models shows that end-to-end pipeline rankings can disagree with plain claim-verification accuracy. The system also exposed the fragility of LLM fact-checking: subtle claim modifications ("claim flipping") tanked average accuracy to 68%, underscoring why every stage of the process needs trustworthy auditing. A rough sketch of the arena-style ranking idea follows below.
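For intuition on how pairwise arena judgments become a "whole process" ranking, here is a small Elo-style sketch. The model names, the judge-outcome scores, and the use of Elo specifically are assumptions for illustration, not the paper's exact protocol.

```python
# Illustrative Elo-style aggregation of pairwise fact-checking judgments.
# Each comparison: a judge decides which model's end-to-end fact-check of the
# same claim was better (1.0 = A wins, 0.0 = B wins, 0.5 = tie).
from collections import defaultdict

K = 32  # standard Elo update step

def expected(r_a: float, r_b: float) -> float:
    """Expected score of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings, model_a: str, model_b: str, score_a: float) -> None:
    """Shift both ratings toward the observed pairwise outcome."""
    e_a = expected(ratings[model_a], ratings[model_b])
    ratings[model_a] += K * (score_a - e_a)
    ratings[model_b] += K * ((1 - score_a) - (1 - e_a))

ratings = defaultdict(lambda: 1000.0)
# Hypothetical judge outcomes over shared claims (model names are placeholders).
for a, b, s in [("model-a", "model-b", 1.0), ("model-b", "model-c", 0.5)]:
    update(ratings, a, b, s)
print(dict(ratings))  # higher rating = judged better across the full pipeline
```

Because the judge scores the whole fact-check (decomposition, evidence, justification, verdict) rather than a single label, a ranking built this way can legitimately diverge from static claim-verification accuracy.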
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- JudgeBoard: Benchmarking and Enhancing Small Language Models for Reasoning Evaluation (2025)
- Fact-Checking with Large Language Models via Probabilistic Certainty and Consistency (2026)
- Multi-Crit: Benchmarking Multimodal Judges on Pluralistic Criteria-Following (2025)
- ART: Adaptive Reasoning Trees for Explainable Claim Verification (2026)
- Multimodal Fact-Checking: An Agent-based Approach (2025)
- Large Language Models Require Curated Context for Reliable Political Fact-Checking -- Even with Reasoning and Web Search (2025)
- InFi-Check: Interpretable and Fine-Grained Fact-Checking of LLMs (2026)