# AskVeracity: Fact Checking System
A streamlined web application that gathers and analyzes evidence to assess the truthfulness of factual claims.
## Overview
This application uses an agentic AI approach to verify factual claims through a combination of NLP techniques and large language models.
The AI agent:
- Uses a ReAct (Reasoning + Acting) methodology to analyze claims (sketched after this list)
- Dynamically gathers evidence from multiple sources (Wikipedia, News APIs, RSS feeds, fact-checking sites)
- Intelligently decides which tools to use and in what order based on the claim's category
- Classifies the truthfulness of claims using the collected evidence
- Provides transparency into its reasoning process
- Generates clear explanations for its verdict with confidence scores
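The sketch below illustrates the ReAct loop in stripped-down form. It is not the actual `agent.py` implementation (which uses LangGraph); the prompts, tool names, and dispatch logic here are hypothetical.

```python
# Illustrative ReAct loop; prompts, tool names, and dispatch logic
# are hypothetical, not the actual agent.py implementation.
def verify_claim(claim: str, llm, tools: dict, max_steps: int = 5) -> str:
    evidence = []
    for _ in range(max_steps):
        # Reason: ask the LLM which evidence source to query next.
        thought = llm(
            f"Claim: {claim}\nEvidence so far: {evidence}\n"
            f"Reply with one of {list(tools)} to gather more evidence, or FINISH."
        )
        if "FINISH" in thought:
            break
        # Act: run the chosen tool and collect its evidence.
        tool_name = thought.strip().lower()
        if tool_name in tools:
            evidence.extend(tools[tool_name](claim))
    # Classify: produce a verdict from the gathered evidence.
    return llm(f"Given this evidence: {evidence}\nClassify the claim: {claim}")
```

Here `llm` is any callable that maps a prompt string to a completion, and `tools` maps source names (e.g. `"wikipedia"`, `"news"`) to retrieval functions.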
## Features
- Claim Extraction: Identifies and focuses on the primary factual claim
- Category Detection: Determines the claim's category to optimize evidence retrieval
- Multi-source Evidence: Gathers evidence from Wikipedia, news articles, academic sources, and fact-checking sites
- Semantic Analysis: Analyzes evidence relevance using advanced NLP techniques (see the sketch after this list)
- Transparent Classification: Provides clear verdicts with confidence scores
- Detailed Explanations: Generates human-readable explanations for verdicts
- Interactive UI: Easy-to-use Streamlit interface with evidence exploration
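As an illustration of relevance analysis, evidence snippets can be ranked against the claim using sentence embeddings. This is only a sketch of the general technique; the project's actual logic lives in `modules/semantic_analysis.py` and may differ:

```python
# Hypothetical relevance scoring with sentence-transformers; the real
# logic lives in modules/semantic_analysis.py and may differ.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def rank_evidence(claim: str, snippets: list[str]) -> list[tuple[str, float]]:
    claim_vec = model.encode(claim, convert_to_tensor=True)
    snippet_vecs = model.encode(snippets, convert_to_tensor=True)
    # Cosine similarity between the claim and each snippet.
    scores = util.cos_sim(claim_vec, snippet_vecs)[0]
    # Highest-scoring snippets are treated as the most relevant evidence.
    return sorted(zip(snippets, scores.tolist()), key=lambda p: -p[1])
```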
## Project Structure

```
askveracity/
│
├── app.py                       # Main Streamlit application
├── agent.py                     # LangGraph agent implementation
├── config.py                    # Configuration and API keys
├── requirements.txt             # Dependencies for the application
├── .streamlit/                  # Streamlit configuration
│   ├── config.toml              # UI theme configuration
│   └── secrets.toml.example     # Example secrets file (do not commit actual secrets)
├── utils/
│   ├── __init__.py
│   ├── api_utils.py             # API rate limiting and error handling
│   ├── performance.py           # Performance tracking utilities
│   └── models.py                # Model initialization functions
├── modules/
│   ├── __init__.py
│   ├── claim_extraction.py      # Claim extraction functionality
│   ├── evidence_retrieval.py    # Evidence gathering from various sources
│   ├── classification.py        # Truth classification logic
│   ├── explanation.py           # Explanation generation
│   ├── rss_feed.py              # RSS feed evidence retrieval
│   ├── semantic_analysis.py     # Relevance analysis for evidence
│   └── category_detection.py    # Claim category detection
├── data/
│   └── source_credibility.json  # Source credibility data
└── tests/
    ├── __init__.py
    └── test_claim_extraction.py # Unit tests for claim extraction
```
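The unit tests under `tests/` can be run with pytest (assuming it is installed in your environment):

```bash
pip install pytest
pytest tests/
```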
## Setup and Installation

### Local Development
1. Clone this repository:

```bash
git clone https://github.com/yourusername/askveracity.git
cd askveracity
```
2. Install the required dependencies:

```bash
pip install -r requirements.txt
```
3. Set up your API keys. You have two options:

**Option 1: Using Streamlit secrets (recommended for local development)**

- Copy the example secrets file to create your own:

  ```bash
  cp .streamlit/secrets.toml.example .streamlit/secrets.toml
  ```

- Edit `.streamlit/secrets.toml` and add your API keys:

  ```toml
  OPENAI_API_KEY = "your_openai_api_key"
  NEWS_API_KEY = "your_news_api_key"
  FACTCHECK_API_KEY = "your_factcheck_api_key"
  ```
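In the app's Python code, Streamlit exposes these values through `st.secrets`, for example:

```python
import streamlit as st

# Raises an error if the key is missing from .streamlit/secrets.toml
openai_key = st.secrets["OPENAI_API_KEY"]
```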
**Option 2: Using environment variables**

- Create a `.env` file in the root directory with the following content:

  ```
  OPENAI_API_KEY=your_openai_api_key
  NEWS_API_KEY=your_news_api_key
  FACTCHECK_API_KEY=your_factcheck_api_key
  ```
- When using environment variables, load them at the start of your Python script:

  ```python
  from dotenv import load_dotenv
  load_dotenv()
  ```

- Or load them in your terminal before running the app:

  ```bash
  # Unix/Linux/macOS
  source .env

  # Windows: install python-dotenv[cli] and run
  dotenv run streamlit run app.py
  ```
## Running the Application

Launch the Streamlit app by running:

```bash
streamlit run app.py
```
## Deploying to Hugging Face Spaces

1. Fork this repository to your GitHub account.

2. Create a new Space on Hugging Face:
   - Go to https://huggingface.co/spaces
   - Click "Create new Space"
   - Select "Streamlit" as the SDK
   - Choose "From GitHub" as the source
   - Connect to your GitHub repository

3. Add the required API keys as secrets:
   - Go to the "Settings" tab of your Space
   - Navigate to the "Repository secrets" section
   - Add the following secrets: `OPENAI_API_KEY`, `NEWS_API_KEY`, `FACTCHECK_API_KEY`

4. Your Space will automatically deploy with the changes.
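Spaces exposes repository secrets to the running app as environment variables. A defensive lookup that works both locally (Streamlit secrets) and on Spaces might look like the sketch below; the helper name `get_key` is illustrative, not part of the app's actual code:

```python
import os
import streamlit as st

def get_key(name: str):
    """Return an API key from Streamlit secrets or the environment."""
    try:
        # Works locally when .streamlit/secrets.toml is present.
        return st.secrets[name]
    except Exception:
        # On Hugging Face Spaces, repository secrets arrive as
        # environment variables.
        return os.environ.get(name)
```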
## Rate Limiting and API Considerations
The application implements intelligent rate limiting for API calls to:
- Wikipedia
- WikiData
- News API
- Google FactCheck Tools
- RSS feeds
The system includes exponential backoff for retries and optimized API usage to work within free API tiers. Rate limits can be configured in the `config.py` file.
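A minimal sketch of the exponential-backoff pattern described above; the decorator name and parameters are illustrative, and the real implementation lives in `utils/api_utils.py`:

```python
import random
import time

def with_backoff(max_retries: int = 3, base_delay: float = 1.0):
    """Retry a flaky API call with exponential backoff and jitter."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_retries - 1:
                        raise
                    # Wait 1s, 2s, 4s, ... plus random jitter.
                    time.sleep(base_delay * 2 ** attempt + random.random())
        return wrapper
    return decorator
```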
## Best Practices for Claim Verification
For optimal results with AskVeracity:
- Keep claims short and precise
- Include key details in your claim
- Phrase claims as direct statements rather than questions
- Be specific about who said what, when relevant
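For example, "NASA's Artemis I mission launched in November 2022" is easier to verify than "Did NASA launch a rocket recently?"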
## License
This project is licensed under the MIT License, allowing free use, modification, and distribution with proper attribution.