---
title: Project Chimera
emoji: π
colorFrom: purple
colorTo: pink
sdk: gradio
sdk_version: 5.25.0
app_file: app.py
pinned: false
---
# Project Chimera: Real-Time Global Analysis Engine
Project Chimera aims to be a powerful tool for understanding complex global and local issues in near real-time. It leverages the advanced reasoning capabilities of Google's Gemini API, combined with real-time data fetched via SERP (Search Engine Results Page) APIs and potentially other data sources, to synthesize information, identify patterns, predict outcomes, and even brainstorm potential solutions or opportunities far beyond human capacity alone.
This implementation provides a web interface using Gradio, hosted on Hugging Face Spaces, allowing users to pose complex queries and receive AI-driven analysis based on current information.
## Key Features
- AI-Powered Analysis: Utilizes the Gemini API for deep understanding, reasoning, and synthesis of information.
- Real-Time Data Integration: Fetches current search results via a SERP API to ground the analysis in up-to-date information.
- Modular Design: Code is structured for clarity and potential expansion (API clients, orchestration logic, UI).
- Asynchronous Operations: Uses `asyncio` and `httpx` for efficient handling of API calls.
- Web Interface: Simple and interactive UI provided by Gradio.
- Configurable: API keys are managed via environment variables/Hugging Face secrets.
- Extensible: Designed to potentially incorporate more diverse APIs (weather, financial, scientific, etc.) in the future.
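The asynchronous pattern behind the features above can be illustrated with plain `asyncio` — a minimal sketch where `asyncio.sleep` stands in for real `httpx` requests; the function names here are illustrative, not the project's actual client code:

```python
import asyncio
import time

async def fetch(source: str, delay: float) -> str:
    # Stand-in for an httpx request; sleeps instead of doing real I/O.
    await asyncio.sleep(delay)
    return f"{source} results"

async def gather_sources() -> list[str]:
    # Both "API calls" run concurrently, so total time is roughly
    # max(delay), not the sum of the delays.
    return await asyncio.gather(
        fetch("serp", 0.05),
        fetch("gemini", 0.1),
    )

if __name__ == "__main__":
    start = time.perf_counter()
    results = asyncio.run(gather_sources())
    print(results, f"{time.perf_counter() - start:.2f}s")
```

With sequential `await` calls the delays would add up; `asyncio.gather` is what lets several slow API requests overlap.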
## Technology Stack
- Language: Python 3.9+
- AI Model: Google Gemini API (via the `google-generativeai` library)
- Real-time Data: SERP API (e.g., SerpApi via `httpx` or their library)
- Web UI: Gradio
- HTTP Client: `httpx` (for asynchronous requests)
- Configuration: `python-dotenv` (for local development), Hugging Face Secrets (for deployment)
- Deployment: Hugging Face Spaces
## Project Structure
```
project-chimera/
├── .hf_secrets        # (Optional for local testing)
├── .env               # Local API keys (DO NOT COMMIT)
├── .gitignore
├── app.py             # Main Gradio application
├── requirements.txt   # Dependencies
├── src/               # Core application logic
│   └── chimera/
│       ├── __init__.py
│       ├── config.py      # Configuration loading
│       ├── api_clients/   # API interaction modules
│       ├── core/          # Orchestration logic
│       └── utils/         # Helper functions (logging, data processing)
└── README.md          # This file
```
## Setup and Installation
### 1. Local Development
- Clone the repository:

  ```bash
  git clone <your-repository-url>
  cd project-chimera
  ```

- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Configure API Keys:
  - Create a file named `.env` in the project root directory (`project-chimera/`).
  - Add your API keys to this file:

    ```bash
    # .env
    GEMINI_API_KEY=your_gemini_api_key_here
    SERPAPI_API_KEY=your_serpapi_key_here
    # Add other API keys if you integrate more services
    ```

  - IMPORTANT: Ensure `.env` is listed in your `.gitignore` file to prevent accidentally committing your keys.
- Run the application:

  ```bash
  python app.py
  ```

  The Gradio interface should be accessible locally (usually at `http://127.0.0.1:7860`).
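The key-loading step above could be handled by a small `src/chimera/config.py` along these lines — a sketch, not the project's actual module: it assumes the variable names from the `.env` example and a hypothetical `require` helper, using `python-dotenv` locally and falling back to plain environment variables (as injected by Spaces secrets):

```python
import os

try:
    # Loads .env into the process environment for local development;
    # on Hugging Face Spaces the secrets are already set as env vars.
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
SERPAPI_API_KEY = os.getenv("SERPAPI_API_KEY")

def require(name: str) -> str:
    """Return the named variable's value, or fail loudly at startup."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value
```

Failing fast on missing keys at import time gives a clearer error in the Space build logs than a cryptic 401 later.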
### 2. Deployment on Hugging Face Spaces
- Create a Hugging Face Account: If you don't have one, sign up at huggingface.co.
- Create a New Space:
  - Go to "Spaces" -> "Create new Space".
  - Give it a name (e.g., `project-chimera`).
  - Select "Gradio" as the Space SDK.
  - Choose hardware (CPU basic should be sufficient initially).
  - Create the Space.
- Upload Files:
  - Upload all project files (`app.py`, `requirements.txt`, the entire `src` directory, `.gitignore`, `README.md`) to your Space repository using Git or the web interface.
  - DO NOT upload your `.env` file.
- Set Repository Secrets:
  - In your Space settings, navigate to the "Repository secrets" section.
  - Add the following secrets:
    - `GEMINI_API_KEY`: Your Google Gemini API Key.
    - `SERPAPI_API_KEY`: Your SERP API Key.
    - (Add others if needed.)
  - These secrets will be securely injected as environment variables when your Space runs.
- Deploy: Hugging Face Spaces will automatically install the dependencies from `requirements.txt` and run `app.py`. Monitor the build logs for any issues. Once built, your application will be live at `https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME`.
## How it Works (Architecture Overview)
- User Input: The user enters a query into the Gradio interface.
- Orchestration: `app.py` passes the query to the `run_analysis` function in `src/chimera/core/orchestrator.py`.
- API Calls: The orchestrator determines which APIs to call based on the query (currently focused on SERP). It uses functions from `src/chimera/api_clients/` (e.g., `serp_client.py`) to fetch data asynchronously.
- Data Synthesis: Results from the APIs are collected and formatted by `src/chimera/utils/data_processing.py`.
- AI Analysis: A carefully crafted prompt, containing the original user query and the formatted data from the APIs, is sent to the Gemini API via `src/chimera/api_clients/gemini_client.py`.
- Response: Gemini generates an analysis based on the prompt.
- Output: The orchestrator returns the Gemini analysis to `app.py`, which displays it in the Gradio interface.
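The flow above can be sketched end to end with stubbed clients. The name `run_analysis` and the module roles come from the description; the stub bodies and the prompt wording are placeholders for the real `serp_client`, `data_processing`, and `gemini_client` code:

```python
import asyncio

async def fetch_serp_results(query: str) -> list[dict]:
    # Placeholder for src/chimera/api_clients/serp_client.py.
    return [{"title": "Example result", "snippet": f"Snippet about {query}"}]

def format_results(results: list[dict]) -> str:
    # Placeholder for src/chimera/utils/data_processing.py.
    return "\n".join(f"- {r['title']}: {r['snippet']}" for r in results)

async def generate_analysis(prompt: str) -> str:
    # Placeholder for src/chimera/api_clients/gemini_client.py.
    return f"Analysis based on a prompt of {len(prompt)} characters."

async def run_analysis(query: str) -> str:
    results = await fetch_serp_results(query)
    context = format_results(results)
    prompt = (
        f"User query: {query}\n\n"
        f"Recent search data:\n{context}\n\n"
        "Synthesize an analysis grounded in the data above."
    )
    return await generate_analysis(prompt)

if __name__ == "__main__":
    print(asyncio.run(run_analysis("vertical farming")))
```

Keeping fetch, format, and analyze as separate functions is what makes it easy to swap in additional API clients later without touching the orchestrator's shape.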
## Usage
- Navigate to the running application (local URL or Hugging Face Space URL).
- Enter your complex query or analysis request in the text box. Examples:
  - "Analyze recent news about quantum computing breakthroughs and their potential impact."
  - "What are the key challenges and opportunities mentioned in recent search results regarding vertical farming?"
  - "Summarize the latest developments concerning supply chain issues in the semiconductor industry based on recent news."
- Click the "Analyze" button.
- Wait for the system to fetch data and generate the analysis. The result will appear in the "Chimera Analysis" section.
## Future Enhancements / Roadmap
- Integrate More APIs: Add clients for weather, financial markets, scientific databases (NASA, PubMed), geospatial data, etc.
- Sophisticated Query Interpretation: Use an LLM pre-processing step to better understand the user query and determine which APIs are relevant.
- Caching: Implement caching for API results and potentially Gemini analyses to reduce costs and latency for repeated queries.
- User Feedback Loop: Allow users to rate the quality of the analysis to help refine prompts.
- Enhanced Data Processing: Improve the formatting and synthesis of data before sending it to Gemini.
- Visualization: Integrate basic charting or mapping if relevant APIs are added.
- Error Handling: Make error reporting more granular and user-friendly.
- Alerting: Add functionality to monitor specific topics and trigger alerts based on significant findings.
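For the caching item in the roadmap above, a minimal in-memory TTL cache for API results could look like the following — illustrative only; a deployed version might prefer `functools.lru_cache` or an external store:

```python
import time

class TTLCache:
    """Tiny in-memory cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # Expired: drop the entry, report a miss.
            return None
        return value

    def set(self, key: str, value) -> None:
        self._store[key] = (time.monotonic(), value)

cache = TTLCache(ttl_seconds=300)
cache.set("serp:quantum computing", ["result 1", "result 2"])
print(cache.get("serp:quantum computing"))
```

Keying on something like `"serp:<query>"` would let repeated queries within the TTL window skip both the SERP call and its cost.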
## Contributing
Contributions are welcome! Please feel free to open an issue to report bugs or suggest features, or submit a pull request with improvements.
## License
Please add an appropriate open-source license file (e.g., `LICENSE`) to the repository.
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference