---
title: Multi Agent Chat
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.33.1
app_file: app.py
pinned: true
license: apache-2.0
tags:
  - Agents-MCP-Hackathon
  - mcp-server-track
  - agent-demo-track
short_description: A multi-agent chat application and Gradio MCP Server
---

# Multi-Agent Chat


This project is a multi-channel chat application where human users can interact with each other and with an intelligent, autonomous AI agent powered by Google's Gemini. The application is not just a chatbot; it's a fully-fledged multi-agent system designed to be both a compelling agentic demo and a functional MCP Server.

## 🎥 Video Demo

https://www.loom.com/share/f5673ab2b9e644b782b539afd6f06a64?sid=27578356-aa75-42e5-b786-86337c9b937e#Activity

## ✨ Core Features & Agentic Capabilities (Track 3)

This application showcases a powerful and creative use of AI agents in a collaborative environment.

### 1. Autonomous & Proactive AI Agent (Gemini)

The core of the application is an AI agent named Gemini with a distinct personality and behavior set. Unlike passive chatbots, this agent:

- **Listens Actively:** It continuously processes the conversation context.
- **Decides Autonomously:** It uses a "Two-Pass" reasoning architecture: a fast, logical Triage Agent first decides whether participation adds value, understanding nuances like typos ("Gmni") or implicit references ("what about you?").
- **Acts Contextually:** If the decision is to act, a creative Actor Agent formulates a human-like, contextual response that respects its persona (no meta-comments, no invented personal experiences).
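The two-pass flow above can be sketched as follows. This is a minimal illustration: a keyword heuristic stands in for the triage LLM call, and the function names are hypothetical, not the app's actual identifiers.

```python
def should_respond(history):
    """Pass 1 (Triage): a fast, cheap check decides whether to participate.

    The real app uses an LLM here; a keyword heuristic stands in, covering
    the typo ("Gmni") and implicit-reference cues mentioned above.
    """
    last = history[-1].lower()
    return any(cue in last for cue in ("gemini", "gmni", "what about you"))


def respond(history):
    """Pass 2 (Actor): runs only if triage says participation is valuable."""
    if not should_respond(history):
        return None  # stay silent
    return f"(in-persona, contextual reply to: {history[-1]!r})"
```

Separating the cheap "should I speak?" decision from the expensive "what do I say?" generation is what keeps the agent proactive without replying to every message.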

### 2. Multi-Agent System (MAS)

The application is a true multi-agent environment where different agents coexist and interact:

- **Human Agents:** Users like "Lucy" and "Eliseu" who drive the conversation.
- **Gemini Participant Agent:** The main AI that enriches the discussion.
- **Specialized Tool Agents:**
  - A **Moderation Agent** that acts as a gatekeeper, filtering messages for safety before they are processed.
  - A **Summarization Agent** that can be invoked to provide a factual, "who-said-what" report of the conversation.
  - An **Opinion Agent** that analyzes the social dynamics and sentiment of the chat, providing a high-level, emotional takeaway.

### 3. Dynamic & Persistent Environment

- **Multi-Channel Chat:** Users can join different, persistent chat channels (e.g., `#general`, `#dev`).
- **Session Management:** The system handles user logins, ensures unique usernames within a channel (by appending numbers, e.g., `Lucy_2`), and announces when users join or leave, creating a realistic chat experience.
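The username-uniqueness rule described above can be sketched like this (a hedged illustration; `unique_username` is not the app's actual function name):

```python
def unique_username(requested, active):
    """Return `requested` if free, else append the first free numeric
    suffix, e.g. Lucy -> Lucy_2 -> Lucy_3, matching the behavior above."""
    if requested not in active:
        return requested
    n = 2
    while f"{requested}_{n}" in active:
        n += 1
    return f"{requested}_{n}"
```

Starting the suffix at 2 matches the `Lucy_2` example: the original `Lucy` keeps her bare name, and the second arrival becomes `Lucy_2`.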

πŸ› οΈ MCP Server / Tool Capabilities (Track 1)

This Gradio application is fully compliant with the Model Context Protocol (MCP), acting as a server that exposes its core functionalities as tools for other agents and applications.

### Exposed Tools

A client connecting to this Space's MCP endpoint will discover the following tools:

1. `login_user(channel: str, username: str) -> Tuple[str, str]`
   - **Description:** Logs a user into a specific chat channel. Handles username uniqueness and returns the final username and channel.
   - **Use Case:** An external orchestrator agent could use this to programmatically add a bot or user to a conversation.
2. `exit_chat(channel: str, username: str)`
   - **Description:** Logs a user out of a channel, removing them from the active user list.
   - **Use Case:** Allows for clean session management by external clients.
3. `send_message(channel: str, username: str, message: str) -> List[Dict]`
   - **Description:** The primary interaction tool. It sends a message from a user to a channel, triggers the full AI agent logic (moderation, triage, response), and returns the complete, unformatted conversation history.
   - **Use Case:** Allows an external agent to participate fully in the chat, just like a human user.
4. `get_summary(channel: str, chat_history: List[Dict]) -> List[Dict]`
   - **Description:** Invokes the Summarization Agent to analyze the provided history and generate a factual summary.
   - **Use Case:** An external agent could use this to get up to speed on a long-running conversation without processing the entire transcript.
5. `get_opinion(channel: str, chat_history: List[Dict]) -> List[Dict]`
   - **Description:** Invokes the Opinion Agent to analyze the conversation's social dynamics.
   - **Use Case:** A monitoring agent could use this tool to gauge the health or sentiment of a community conversation.
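For the history-consuming tools above, the `List[Dict]` payload presumably follows Gradio's "messages" chatbot format; the exact keys used below (`role`, `content`) are an inference from that convention, not something this README pins down:

```python
# Assumed shape of the chat_history payload passed to get_summary /
# get_opinion; the role/content keys follow Gradio's "messages" format
# and are an assumption, not documented here.
chat_history = [
    {"role": "user", "content": "Lucy: anyone tried the new endpoint?"},
    {"role": "assistant", "content": "Gemini: yes, responses are faster now."},
]


def as_transcript(history):
    """Flatten a messages-format history into one plain-text transcript,
    the kind of input a summarizer or opinion agent would consume."""
    return "\n".join(m["content"] for m in history)
```

A client calling these tools would build and pass a list of this shape rather than a raw text transcript.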

## 🚀 Future Work & Potential Improvements

This project serves as a robust foundation, but there are many exciting avenues for future development:

- **Enhanced Session Control:** Implement more robust session management, e.g. handling reconnects and expiring idle sessions.
- **Streaming Responses:** Implement true streaming for Gemini responses (`stream=True` in the API call) and handle the streamed chunks in the Gradio UI, so replies appear token by token and feel more immediate and interactive.
- **WebSockets for Real-Time UI:** Replace the `gr.Timer` polling mechanism with a full WebSocket implementation, providing instantaneous updates to all clients and eliminating the need for a refresh loop.
- **Dynamic Tool Creation:** Allow users to define new "tool agents" on the fly by providing a prompt and a name, further expanding the MCP server's capabilities.
- **Persistent Storage:** Integrate a database (such as SQLite or a vector database) to store chat histories permanently, so conversations survive app restarts.
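The streaming idea above would look roughly like this in a Gradio callback: the UI re-renders each time the generator yields a progressively longer string. The chunk source here is a plain list standing in for a `stream=True` Gemini call, so this is a sketch of the pattern, not the app's code:

```python
def stream_response(chunks):
    """Yield the accumulated text after each chunk, the way a Gradio
    generator callback produces token-by-token UI updates."""
    text = ""
    for chunk in chunks:
        text += chunk
        yield text  # each yield is one progressively longer UI update

# Example: list(stream_response(["Hel", "lo", "!"]))
# → ["Hel", "Hello", "Hello!"]
```

The same accumulate-and-yield shape works whether the chunks come from a list, a WebSocket, or a streaming model API.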

πŸ› οΈ How to Run Locally

1. **Clone the repository:**

   ```bash
   git clone https://huggingface.co/spaces/Agents-MCP-Hackathon/multi-agent-chat
   cd multi-agent-chat
   ```

2. **Create a virtual environment:**

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```

3. **Install dependencies:**

   ```bash
   pip install -r requirements.txt
   ```

4. **Set up your environment variables:**
   - Create a file named `.env`.
   - Add your Google API key to it: `GOOGLE_API_KEY="your_api_key_here"`

5. **Run the application:**

   ```bash
   python app.py
   ```
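Step 4 can be enforced with a small startup guard so a missing key fails fast with a useful message. This is a hedged sketch; `require_env` is illustrative and not code from `app.py`:

```python
import os


def require_env(name, env=None):
    """Return the named variable's value, or fail with a clear setup hint.

    `env` defaults to os.environ; it is a parameter here only so the
    helper is easy to exercise without touching the real environment.
    """
    env = os.environ if env is None else env
    value = env.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value
```

Checking the key once at startup beats a cryptic API error on the first Gemini call.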