Part 1: Building the MCP Server with Gradio
1. Project Context: The Server's Role
Our complete project will consist of:
- Server Side (This Document): A Gradio application exposing a sentiment analysis tool via HTTP and MCP.
- Client Side: Different client implementations (e.g., HuggingFace.js, Python with `smolagents`) that will consume the tool from our server via MCP.
- Deployment: Deploying the server to a platform like Hugging Face Spaces and configuring clients to use the deployed instance.
This document focuses exclusively on the Server Side, which involves:
- Setting up a Python environment.
- Using the Gradio library to create both a web interface and an MCP server.
- Implementing a sentiment analysis function (using TextBlob as an example tool) that our server will expose.
- Understanding how Gradio allows this tool to be accessible both through a standard HTTP web interface and via the MCP protocol for programmatic client interaction.
2. Core Technologies for the Server
a. Python
The server is implemented in Python, a language well-suited for web development and data processing.
b. Gradio
Gradio is a key Python library for this part. We use it to:
- Quickly create an interactive web UI for our sentiment analysis function.
- Automatically expose the same Python function as a tool over the Model Context Protocol (MCP) by enabling its built-in MCP server functionality.
c. Model Context Protocol (MCP)
MCP (Model Context Protocol) is the communication standard that allows our server's tools to be understood and used by various clients, including AI models or agents. Gradio's MCP integration handles the complexities of exposing our Python function according to this protocol.
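To make the protocol concrete: MCP clients and servers exchange JSON-RPC 2.0 messages, and a tool invocation uses the `tools/call` method. The sketch below hand-builds what such a request might look like for the tool this document develops; it is an illustrative approximation of the wire format, not a payload captured from a real client.

```python
import json

# A hand-written sketch of an MCP "tools/call" JSON-RPC request.
# The tool name and argument key mirror the server built later in this document.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "sentiment_analysis",
        "arguments": {"text": "Gradio makes MCP servers easy!"},
    },
}

print(json.dumps(request, indent=2))
```

Gradio handles parsing and answering messages like this for you; you never write this plumbing yourself.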
d. TextBlob (for the Example Tool)
TextBlob is a Python library for text processing. We use it here to implement our example `sentiment_analysis` function. While TextBlob provides the logic for the tool, the focus is on how Gradio exposes any such function as an MCP tool.
3. Server Implementation (`app.py`)
The `app.py` script contains all the logic for our server.
```python
import gradio as gr
from textblob import TextBlob

# The Python function to be exposed as an MCP tool
def sentiment_analysis(text: str) -> dict:
    """
    Analyze the sentiment of the given text.
    (This docstring is used by Gradio for the MCP tool description)

    Args:
        text (str): The text to analyze (used for MCP tool input schema)

    Returns:
        dict: A dictionary containing polarity, subjectivity, and assessment (used for MCP tool output schema)
    """
    blob = TextBlob(text)
    sentiment = blob.sentiment
    return {
        "polarity": round(sentiment.polarity, 2),
        "subjectivity": round(sentiment.subjectivity, 2),
        "assessment": "positive"
        if sentiment.polarity > 0
        else "negative"
        if sentiment.polarity < 0
        else "neutral",
    }

# Create the Gradio interface
# This sets up both the web UI and prepares for MCP tool exposure
demo = gr.Interface(
    fn=sentiment_analysis,  # The function to wrap
    inputs=gr.Textbox(placeholder="Enter text to analyze..."),  # Defines UI input and helps MCP schema
    outputs=gr.JSON(),  # Defines UI output and MCP tool output type
    title="Sentiment Analysis Server (MCP Enabled)",
    description="Exposes a sentiment analysis function via Web UI and MCP.",
)

# Launch the Gradio app
if __name__ == "__main__":
    # mcp_server=True enables the MCP server endpoint alongside the web UI
    demo.launch(mcp_server=True)
```
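The nested conditional expression in the return statement is easy to misread. Pulled out as a standalone helper (a sketch for illustration only, not part of `app.py`), it behaves like this:

```python
def assess(polarity: float) -> str:
    # Same mapping as the conditional expression in sentiment_analysis:
    # above zero is positive, below zero is negative, exactly zero is neutral.
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

print(assess(0.8), assess(-0.3), assess(0.0))  # positive negative neutral
```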
Key Aspects of the Server Code:
- `sentiment_analysis` function: This is our core logic that we want to offer as a tool. Its docstring and type hints are crucial for Gradio to automatically generate the MCP tool's schema (name, description, input/output parameters).
- `gr.Interface(...)`: This Gradio command is versatile:
  - It creates a web-based UI allowing direct interaction with the `sentiment_analysis` function.
  - When `mcp_server=True` is used in `launch()`, it also sets up the necessary endpoints and schema for the function to be discovered and called as an MCP tool.
- `demo.launch(mcp_server=True)`: This is the critical step that activates the MCP server capabilities of Gradio. The server will then listen for MCP requests on a specific path, typically `/gradio_api/mcp/sse`.
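The metadata Gradio relies on here is ordinary Python introspection data. A small sketch (independent of Gradio, using only the standard library) shows what the docstring and type hints expose to any framework that wants to build a tool schema:

```python
import inspect
import typing

def sentiment_analysis(text: str) -> dict:
    """Analyze the sentiment of the given text."""
    return {}

# The raw ingredients a framework can read to build a tool schema:
hints = typing.get_type_hints(sentiment_analysis)
description = inspect.getdoc(sentiment_analysis)

print(hints)        # {'text': <class 'str'>, 'return': <class 'dict'>}
print(description)  # Analyze the sentiment of the given text.
```

This is why omitting type hints or the docstring degrades the generated schema: there is simply less information for Gradio to read.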
4. Running the Server
To start the server:
- Ensure you have PDM installed in your environment.
- Add the required dependencies using PDM (the `mcp` extra is needed for `mcp_server=True`): `pdm add "gradio[mcp]" textblob`
- Save the code above as `app.py`.
- Run the script using PDM: `pdm run python app.py`

You should see output indicating:
- The local URL for the web interface (e.g., `http://127.0.0.1:7860`).
- The URL for the MCP server (e.g., `http://127.0.0.1:7860/gradio_api/mcp/sse`).
At this point, the server is running and exposing the `sentiment_analysis` tool through two protocols:
- HTTP: For human users via the web browser.
- MCP: For programmatic clients (which we'll build in Part 2).
5. Server-Side MCP Exposure with Gradio
Gradio significantly simplifies exposing a Python function as an MCP tool:
- Tool Definition: The Python function itself (`sentiment_analysis`) is the tool.
- Schema Generation: Gradio automatically generates the MCP tool schema:
  - Name: Derived from the Python function name.
  - Description: Taken from the function's docstring.
  - Input Schema: Inferred from the `inputs` argument of `gr.Interface` and the function's type hints and `Args` section in the docstring.
  - Output Schema: Inferred from the `outputs` argument of `gr.Interface` and the function's return type hint and `Returns` section in the docstring.
A Note on "Automatic" Schema Generation: It's important to clarify what "automatic" means here. You, the developer, manually write your Python function (like `sentiment_analysis`) with its descriptive docstring and type hints. You also manually define the `gr.Interface` and specify its `inputs` and `outputs` components.

Gradio's "automatic" role is to interpret all this information you've provided. It takes your Python function's definition, your docstring, your type hints, and your `gr.Interface` configuration, and from these it constructs the formal, standardized JSON schema that the Model Context Protocol (MCP) requires. You don't have to write this detailed MCP-specific schema yourself; Gradio generates it for you, acting as a bridge between your Python code and the MCP standard.
- MCP Endpoint: Gradio hosts the MCP service at a specific path (usually `/gradio_api/mcp/sse` relative to the base URL), handling the JSON-RPC communication required by MCP.
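Putting these pieces together, the generated tool schema might look roughly like the following. This is a hypothetical sketch of the MCP tool-definition shape (`name`, `description`, `inputSchema`), not Gradio's exact output; inspect the real schema at `/gradio_api/mcp/schema` on your running server.

```json
{
  "name": "sentiment_analysis",
  "description": "Analyze the sentiment of the given text.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "text": {
        "type": "string",
        "description": "The text to analyze"
      }
    },
    "required": ["text"]
  }
}
```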
6. Troubleshooting Tips
Type Hints and Docstrings:
- Always provide type hints for your function parameters and return values.
- Include a docstring with an "Args:" block for each parameter.
- This helps Gradio generate accurate MCP tool schemas.
String Inputs:
- When in doubt, accept input arguments as `str`.
- Convert them to the desired type inside the function.
- This provides better compatibility with MCP clients.
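Following that advice, a tool that needs numeric input can take `str` parameters and convert internally. A minimal sketch (the function and its parameters are hypothetical, not part of this server):

```python
def scale_polarity(polarity: str, factor: str = "100") -> dict:
    """Accept str arguments for broad MCP client compatibility,
    then convert to the numeric types the logic actually needs."""
    value = float(polarity)  # raises ValueError on malformed input
    mult = int(factor)
    return {"scaled": round(value * mult, 2)}

print(scale_polarity("0.37"))  # {'scaled': 37.0}
```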
SSE Support:
- Some MCP clients don't support SSE-based MCP servers.
- In those cases, use `mcp-remote` as a bridge:

```json
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:7860/gradio_api/mcp/sse"
      ]
    }
  }
}
```
Connection Issues:
- If you encounter connection problems, try restarting both the client and server.
- Check that the server is running and accessible.
- Verify that the MCP schema is available at the expected URL (e.g., `http://localhost:7860/gradio_api/mcp/schema`).
7. Deploying to Hugging Face Spaces
To make your server available to others, you can deploy it to Hugging Face Spaces:
Create a new Space on Hugging Face:
- Go to huggingface.co/spaces
- Click "Create new Space"
- Choose "Gradio" as the SDK
- Name your space (e.g., "mcp-sentiment")
Create a `requirements.txt` file:

```
gradio[mcp]
textblob
```
Push your code to the Space:
```shell
git init
git add app.py requirements.txt
git commit -m "Initial commit"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/mcp-sentiment
git push -u origin main
```
Your MCP server will now be available at:
`https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse`
For more details, see the Hugging Face MCP Course documentation.
8. Conclusion for Part 1
We have successfully built the server-side component of our application. This server uses Gradio to:
- Provide a web UI for direct interaction with our `sentiment_analysis` function.
- Expose the `sentiment_analysis` function as an MCP tool, complete with an automatically generated schema.
This server is now ready to be consumed by various MCP clients, which will be the focus of Part 2 of this project. Part 3 will then cover deploying this server.