---
title: Chatfed Generation Service
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
---

# Generation Module

This is an LLM-based generation service designed to be deployed as a modular component of a broader retrieval-augmented generation (RAG) system. The service runs in a Docker container and exposes a Gradio UI on port 7860 as well as an MCP endpoint.

## Configuration

1. The module requires an API key (set as an environment variable) for an inference provider. Multiple inference providers are supported; set the variable that matches your provider (see the example run command after this list):
   - OpenAI: `OPENAI_API_KEY`
   - Anthropic: `ANTHROPIC_API_KEY`
   - Cohere: `COHERE_API_KEY`
   - Hugging Face: `HF_TOKEN`
2. Inference provider and model settings are configured via `params.cfg` (an illustrative sketch follows below).
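
Since the service runs in Docker and listens on port 7860, the API key is typically injected when the container starts. A minimal sketch, assuming the image is built from this repository and tagged `chatfed-generation` (a hypothetical name), with OpenAI as the provider:

```bash
# Build the image from this repository; the tag is illustrative.
docker build -t chatfed-generation .

# Run the service, publishing the Gradio/MCP port and passing the API key.
docker run -p 7860:7860 -e OPENAI_API_KEY="sk-..." chatfed-generation
```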
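
As a rough illustration of the kind of settings a configparser-style `params.cfg` typically holds (the section and key names below are assumptions, not the module's actual schema):

```ini
; Illustrative only -- the real section and key names may differ.
[generation]
provider = openai
model = gpt-4o
temperature = 0.2
```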

## MCP Endpoint

### Available Tools

#### `rag_generate`

Generates an answer to a query using provided context (RAG). The tool takes a user query and relevant context, then uses a language model to produce a comprehensive answer grounded in the provided information.

**Input Schema:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `query` | string | The user's question or query |
| `context` | string | The relevant context/documents to use for answering |

**Returns:** The generated answer based on the query and context.

**Example Usage:**

```json
{
  "query": "What are the benefits of renewable energy?",
  "context": "Documents and information about renewable energy sources..."
}
```

This tool uses an LLM to generate an answer from the most relevant information in the context, guided by the input query.
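
As a sketch of how a client could call this tool programmatically, the snippet below uses the MCP Python SDK over SSE. The route `/gradio_api/mcp/sse` is Gradio's default MCP path and is an assumption here; adjust the URL to match the actual deployment.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

# Gradio's default MCP route -- an assumption; adjust for your deployment.
MCP_URL = "http://localhost:7860/gradio_api/mcp/sse"

async def main() -> None:
    async with sse_client(MCP_URL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Invoke rag_generate with the input schema documented above.
            result = await session.call_tool(
                "rag_generate",
                {
                    "query": "What are the benefits of renewable energy?",
                    "context": "Documents and information about renewable energy sources...",
                },
            )
            # The CallToolResult carries the generated answer in its content list.
            print(result.content)

asyncio.run(main())
```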