---
title: RAG Generation Service
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
---
# RAG Generation Service

This is a Retrieval-Augmented Generation (RAG) service that answers questions based on provided context.
## How to use
- Enter your question in the "Query" field
- Paste relevant documents or context in the "Context" field
- Click submit to get an AI-generated answer based on your context
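Under the hood, a RAG service combines the two fields into a single prompt before sending it to the model. A minimal sketch of that step (the template and the `build_prompt` helper are illustrative, not this Space's actual code):

```python
# Hypothetical prompt template -- the Space's real template may differ.
PROMPT_TEMPLATE = (
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\n"
    "Question: {query}\n\n"
    "Answer:"
)

def build_prompt(query: str, context: str) -> str:
    """Combine the user's query and pasted context into one model prompt."""
    return PROMPT_TEMPLATE.format(context=context.strip(), query=query.strip())
```

Grounding the answer in the pasted context (rather than the model's parametric knowledge alone) is what makes this retrieval-augmented rather than plain generation.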
## Features
- Uses state-of-the-art language models via Hugging Face Inference API
- Supports multiple model providers
- Clean, intuitive interface
- Example queries to get started
## Configuration
This Space requires an `HF_TOKEN` environment variable set to your Hugging Face access token.
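For example, the service might read the token at startup and fail fast with a clear message if it is missing, rather than surfacing a 401 from the Inference API later. A sketch (`get_token` is a hypothetical helper, not necessarily what the Space does):

```python
import os

def get_token() -> str:
    """Read the Hugging Face access token, failing fast if it is unset."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError("HF_TOKEN is not set; add it as a Space secret.")
    return token
```

On Hugging Face Spaces, secrets configured in the Space settings are exposed to the container as environment variables, so this pattern works unchanged in local and deployed runs.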
## Model Support
By default, this Space uses `meta-llama/Meta-Llama-3-8B-Instruct`, but you can configure a different model via environment variables.
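As an illustration, the model override and the call to the Hugging Face Inference API could look like the sketch below. The `MODEL_ID` variable name is an assumption (check the Space's code for the exact name it reads); the endpoint URL and request shape follow the public Inference API.

```python
import json
import os
import urllib.request

# Fallback model, per this Space's default.
DEFAULT_MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"

def resolve_model() -> str:
    """Pick the model from the environment, falling back to the default.

    MODEL_ID is a hypothetical variable name used for illustration.
    """
    return os.environ.get("MODEL_ID", DEFAULT_MODEL)

def generate(prompt: str, token: str) -> str:
    """Send the prompt to the Inference API and return the generated text."""
    url = f"https://api-inference.huggingface.co/models/{resolve_model()}"
    req = urllib.request.Request(
        url,
        data=json.dumps({"inputs": prompt}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)[0]["generated_text"]
```

Setting `MODEL_ID` (or whichever variable the Space actually reads) in the Space settings switches providers without a code change.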