---
title: Medical Chatbot API
emoji: πŸ₯
colorFrom: blue
colorTo: green
sdk: docker
sdk_version: "1.0"
app_file: app/main.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# Medical Chatbot API

A medical chatbot API built with FastAPI and powered by LangChain and custom models.

## Configuration

### Environment Variables

Create a `.env` file with the following variables:

```env
QDRANT_URL=your_qdrant_url
QDRANT_API_KEY=your_qdrant_api_key
HF_TOKEN=your_huggingface_token
```
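For reference, here is a minimal sketch of how these variables might be consumed at startup (assuming `python-dotenv` and `qdrant-client`; the actual wiring lives in the app code):

```python
import os

from dotenv import load_dotenv
from qdrant_client import QdrantClient

load_dotenv()  # read QDRANT_URL, QDRANT_API_KEY, HF_TOKEN from .env

# Connect to the Qdrant instance used for vector storage
qdrant = QdrantClient(
    url=os.environ["QDRANT_URL"],
    api_key=os.environ["QDRANT_API_KEY"],
)

# HF_TOKEN is typically needed when pulling gated or private models from the Hub
hf_token = os.environ["HF_TOKEN"]
```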

### Model Configuration

The application expects the following directory layout for model weights:
```
models/
    β”œβ”€β”€ embeddings/
    └── llm/
```
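How these two directories are loaded is up to the app code; below is a hypothetical sketch, assuming `models/embeddings` holds a sentence-transformers model and `models/llm` holds an Unsloth-compatible checkpoint (the sequence length and quantization settings are assumptions, not the app's actual configuration):

```python
from langchain_huggingface import HuggingFaceEmbeddings
from unsloth import FastLanguageModel

# Embedding model backing the Qdrant vector store
embeddings = HuggingFaceEmbeddings(model_name="models/embeddings")

# Chat model loaded through Unsloth's optimized loader
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="models/llm",
    max_seq_length=2048,  # assumed context window
    load_in_4bit=True,    # assumed quantized loading
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode
```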

### Dependencies

Key dependencies include:
- The LangChain ecosystem for LLM orchestration
- Qdrant for vector storage
- Unsloth for optimized model loading
- FastAPI for the web framework

## Development Setup

1. Install dependencies:
```bash
pip install -r requirements.txt
```

2. Run the development server:
```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 7860
```

## Docker Deployment

Build and run with Docker:
```bash
docker build -t medical-chatbot .
docker run -p 7860:7860 --env-file .env medical-chatbot
```

## API Endpoints

- `GET /health`: Health check endpoint
- `POST /chat`: Chat endpoint; expects a JSON body such as:
  ```json
  {
    "question": "What are the symptoms of diabetes?",
    "context": "Optional medical context"
  }
  ```
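
As a quick smoke test, both endpoints can be exercised from Python (a sketch assuming the local dev server above; the response schema depends on the app, so it is printed as-is):

```python
import requests

BASE = "http://localhost:7860"  # dev server from the setup steps above

# Health check
print(requests.get(f"{BASE}/health").json())

# Chat request using the body shown above
resp = requests.post(
    f"{BASE}/chat",
    json={
        "question": "What are the symptoms of diabetes?",
        "context": "Optional medical context",
    },
)
resp.raise_for_status()
print(resp.json())
```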

## Production Deployment

For Hugging Face Spaces:
1. Set the environment variables above as repository secrets in the Space settings
2. Deploy using the provided Dockerfile
3. Ensure the model weights are available under `models/` as described in Model Configuration