---
title: E2B API Proxy
emoji: 🚀
colorFrom: blue
colorTo: indigo
sdk: docker
pinned: false
app_port: 7860
---

# E2B API Proxy with FastAPI

This project is a FastAPI implementation of an API proxy for E2B (fragments.e2b.dev). It exposes an OpenAI-compatible interface to models from several providers, including OpenAI, Google, and Anthropic.

## Description

The E2B API Proxy acts as middleware between your application and the E2B service, providing:

- Proxying of API requests to the E2B service
- Support for multiple AI models (OpenAI, Google Vertex AI, Anthropic)
- Streaming and non-streaming response handling
- CORS support for cross-origin requests
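
As a rough sketch of the forwarding idea, the proxy boils down to a FastAPI route that replays the incoming JSON body against `API_BASE_URL`. The snippet below is illustrative only: the route structure, the `httpx` client, and the upstream path are assumptions, not the actual contents of `app.py`.

```python
# Illustrative sketch only; the real app.py may differ in structure and naming.
import os

import httpx
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware

API_BASE_URL = os.getenv("API_BASE_URL", "https://fragments.e2b.dev")

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],   # CORS support for cross-origin requests
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.post("/hf/v1/chat/completions")
async def chat_completions(request: Request):
    # Relay the client's JSON body to the E2B service and return its reply.
    # "/api/chat" is a placeholder upstream path, not a documented E2B route.
    payload = await request.json()
    async with httpx.AsyncClient(base_url=API_BASE_URL, timeout=120) as client:
        upstream = await client.post("/api/chat", json=payload)
        return upstream.json()
```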

## Deployment on Hugging Face Spaces

This application is ready to be deployed on Hugging Face Spaces:

1. Create a new Space on Hugging Face with Docker SDK
2. Upload these files to your Space
3. Set the environment variables in the Space settings (the app reads them roughly as sketched below):
   - `API_KEY`: Your API key for authentication (default: `sk-123456`)
   - `API_BASE_URL`: The base URL for the E2B service (default: `https://fragments.e2b.dev`)
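
A minimal sketch of how these variables might be read and enforced, assuming a simple Bearer-token check (the actual validation logic in `app.py` may differ):

```python
import os
from typing import Optional

from fastapi import Header, HTTPException

# Defaults mirror the values documented above; override them in the
# Space settings (or your shell) for a real deployment.
API_KEY = os.getenv("API_KEY", "sk-123456")
API_BASE_URL = os.getenv("API_BASE_URL", "https://fragments.e2b.dev")

async def verify_api_key(authorization: Optional[str] = Header(None)) -> None:
    # Chat requests are expected to carry "Authorization: Bearer <API_KEY>".
    if authorization != f"Bearer {API_KEY}":
        raise HTTPException(status_code=401, detail="Invalid API key")
```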

## API Endpoints

- `GET /hf/v1/models`: List available models
- `POST /hf/v1/chat/completions`: Send chat completion requests
- `GET /`: Root endpoint (health check)

## Configuration

The main configuration is in the `app.py` file. You can customize the following (see the sketch after this list):

- API key for authentication
- Base URL for the E2B service
- Model configurations
- Default headers for requests
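
For orientation, the model table and default headers might be laid out along these lines. All names and fields here are hypothetical placeholders; consult `app.py` for the real structure:

```python
# Hypothetical layout, not the literal contents of app.py.
MODELS = {
    "gpt-4o": {"provider": "openai"},
    "gemini-1.5-pro": {"provider": "google"},
    "claude-3-5-sonnet-latest": {"provider": "anthropic"},
}

DEFAULT_HEADERS = {
    "Content-Type": "application/json",
    # Any extra headers required by fragments.e2b.dev would go here.
}
```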

## Local Development

### Prerequisites

- Docker and Docker Compose

### Running the Application Locally

1. Clone this repository
2. Update the API key in `docker-compose.yml` (replace `sk-123456` with your actual key)
3. Build and start the container:

```bash
docker-compose up -d
```

4. The API will be available at http://localhost:7860

### Testing the API

You can test the API using curl:

```bash
# Get available models
curl http://localhost:7860/hf/v1/models

# Send a chat completion request
curl -X POST http://localhost:7860/hf/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-123456" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'
```
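
Because the routes mirror the OpenAI API shape, an OpenAI-compatible client should also work once you point it at the proxy. The following uses the official `openai` Python SDK and assumes the proxy is wire-compatible for both response modes, including streaming:

```python
from openai import OpenAI

# Point the SDK at the proxy instead of api.openai.com.
client = OpenAI(base_url="http://localhost:7860/hf/v1", api_key="sk-123456")

# Non-streaming request
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(resp.choices[0].message.content)

# Streaming request
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a joke."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

Overriding `base_url` like this lets existing OpenAI tooling talk to the proxy without any other code changes.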

## Supported Models

The API supports various models from different providers:

- **OpenAI**: o1-preview, o3-mini, gpt-4o, gpt-4.5-preview, gpt-4-turbo
- **Google**: gemini-1.5-pro, gemini-2.5-pro-exp-03-25, gemini-exp-1121, gemini-2.0-flash-exp
- **Anthropic**: claude-3-5-sonnet-latest, claude-3-7-sonnet-latest, claude-3-5-haiku-latest

## License

This project is open source and available under the MIT License.