---
title: Responses.js
emoji: 💻
colorFrom: red
colorTo: red
sdk: docker
pinned: false
license: mit
short_description: Check out https://github.com/huggingface/responses.js
app_port: 3000
---
# responses.js
A lightweight Express.js server that implements OpenAI's Responses API, built on top of Chat Completions and powered by Hugging Face Inference Providers.
## ✨ Features
- **Responses API**: Partial implementation of [OpenAI's Responses API](https://platform.openai.com/docs/api-reference/responses), built on top of the Chat Completions API
- **Inference Providers**: Powered by Hugging Face Inference Providers
- **Streaming Support**: Support for streamed responses
- **Structured Output**: Support for structured responses constrained by a JSON Schema
- **Function Calling**: Tool and function calling capabilities
- **Multi-modal Input**: Text and image input support
- **Demo UI**: Interactive web interface for testing
Not implemented: remote function calling, MCP server, file upload, stateful API, etc.
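As a sketch of what multi-modal input looks like on the wire (field names follow OpenAI's Responses API specification; the model id and image URL below are placeholders, not defaults of this project):

```javascript
// Illustrative request body for a text + image input.
// The model id and image URL are placeholders.
const imageRequest = {
	model: "meta-llama/Llama-3.2-11B-Vision-Instruct",
	input: [
		{
			role: "user",
			content: [
				{ type: "input_text", text: "Describe this image in one sentence." },
				{ type: "input_image", image_url: "https://example.com/cat.png" },
			],
		},
	],
};

// Each content part declares its type, so text and images can be mixed freely.
console.log(imageRequest.input[0].content.map((c) => c.type).join(", "));
// → input_text, input_image
```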
## 🚀 Quick Start
### Prerequisites
- Node.js (v18 or higher)
- pnpm (recommended) or npm
- a Hugging Face token with inference permissions. Create one from your [user settings](https://huggingface.co/settings/tokens).
### Installation & Setup
```bash
# Clone the repository
git clone https://github.com/huggingface/responses.js.git
cd responses.js
# Install dependencies
pnpm install
# Start the development server
pnpm dev
```
The server will be available at `http://localhost:3000`.
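Once the server is up, any HTTP client can talk to it through the OpenAI-compatible `/v1` routes. A minimal sketch using Node 18+'s built-in `fetch` (the model id is a placeholder, and the request only fires when `HF_TOKEN` is set):

```javascript
// Minimal Responses API call against the local server.
// Assumptions: the server is running on localhost:3000 and HF_TOKEN holds
// a valid Hugging Face token; the model id below is only an example.
const body = {
	model: "meta-llama/Llama-3.1-8B-Instruct",
	input: "Write a one-sentence greeting.",
};

async function createResponse() {
	const res = await fetch("http://localhost:3000/v1/responses", {
		method: "POST",
		headers: {
			"Content-Type": "application/json",
			Authorization: `Bearer ${process.env.HF_TOKEN}`,
		},
		body: JSON.stringify(body),
	});
	return res.json();
}

// Only fire the request when a token is configured.
if (process.env.HF_TOKEN) {
	createResponse()
		.then((r) => console.log(r.output_text ?? r))
		.catch((e) => console.error(e.message));
}
```

The token travels in the `Authorization` header, mirroring how the demo passes `OPENAI_API_KEY` to the official client.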
### Running Examples
Explore the various capabilities with our example scripts located in the [./examples](./examples) folder:
```bash
# Basic text input
pnpm run example text
# Multi-turn conversations
pnpm run example multi_turn
# Text + image input
pnpm run example image
# Streaming responses
pnpm run example streaming
# Structured output
pnpm run example structured_output
pnpm run example structured_output_streaming
# Function calling
pnpm run example function
pnpm run example function_streaming
```
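Under the hood, the structured-output and function-calling examples boil down to request bodies like the following (field names follow OpenAI's Responses API; the model id, schema, and function are illustrative, not part of this project):

```javascript
// Structured output: constrain the model's reply to a JSON Schema.
// Model id and schema are examples only.
const structuredRequest = {
	model: "meta-llama/Llama-3.1-8B-Instruct",
	input: "Extract the city and country from: 'Paris is the capital of France.'",
	text: {
		format: {
			type: "json_schema",
			name: "location",
			schema: {
				type: "object",
				properties: { city: { type: "string" }, country: { type: "string" } },
				required: ["city", "country"],
				additionalProperties: false,
			},
		},
	},
};

// Function calling: declare a tool the model may invoke.
// The get_weather function is hypothetical.
const functionRequest = {
	model: "meta-llama/Llama-3.1-8B-Instruct",
	input: "What's the weather in Paris?",
	tools: [
		{
			type: "function",
			name: "get_weather",
			description: "Get the current weather for a city",
			parameters: {
				type: "object",
				properties: { city: { type: "string" } },
				required: ["city"],
			},
		},
	],
};

console.log(structuredRequest.text.format.type, functionRequest.tools[0].type);
// → json_schema function
```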
## 🧪 Testing
### Important Notes
- The server must be running (`pnpm dev`) on `http://localhost:3000`
- The `HF_TOKEN` environment variable must be set to your Hugging Face token
- Tests use real inference providers and will incur costs
- Tests are not run in CI due to billing requirements
### Running Tests
```bash
# Run all tests
pnpm test
# Run specific test patterns
pnpm test --grep "streaming"
pnpm test --grep "function"
pnpm test --grep "structured"
```
### Interactive Demo UI
Experience the API through our interactive web interface, adapted from the [openai-responses-starter-app](https://github.com/openai/openai-responses-starter-app).
[Watch the demo video](https://youtu.be/F-tAUnW-nd0)
#### Setup
1. Create a configuration file:
```bash
# Create demo/.env
cat > demo/.env << EOF
MODEL="cohere@CohereLabs/c4ai-command-a-03-2025"
OPENAI_BASE_URL=http://localhost:3000/v1
OPENAI_API_KEY=${HF_TOKEN:-<your-huggingface-token>}
EOF
```
2. Install demo dependencies:
```bash
pnpm demo:install
```
3. Launch the demo:
```bash
pnpm demo:dev
```
The demo will be available at `http://localhost:3001`.
## 🐳 Running with Docker
You can run the server in a production-ready container using Docker.
### Build the Docker image
```bash
docker build -t responses.js .
```
### Run the server
```bash
docker run -p 3000:3000 responses.js
```
The server will be available at `http://localhost:3000`.
## 📁 Project Structure
```
responses.js/
βββ demo/ # Interactive chat UI demo
βββ examples/ # Example scripts using openai-node client
βββ src/
β βββ index.ts # Application entry point
β βββ server.ts # Express app configuration and route definitions
β βββ routes/ # API route implementations
β βββ middleware/ # Middleware (validation, logging, etc.)
β βββ schemas/ # Zod validation schemas
βββ scripts/ # Utility and build scripts
βββ package.json # Package configuration and dependencies
βββ README.md # This file
```
## 🛣️ Done / TODOs
> **Note**: This project is in active development. The roadmap below reflects our current priorities and may change without notice.
- [x] OpenAI types integration for consistent output
- [x] Streaming mode support
- [x] Structured output capabilities
- [x] Function calling implementation
- [x] Repository migration to dedicated responses.js repo
- [x] Basic development tooling setup
- [x] Demo application with comprehensive instructions
- [x] Multi-turn conversation fixes for text messages + tool calls
- [x] Correctly return "usage" field
- [x] MCP support (non-streaming)
- [ ] MCP support (streaming)
- [ ] Tools execution (web search, file search, image generation, code interpreter)
- [ ] Background mode support
- [ ] Additional API routes (GET, DELETE, CANCEL, LIST responses)
- [ ] Reasoning capabilities
## 🤝 Contributing
We welcome contributions! Please feel free to submit issues, feature requests, or pull requests.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Based on OpenAI's [Responses API specification](https://platform.openai.com/docs/api-reference/responses)
- Built on top of [OpenAI's nodejs client](https://github.com/openai/openai-node)
- Demo UI adapted from [openai-responses-starter-app](https://github.com/openai/openai-responses-starter-app)
- Built on top of [Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers/index)