Chat Completion
Generate a response given a list of messages in a conversational context, supporting both conversational Language Models (LLMs) and conversational Vision-Language Models (VLMs). This is a subtask of text-generation and image-text-to-text.
Recommended models
Conversational Large Language Models (LLMs)
- google/gemma-2-2b-it: A text-generation model trained to follow instructions.
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B: Smaller variant of one of the most powerful models.
- meta-llama/Meta-Llama-3.1-8B-Instruct: Very powerful text generation model trained to follow instructions.
- microsoft/phi-4: Powerful text generation model by Microsoft.
- Qwen/Qwen2.5-7B-Instruct-1M: Strong conversational model that supports very long instructions.
- Qwen/Qwen2.5-Coder-32B-Instruct: Text generation model used to write code.
- deepseek-ai/DeepSeek-R1: Powerful, reasoning-focused open large language model.
Conversational Vision-Language Models (VLMs)
- Qwen/Qwen2.5-VL-7B-Instruct: Strong image-text-to-text model.
Explore all available models and find the one that suits you best here.
API Playground
For Chat Completion models, we provide an interactive UI Playground for easier testing:
- Quickly iterate on your prompts from the UI.
- Set and override system, assistant and user messages.
- Browse and select models currently available on the Inference API.
- Compare the output of two models side-by-side.
- Adjust request parameters from the UI.
- Easily switch between UI view and code snippets.
Access the Inference UI Playground and start exploring: https://huggingface.co/playground
Using the API
The API supports:
- Using the chat completion API compatible with the OpenAI SDK (see the sketch after this list).
- Using grammars, constraints, and tools.
- Streaming the output.
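Because the endpoint is compatible with the OpenAI SDK, you can point the official openai Python client at Hugging Face's router. A minimal sketch, assuming the base URL https://router.huggingface.co/v1 and a token with “Inference Providers” permission stored in the HF_TOKEN environment variable:

import os

from openai import OpenAI

# Sketch: OpenAI SDK pointed at Hugging Face's OpenAI-compatible router.
# The base URL and token handling are assumptions; adapt them to your setup.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print(completion.choices[0].message.content)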
Code snippet example for conversational LLMs
The example below uses Python with the huggingface_hub client and the featherless-ai provider; equivalent JavaScript and cURL snippets, requests- and openai-based clients, and additional providers are also available.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="featherless-ai",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
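Streaming is supported by the same client: pass stream=True and iterate over the returned chunks. A minimal sketch building on the snippet above; the delta.content access pattern matches the streaming response schema described later on this page:

import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="featherless-ai",
    api_key=os.environ["HF_TOKEN"],
)

# Sketch: print tokens as they are generated instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; content may be empty on some chunks.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")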
Code snippet example for conversational VLMs
The example below uses Python with the huggingface_hub client and the fireworks-ai provider; equivalent JavaScript and cURL snippets, requests- and openai-based clients, and additional providers are also available.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fireworks-ai",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe this image in one sentence."
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    }
                }
            ]
        }
    ],
)

print(completion.choices[0].message)
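Images do not have to be remote URLs: many providers also accept an inline base64-encoded data: URL in the image_url field. A hedged sketch, assuming a local file cat.png exists and that the chosen provider supports data URLs:

import base64
import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fireworks-ai",
    api_key=os.environ["HF_TOKEN"],
)

# Sketch: embed a local image as a base64 data URL instead of a public link.
# "cat.png" is a placeholder path; provider support for data URLs may vary.
with open("cat.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)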
API specification
Request
Headers
- authorization (string): Authentication header in the form 'Bearer: hf_****', where hf_**** is a personal user access token with “Inference Providers” permission. You can generate one from your settings page.
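To see the header in action without any SDK, the same request can be issued over plain HTTP. A minimal sketch using the requests library against the OpenAI-compatible router endpoint (the URL is an assumption; provider-specific endpoints may differ):

import os

import requests

# Sketch: raw HTTP call showing the authorization header described above.
response = requests.post(
    "https://router.huggingface.co/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
    json={
        "model": "meta-llama/Llama-3.3-70B-Instruct",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])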
Payload
- frequency_penalty (number): Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
- logprobs (boolean): Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
- max_tokens (integer): The maximum number of tokens that can be generated in the chat completion.
- messages* (object[]): A list of messages comprising the conversation so far.
  - (#1) (unknown): One of the following:
    - (#1) object:
      - content* (unknown): One of the following:
        - (#1) string
        - (#2) object[], each item one of the following:
          - (#1) object:
            - text* (string)
            - type* (enum): Possible values: text.
          - (#2) object:
            - image_url* (object):
              - url* (string)
            - type* (enum): Possible values: image_url.
    - (#2) object:
      - tool_calls* (object[]):
        - function* (object):
          - parameters* (unknown)
          - description (string)
          - name* (string)
        - id* (string)
        - type* (string)
  - (#2) object:
    - name (string)
    - role* (string)
- presence_penalty (number): Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
- response_format (unknown): One of the following (see the sketch after this parameter list):
  - (#1) object:
    - type* (enum): Possible values: text.
  - (#2) object:
    - type* (enum): Possible values: json_schema.
    - json_schema* (object):
      - name* (string): The name of the response format.
      - description (string): A description of what the response format is for, used by the model to determine how to respond in the format.
      - schema (object): The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
      - strict (boolean): Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field.
  - (#3) object:
    - type* (enum): Possible values: json_object.
- seed (integer)
- stop (string[]): Up to 4 sequences where the API will stop generating further tokens.
- stream (boolean)
- stream_options (object):
  - include_usage (boolean): If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array. All other chunks will also include a usage field, but with a null value.
- temperature (number): What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
- tool_choice (unknown): One of the following:
  - (#1) enum: Possible values: auto.
  - (#2) enum: Possible values: none.
  - (#3) enum: Possible values: required.
  - (#4) object:
    - function* (object):
      - name* (string)
- tool_prompt (string): A prompt to be appended before the tools.
- tools (object[]): A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for (see the sketch after this parameter list).
  - function* (object):
    - parameters* (unknown)
    - description (string)
    - name* (string)
  - type* (string)
- top_logprobs (integer): An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
- top_p (number): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
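The json_schema variant of response_format can be spelled out as a plain payload fragment. A hedged sketch with a made-up schema (capital_info and its properties are illustrative, not part of the API):

# Sketch: constrain the reply to a JSON object matching a hypothetical schema.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "capital_info",  # hypothetical name
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "country": {"type": "string"},
            },
            "required": ["city", "country"],
        },
        "strict": True,
    },
}

Depending on the client version, this dictionary can usually be passed as the response_format argument of the snippets above, or placed directly in the JSON body of a raw HTTP request.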
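Similarly, a hedged, self-contained sketch of the tools and tool_choice fields; the get_weather function, its parameters, and the chosen model are illustrative assumptions:

import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="featherless-ai",
    api_key=os.environ["HF_TOKEN"],
)

# Sketch: describe one callable function and let the model decide when to use it.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model decided to call the tool, the reply carries tool_calls instead of text.
print(completion.choices[0].message.tool_calls)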
Response
Output type depends on the stream input parameter. If stream is false (default), the response will be a JSON object with the following fields:
Body
- choices (object[]):
  - finish_reason (string)
  - index (integer)
  - logprobs (object):
    - content (object[]):
      - logprob (number)
      - token (string)
      - top_logprobs (object[]):
        - logprob (number)
        - token (string)
  - message (unknown): One of the following:
    - (#1) object:
      - content (string)
      - role (string)
      - tool_call_id (string)
    - (#2) object:
      - role (string)
      - tool_calls (object[]):
        - function (object):
          - arguments (string)
          - description (string)
          - name (string)
        - id (string)
        - type (string)
- created (integer)
- id (string)
- model (string)
- system_fingerprint (string)
- usage (object):
  - completion_tokens (integer)
  - prompt_tokens (integer)
  - total_tokens (integer)
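As a small illustration, these fields map onto the objects returned by the Python clients shown earlier roughly as follows (a sketch, not an exhaustive mapping):

# Sketch: inspecting the non-streaming response fields described above.
def summarize(completion) -> None:
    choice = completion.choices[0]
    print("finish_reason:", choice.finish_reason)  # e.g. "stop" or "length"
    print("reply:", choice.message.content)
    if completion.usage:
        print(
            "tokens:",
            completion.usage.prompt_tokens,
            "+",
            completion.usage.completion_tokens,
            "=",
            completion.usage.total_tokens,
        )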
If stream is true, generated tokens are returned as a stream, using Server-Sent Events (SSE). For more information about streaming, check out this guide.
Body
- choices (object[]):
  - delta (unknown): One of the following:
    - (#1) object:
      - content (string)
      - role (string)
      - tool_call_id (string)
    - (#2) object:
      - role (string)
      - tool_calls (object[]):
        - function (object):
          - arguments (string)
          - name (string)
        - id (string)
        - index (integer)
        - type (string)
  - finish_reason (string)
  - index (integer)
  - logprobs (object):
    - content (object[]):
      - logprob (number)
      - token (string)
      - top_logprobs (object[]):
        - logprob (number)
        - token (string)
- created (integer)
- id (string)
- model (string)
- system_fingerprint (string)
- usage (object):
  - completion_tokens (integer)
  - prompt_tokens (integer)
  - total_tokens (integer)
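Finally, a hedged sketch of consuming the stream while also recovering token usage via stream_options.include_usage, shown with the openai client since it exposes that option directly (the router base URL is the same assumption as earlier):

import os

from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",  # assumed OpenAI-compatible router
    api_key=os.environ["HF_TOKEN"],
)

# Sketch: request a trailing usage chunk in addition to the content deltas.
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in stream:
    if chunk.choices:
        delta = chunk.choices[0].delta
        if delta.content:
            print(delta.content, end="")
    if chunk.usage:
        # Final chunk: choices is empty and usage holds totals for the whole request.
        print("\ntotal tokens:", chunk.usage.total_tokens)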