# Using different models
[[open-in-colab]]
`smolagents` provides a flexible framework that allows you to use various language models from different providers.
This guide will show you how to use different model types with your agents.
## Available model types
`smolagents` supports several model types out of the box:
1. [`InferenceClientModel`]: Uses Hugging Face's Inference API to access models
2. [`TransformersModel`]: Runs models locally using the Transformers library
3. [`VLLMModel`]: Uses vLLM for fast inference with optimized serving
4. [`MLXModel`]: Optimized for Apple Silicon devices using MLX
5. [`LiteLLMModel`]: Provides access to hundreds of LLMs through LiteLLM
6. [`LiteLLMRouterModel`]: Distributes requests among multiple models
7. [`OpenAIServerModel`]: Provides access to any provider that implements an OpenAI-compatible API
8. [`AzureOpenAIServerModel`]: Uses Azure's OpenAI service
9. [`AmazonBedrockServerModel`]: Connects to the Amazon Bedrock API
## Using Google Gemini Models
As explained in the [Google Gemini API documentation](https://ai.google.dev/gemini-api/docs/openai),
Google provides an OpenAI-compatible API for Gemini models, allowing you to use the [`OpenAIServerModel`]
with Gemini models by setting the appropriate base URL.
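"OpenAI-compatible" means the endpoint accepts the standard chat-completions request format. As a rough illustration of the wire format (this is not smolagents code, just the JSON body that such an endpoint expects), a minimal request payload looks like:

```python
import json

# Minimal OpenAI-style chat-completions payload; an OpenAI-compatible
# endpoint such as Gemini's accepts this same structure.
payload = {
    "model": "gemini-2.0-flash",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}
# This is what gets POSTed to {api_base}/chat/completions
body = json.dumps(payload)
```

`OpenAIServerModel` builds and sends requests of this shape for you, which is why only the base URL and key need to change between providers.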
First, install the required dependencies:
```bash
pip install 'smolagents[openai]'
```
Then, [get a Gemini API key](https://ai.google.dev/gemini-api/docs/api-key) and set it in your code:
```python
GEMINI_API_KEY = "<YOUR-GEMINI-API-KEY>"
```
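Hardcoding keys in source files makes them easy to leak; a common alternative (assuming you have exported a `GEMINI_API_KEY` environment variable) is to read the key at runtime:

```python
import os

# Read the key from the environment instead of hardcoding it in the script
GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY", "")
```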
Now, you can initialize the Gemini model using the `OpenAIServerModel` class
and setting the `api_base` parameter to the Gemini API base URL:
```python
from smolagents import OpenAIServerModel
model = OpenAIServerModel(
    model_id="gemini-2.0-flash",
    # Google Gemini OpenAI-compatible API base URL
    api_base="https://generativelanguage.googleapis.com/v1beta/openai/",
    api_key=GEMINI_API_KEY,
)
```
## Using OpenRouter Models
OpenRouter provides access to a wide variety of language models through a unified OpenAI-compatible API.
You can use the [`OpenAIServerModel`] to connect to OpenRouter by setting the appropriate base URL.
First, install the required dependencies:
```bash
pip install 'smolagents[openai]'
```
Then, [get an OpenRouter API key](https://openrouter.ai/keys) and set it in your code:
```python
OPENROUTER_API_KEY = "<YOUR-OPENROUTER-API-KEY>"
```
Now, you can initialize any model available on OpenRouter using the `OpenAIServerModel` class:
```python
from smolagents import OpenAIServerModel
model = OpenAIServerModel(
    # You can use any model ID available on OpenRouter
    model_id="openai/gpt-4o",
    # OpenRouter API base URL
    api_base="https://openrouter.ai/api/v1",
    api_key=OPENROUTER_API_KEY,
)
```