Duibonduil committed
Commit 3330061 · verified · 1 Parent(s): 279da32

Upload 5 files

docs/source/en/_config.py ADDED
@@ -0,0 +1,14 @@
+ # docstyle-ignore
+ INSTALL_CONTENT = """
+ # Installation
+ ! pip install smolagents
+ # To install from source instead of the last release, comment the command above and uncomment the following one.
+ # ! pip install git+https://github.com/huggingface/smolagents.git
+ """
+
+ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
+ black_avoid_patterns = {
+     "{processor_class}": "FakeProcessorClass",
+     "{model_class}": "FakeModelClass",
+     "{object_class}": "FakeObjectClass",
+ }
docs/source/en/_toctree.yml ADDED
@@ -0,0 +1,46 @@
+ - title: Get started
+   sections:
+   - local: index
+     title: Introduction
+   - local: installation
+     title: Installation options
+   - local: guided_tour
+     title: Guided tour
+ - title: Tutorials
+   sections:
+   - local: tutorials/building_good_agents
+     title: ✨ Building good agents
+   - local: tutorials/inspect_runs
+     title: 📊 Inspect your agent runs using telemetry
+   - local: tutorials/tools
+     title: 🛠️ Tools - in-depth guide
+   - local: tutorials/secure_code_execution
+     title: 🛡️ Secure code execution
+   - local: tutorials/memory
+     title: 📚 Manage your agent's memory
+ - title: Conceptual guides
+   sections:
+   - local: conceptual_guides/intro_agents
+     title: 🤖 An introduction to agentic systems
+   - local: conceptual_guides/react
+     title: 🤔 How do Multi-step agents work?
+ - title: Examples
+   sections:
+   - local: examples/text_to_sql
+     title: Self-correcting Text-to-SQL
+   - local: examples/rag
+     title: Master your knowledge base with agentic RAG
+   - local: examples/multiagents
+     title: Orchestrate a multi-agent system
+   - local: examples/web_browser
+     title: Build a web browser agent using vision models
+   - local: examples/using_different_models
+     title: Using different models
+ - title: Reference
+   sections:
+   - local: reference/agents
+     title: Agent-related objects
+   - local: reference/models
+     title: Model-related objects
+   - local: reference/tools
+     title: Tool-related objects
docs/source/en/guided_tour.md ADDED
@@ -0,0 +1,598 @@
+ # Agents - Guided tour
+
+ [[open-in-colab]]
+
+ In this guided tour, you will learn how to build an agent, how to run it, and how to customize it to make it work better for your use case.
+
+ ## Choosing an agent type: CodeAgent or ToolCallingAgent
+
+ `smolagents` comes with two agent classes: [`CodeAgent`] and [`ToolCallingAgent`], which represent two different paradigms for how agents interact with tools.
+ The key difference lies in how actions are specified and executed: code generation vs. structured tool calling.
+
+ - [`CodeAgent`] generates tool calls as Python code snippets.
+   - The code is executed either locally (potentially insecure) or in a secure sandbox.
+   - Tools are exposed as Python functions (via bindings).
+   - Example of a tool call:
+     ```py
+     result = search_docs("What is the capital of France?")
+     print(result)
+     ```
+   - Strengths:
+     - Highly expressive: Allows for complex logic and control flow; can combine tools, loop, transform, and reason.
+     - Flexible: No need to predefine every possible action; can dynamically generate new actions/tools.
+     - Emergent reasoning: Ideal for multi-step problems or dynamic logic.
+   - Limitations:
+     - Risk of errors: Must handle syntax errors and exceptions.
+     - Less predictable: More prone to unexpected or unsafe outputs.
+     - Requires a secure execution environment.
+
+ - [`ToolCallingAgent`] writes tool calls as structured JSON.
+   - This is the common format used in many frameworks (e.g., the OpenAI API), allowing for structured tool interactions without code execution.
+   - Tools are defined with a JSON schema: name, description, parameter types, etc.
+   - Example of a tool call:
+     ```json
+     {
+       "tool_call": {
+         "name": "search_docs",
+         "arguments": {
+           "query": "What is the capital of France?"
+         }
+       }
+     }
+     ```
+   - Strengths:
+     - Reliable: Less prone to hallucination; outputs are structured and validated.
+     - Safe: Arguments are strictly validated; no risk of arbitrary code running.
+     - Interoperable: Easy to map to external APIs or services.
+   - Limitations:
+     - Low expressivity: Can't easily combine or transform results dynamically, or perform complex logic or control flow.
+     - Inflexible: Must define all possible actions in advance; limited to predefined tools.
+     - No code synthesis: Limited to tool capabilities.
+
+ When to use which agent type:
+ - Use [`CodeAgent`] when:
+   - You need reasoning, chaining, or dynamic composition.
+   - Tools are functions that can be combined (e.g., parsing + math + querying).
+   - Your agent is a problem solver or programmer.
+
+ - Use [`ToolCallingAgent`] when:
+   - You have simple, atomic tools (e.g., call an API, fetch a document).
+   - You want high reliability and clear validation.
+   - Your agent is like a dispatcher or controller.
+
+ ## CodeAgent
+
+ [`CodeAgent`] generates Python code snippets to perform actions and solve tasks.
+
+ By default, the Python code execution is done in your local environment.
+ This should be safe because the only functions that can be called are the tools you provided (especially if those are only tools from Hugging Face) and a set of predefined safe functions like `print` or functions from the `math` module, so you're already limited in what can be executed.
+
+ The Python interpreter also doesn't allow imports outside of a safe list by default, so the most obvious attacks shouldn't be an issue.
+ You can authorize additional imports by passing the authorized modules as a list of strings in the `additional_authorized_imports` argument upon initialization of your [`CodeAgent`]:
+
+ ```py
+ from smolagents import CodeAgent, InferenceClientModel
+
+ model = InferenceClientModel()
+ agent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])
+ agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
+ ```
+
+ Additionally, as an extra security layer, access to submodules is forbidden by default, unless explicitly authorized within the import list.
+ For instance, to access the `numpy.random` submodule, you need to add `'numpy.random'` to the `additional_authorized_imports` list.
+ This could also be authorized by using `numpy.*`, which allows `numpy` as well as any subpackage like `numpy.random` and its own subpackages.
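+
+ For example, a minimal sketch of the wildcard form (reusing the `model` defined above):
+
+ ```py
+ # 'numpy.*' authorizes numpy and every submodule, e.g. numpy.random
+ agent = CodeAgent(tools=[], model=model, additional_authorized_imports=["numpy.*"])
+ agent.run("Draw 5 samples from a standard normal distribution and print their mean.")
+ ```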
+
+ > [!WARNING]
+ > The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!
+
+ The execution will stop at any code trying to perform an illegal operation, or if there is a regular Python error in the code generated by the agent.
+
+ You can also use the [E2B code executor](https://e2b.dev/docs#what-is-e2-b) or Docker instead of a local Python interpreter. For E2B, first [set the `E2B_API_KEY` environment variable](https://e2b.dev/dashboard?tab=keys) and then pass `executor_type="e2b"` upon agent initialization. For Docker, pass `executor_type="docker"` during initialization.
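+
+ For instance, a minimal sketch of running an agent in an E2B sandbox (assuming the `E2B_API_KEY` environment variable is set):
+
+ ```py
+ from smolagents import CodeAgent, InferenceClientModel
+
+ # Code generated by the agent now runs remotely in an E2B sandbox
+ agent = CodeAgent(tools=[], model=InferenceClientModel(), executor_type="e2b")
+ agent.run("Could you give me the 118th number in the Fibonacci sequence?")
+ ```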
+
+
+ > [!TIP]
+ > Learn more about code execution [in this tutorial](tutorials/secure_code_execution).
+
+ ## ToolCallingAgent
+
+ [`ToolCallingAgent`] outputs JSON tool calls, which is the common format used in many frameworks (e.g., the OpenAI API), allowing for structured tool interactions without code execution.
+
+ It works in much the same way as [`CodeAgent`], of course without `additional_authorized_imports` since it doesn't execute code:
+
+ ```py
+ from smolagents import ToolCallingAgent, InferenceClientModel
+
+ model = InferenceClientModel()
+ agent = ToolCallingAgent(tools=[], model=model)
+ agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
+ ```
+
+ ## Building your agent
+
+ To initialize a minimal agent, you need at least these two arguments:
+
+ - `model`, a text-generation model to power your agent - because the agent is different from a simple LLM, it is a system that uses an LLM as its engine. You can use any of these options:
+   - [`TransformersModel`] takes a pre-initialized `transformers` pipeline to run inference on your local machine using `transformers`.
+   - [`InferenceClientModel`] leverages a `huggingface_hub.InferenceClient` under the hood and supports all Inference Providers on the Hub: Cerebras, Cohere, Fal, Fireworks, HF-Inference, Hyperbolic, Nebius, Novita, Replicate, SambaNova, Together, and more.
+   - [`LiteLLMModel`] similarly lets you call 100+ different models and providers through [LiteLLM](https://docs.litellm.ai/)!
+   - [`AzureOpenAIServerModel`] allows you to use OpenAI models deployed in [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service).
+   - [`AmazonBedrockServerModel`] allows you to use Amazon Bedrock in [AWS](https://aws.amazon.com/bedrock/?nc1=h_ls).
+   - [`MLXModel`] creates an [mlx-lm](https://pypi.org/project/mlx-lm/) pipeline to run inference on your local machine.
+
+ - `tools`, a list of `Tool` objects that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.
+
+ Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM you'd like, either through [Inference Providers](https://huggingface.co/blog/inference-providers), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), [LiteLLM](https://www.litellm.ai/), [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service), [Amazon Bedrock](https://aws.amazon.com/bedrock/?nc1=h_ls), or [mlx-lm](https://pypi.org/project/mlx-lm/).
+
+ <hfoptions id="Pick a LLM">
+ <hfoption id="Inference Providers">
+
+ Inference Providers need a `HF_TOKEN` to authenticate, but a free HF account already comes with included credits. Upgrade to PRO to raise your included credits.
+
+ To access gated models or raise your rate limits with a PRO account, you need to set the environment variable `HF_TOKEN` or pass the `token` argument upon initialization of `InferenceClientModel`. You can get your token from your [settings page](https://huggingface.co/settings/tokens).
+
+ ```python
+ from smolagents import CodeAgent, InferenceClientModel
+
+ model_id = "meta-llama/Llama-3.3-70B-Instruct"
+
+ model = InferenceClientModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>")  # You can choose not to pass any model_id to InferenceClientModel to use a default model
+ # You can also specify a particular provider, e.g. provider="together" or provider="sambanova"
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
+
+ agent.run(
+     "Could you give me the 118th number in the Fibonacci sequence?",
+ )
+ ```
+ </hfoption>
+ <hfoption id="Local Transformers Model">
+
+ ```python
+ # !pip install smolagents[transformers]
+ from smolagents import CodeAgent, TransformersModel
+
+ model_id = "meta-llama/Llama-3.2-3B-Instruct"
+
+ model = TransformersModel(model_id=model_id)
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
+
+ agent.run(
+     "Could you give me the 118th number in the Fibonacci sequence?",
+ )
+ ```
+ </hfoption>
+ <hfoption id="OpenAI or Anthropic API">
+
+ To use `LiteLLMModel`, you need to set the environment variable `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`, or pass the `api_key` argument upon initialization.
+
+ ```python
+ # !pip install smolagents[litellm]
+ from smolagents import CodeAgent, LiteLLMModel
+
+ model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", api_key="YOUR_ANTHROPIC_API_KEY")  # Could use 'gpt-4o' instead
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
+
+ agent.run(
+     "Could you give me the 118th number in the Fibonacci sequence?",
+ )
+ ```
+ </hfoption>
+ <hfoption id="Ollama">
+
+ ```python
+ # !pip install smolagents[litellm]
+ from smolagents import CodeAgent, LiteLLMModel
+
+ model = LiteLLMModel(
+     model_id="ollama_chat/llama3.2",  # This model is a bit weak for agentic behaviours though
+     api_base="http://localhost:11434",  # Replace with 127.0.0.1:11434 or a remote OpenAI-compatible server if necessary
+     api_key="YOUR_API_KEY",  # Replace with your API key if necessary
+     num_ctx=8192,  # Ollama's default is 2048, which will fail horribly. 8192 works for easy tasks; more is better. Check https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator to calculate how much VRAM this will need for the selected model.
+ )
+
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
+
+ agent.run(
+     "Could you give me the 118th number in the Fibonacci sequence?",
+ )
+ ```
+ </hfoption>
+ <hfoption id="Azure OpenAI">
+
+ To connect to Azure OpenAI, you can either use `AzureOpenAIServerModel` directly, or use `LiteLLMModel` and configure it accordingly.
+
+ To initialize an instance of `AzureOpenAIServerModel`, you need to pass your model deployment name and then either pass the `azure_endpoint`, `api_key`, and `api_version` arguments, or set the environment variables `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.
+
+ ```python
+ # !pip install smolagents[openai]
+ from smolagents import CodeAgent, AzureOpenAIServerModel
+
+ model = AzureOpenAIServerModel(model_id="gpt-4o-mini")
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
+
+ agent.run(
+     "Could you give me the 118th number in the Fibonacci sequence?",
+ )
+ ```
+
+ Similarly, you can configure `LiteLLMModel` to connect to Azure OpenAI as follows:
+
+ - pass your model deployment name as `model_id`, and make sure to prefix it with `azure/`
+ - make sure to set the environment variable `AZURE_API_VERSION`
+ - either pass the `api_base` and `api_key` arguments, or set the environment variables `AZURE_API_KEY` and `AZURE_API_BASE`
+
+ ```python
+ import os
+ from smolagents import CodeAgent, LiteLLMModel
+
+ AZURE_OPENAI_CHAT_DEPLOYMENT_NAME = "gpt-35-turbo-16k-deployment"  # example deployment name
+
+ os.environ["AZURE_API_KEY"] = ""  # api_key
+ os.environ["AZURE_API_BASE"] = ""  # "https://example-endpoint.openai.azure.com"
+ os.environ["AZURE_API_VERSION"] = ""  # "2024-10-01-preview"
+
+ model = LiteLLMModel(model_id="azure/" + AZURE_OPENAI_CHAT_DEPLOYMENT_NAME)
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
+
+ agent.run(
+     "Could you give me the 118th number in the Fibonacci sequence?",
+ )
+ ```
+
+ </hfoption>
+ <hfoption id="Amazon Bedrock">
+
+ The `AmazonBedrockServerModel` class provides native integration with Amazon Bedrock, allowing for direct API calls and comprehensive configuration.
+
+ Basic usage:
+
+ ```python
+ # !pip install smolagents[aws_sdk]
+ from smolagents import CodeAgent, AmazonBedrockServerModel
+
+ model = AmazonBedrockServerModel(model_id="anthropic.claude-3-sonnet-20240229-v1:0")
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
+
+ agent.run(
+     "Could you give me the 118th number in the Fibonacci sequence?",
+ )
+ ```
+
+ Advanced configuration:
+
+ ```python
+ import boto3
+ from smolagents import AmazonBedrockServerModel
+
+ # Create a custom Bedrock client
+ bedrock_client = boto3.client(
+     "bedrock-runtime",
+     region_name="us-east-1",
+     aws_access_key_id="YOUR_ACCESS_KEY",
+     aws_secret_access_key="YOUR_SECRET_KEY",
+ )
+
+ additional_api_config = {
+     "inferenceConfig": {
+         "maxTokens": 3000
+     },
+     "guardrailConfig": {
+         "guardrailIdentifier": "identify1",
+         "guardrailVersion": "v1"
+     },
+ }
+
+ # Initialize with comprehensive configuration
+ model = AmazonBedrockServerModel(
+     model_id="us.amazon.nova-pro-v1:0",
+     client=bedrock_client,  # Use the custom client
+     **additional_api_config
+ )
+
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
+
+ agent.run(
+     "Could you give me the 118th number in the Fibonacci sequence?",
+ )
+ ```
+
+ Using `LiteLLMModel`:
+
+ Alternatively, you can use `LiteLLMModel` with Bedrock models:
+
+ ```python
+ from smolagents import LiteLLMModel, CodeAgent
+
+ model = LiteLLMModel(model_id="bedrock/anthropic.claude-3-sonnet-20240229-v1:0")
+ agent = CodeAgent(tools=[], model=model)
+
+ agent.run("Explain the concept of quantum computing")
+ ```
+
+ </hfoption>
+ <hfoption id="mlx-lm">
+
+ ```python
+ # !pip install smolagents[mlx-lm]
+ from smolagents import CodeAgent, MLXModel
+
+ mlx_model = MLXModel("mlx-community/Qwen2.5-Coder-32B-Instruct-4bit")
+ agent = CodeAgent(model=mlx_model, tools=[], add_base_tools=True)
+
+ agent.run("Could you give me the 118th number in the Fibonacci sequence?")
+ ```
+
+ </hfoption>
+ </hfoptions>
+
+ ## Advanced agent configuration
+
+ ### Customizing agent termination conditions
+
+ By default, an agent continues running until it calls the `final_answer` function or reaches the maximum number of steps.
+ The `final_answer_checks` parameter gives you more control over when and how an agent terminates its execution:
+
+ ```python
+ from smolagents import CodeAgent, InferenceClientModel
+
+ # Define a custom final answer check function
+ def is_integer(final_answer: str, agent_memory=None) -> bool:
+     """Return True if final_answer is an integer."""
+     try:
+         int(final_answer)
+         return True
+     except ValueError:
+         return False
+
+ # Initialize the agent with a custom final answer check
+ agent = CodeAgent(
+     tools=[],
+     model=InferenceClientModel(),
+     final_answer_checks=[is_integer]
+ )
+
+ agent.run("Calculate the least common multiple of 3 and 7")
+ ```
+
+ The `final_answer_checks` parameter accepts a list of functions that each:
+ - Take the agent's final_answer string and the agent's memory as parameters
+ - Return a boolean indicating whether the final_answer is valid (True) or not (False)
+
+ If any function returns `False`, the agent will log the error message and continue the run.
+ This validation mechanism enables:
+ - Enforcing output format requirements (e.g., ensuring numeric answers for math problems)
+ - Implementing domain-specific validation rules
+ - Creating more robust agents that validate their own outputs
+
+ ## Inspecting an agent run
+
+ Here are a few useful attributes to inspect what happened after a run:
+ - `agent.logs` stores the fine-grained logs of the agent. At every step of the agent's run, everything gets stored in a dictionary that is then appended to `agent.logs`.
+ - Running `agent.write_memory_to_messages()` writes the agent's memory as a list of chat messages for the Model to view. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task in separate messages, then for each step it will store the LLM output as a message, and the tool call output as another message. Use this if you want a higher-level view of what has happened - but not every log will be transcribed by this method. See the sketch after this list.
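+
+ For instance, a minimal sketch of replaying a run from memory (the exact message structure may vary between versions):
+
+ ```py
+ from smolagents import CodeAgent, InferenceClientModel
+
+ agent = CodeAgent(tools=[], model=InferenceClientModel())
+ agent.run("What is 2 + 2?")
+
+ # Each entry is a chat-style message reconstructed from the agent's memory
+ for message in agent.write_memory_to_messages():
+     print(message)
+ ```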
+
+ ## Tools
+
+ A tool is an atomic function to be used by an agent. To be used by an LLM, it also needs a few attributes that constitute its API and will be used to describe to the LLM how to call this tool:
+ - A name
+ - A description
+ - Input types and descriptions
+ - An output type
+
+ For instance, you can check the [`PythonInterpreterTool`]: it has a name, a description, input descriptions, an output type, and a `forward` method to perform the action.
+
+ When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. This lets the agent know which tools it can use and why.
+
+ ### Default toolbox
+
+ If you install `smolagents` with the `toolkit` extra, it comes with a default toolbox for empowering agents, which you can add to your agent upon initialization with the argument `add_base_tools=True`:
+
+ - **DuckDuckGo web search**: performs a web search using DuckDuckGo.
+ - **Python code interpreter**: runs your LLM-generated Python code in a secure environment. This tool will only be added to [`ToolCallingAgent`] if you initialize it with `add_base_tools=True`, since a code-based agent can already natively execute Python code.
+ - **Transcriber**: a speech-to-text pipeline built on Whisper-Turbo that transcribes audio to text.
+
+ You can manually use a tool by calling it with its arguments:
+
+ ```python
+ # !pip install smolagents[toolkit]
+ from smolagents import WebSearchTool
+
+ search_tool = WebSearchTool()
+ print(search_tool("Who's the current president of Russia?"))
+ ```
+
+ ### Create a new tool
+
+ You can create your own tool for use cases not covered by the default tools from Hugging Face.
+ For example, let's create a tool that returns the most downloaded model for a given task from the Hub.
+
+ You'll start with the code below.
+
+ ```python
+ from huggingface_hub import list_models
+
+ task = "text-classification"
+
+ most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
+ print(most_downloaded_model.id)
+ ```
+
+ This code can quickly be converted into a tool, just by wrapping it in a function and adding the `tool` decorator.
+ This is not the only way to build the tool: you can directly define it as a subclass of [`Tool`], which gives you more flexibility, for instance the possibility to initialize heavy class attributes.
+
+ Let's see how it works for both options:
+
+ <hfoptions id="build-a-tool">
+ <hfoption id="Decorate a function with @tool">
+
+ ```py
+ from huggingface_hub import list_models
+ from smolagents import tool
+
+ @tool
+ def model_download_tool(task: str) -> str:
+     """
+     This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
+     It returns the name of the checkpoint.
+
+     Args:
+         task: The task for which to get the download count.
+     """
+     most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
+     return most_downloaded_model.id
+ ```
+
+ The function needs:
+ - A clear name. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.
+ - Type hints on both inputs and output.
+ - A description that includes an 'Args:' part where each argument is described (without a type indication this time, it will be pulled from the type hint). Same as for the tool name, this description is an instruction manual for the LLM powering your agent, so do not neglect it.
+
+ All these elements will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!
+
+ > [!TIP]
+ > This definition format is the same as tool schemas used in `apply_chat_template`, the only difference is the added `tool` decorator: read more on our tool use API [here](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template).
+ </hfoption>
+ <hfoption id="Subclass Tool">
+
+ ```py
+ from huggingface_hub import list_models
+ from smolagents import Tool
+
+ class ModelDownloadTool(Tool):
+     name = "model_download_tool"
+     description = "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint."
+     inputs = {"task": {"type": "string", "description": "The task for which to get the download count."}}
+     output_type = "string"
+
+     def forward(self, task: str) -> str:
+         most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
+         return most_downloaded_model.id
+ ```
+
+ The subclass needs the following attributes:
+ - A clear `name`. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.
+ - A `description`. Same as for the `name`, this description is an instruction manual for the LLM powering your agent, so do not neglect it.
+ - Input types and descriptions
+ - Output type
+
+ All these attributes will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!
+ </hfoption>
+ </hfoptions>
+
+
+ Then you can directly initialize your agent:
+ ```py
+ from smolagents import CodeAgent, InferenceClientModel
+ agent = CodeAgent(tools=[model_download_tool], model=InferenceClientModel())
+ agent.run(
+     "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
+ )
+ ```
+
+ You get the following logs:
+ ```text
+ ╭──────────────────────────────────────── New run ─────────────────────────────────────────╮
+ │                                                                                           │
+ │ Can you give me the name of the model that has the most downloads in the 'text-to-video'  │
+ │ task on the Hugging Face Hub?                                                             │
+ │                                                                                           │
+ ╰─ InferenceClientModel - Qwen/Qwen2.5-Coder-32B-Instruct ──────────────────────────────────╯
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
+ │   1 model_name = model_download_tool(task="text-to-video")                                │
+ │   2 print(model_name)                                                                     │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
+ Execution logs:
+ ByteDance/AnimateDiff-Lightning
+
+ Out: None
+ [Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
+ │   1 final_answer("ByteDance/AnimateDiff-Lightning")                                       │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
+ Out - Final answer: ByteDance/AnimateDiff-Lightning
+ [Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]
+ Out[20]: 'ByteDance/AnimateDiff-Lightning'
+ ```
+
+ > [!TIP]
+ > Read more on tools in the [dedicated tutorial](./tutorials/tools#what-is-a-tool-and-how-to-build-one).
+
+ ## Multi-agents
+
+ Multi-agent systems were introduced with Microsoft's framework [Autogen](https://huggingface.co/papers/2308.08155).
+
+ In this type of framework, you have several agents working together to solve your task instead of only one.
+ It empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows for efficient specialization. For instance, why fill the memory of the code-generating agent with all the content of webpages visited by the web search agent? It's better to keep them separate.
+
+ You can easily build hierarchical multi-agent systems with `smolagents`.
+
+ To do so, just ensure your agent has `name` and `description` attributes, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools.
+ Then you can pass this managed agent in the `managed_agents` parameter upon initialization of the manager agent.
+
+ Here's an example of making an agent that manages a specific web search agent using our native [`WebSearchTool`]:
+
+ ```py
+ from smolagents import CodeAgent, InferenceClientModel, WebSearchTool
+
+ model = InferenceClientModel()
+
+ web_agent = CodeAgent(
+     tools=[WebSearchTool()],
+     model=model,
+     name="web_search_agent",
+     description="Runs web searches for you. Give it your query as an argument."
+ )
+
+ manager_agent = CodeAgent(
+     tools=[], model=model, managed_agents=[web_agent]
+ )
+
+ manager_agent.run("Who is the CEO of Hugging Face?")
+ ```
+
+ > [!TIP]
+ > For an in-depth example of an efficient multi-agent implementation, see [how we pushed our multi-agent system to the top of the GAIA leaderboard](https://huggingface.co/blog/beating-gaia).
+
+
+ ## Talk with your agent and visualize its thoughts in a cool Gradio interface
+
+ You can use `GradioUI` to interactively submit tasks to your agent and observe its thought and execution process. Here is an example:
+
+ ```py
+ from smolagents import (
+     load_tool,
+     CodeAgent,
+     InferenceClientModel,
+     GradioUI
+ )
+
+ # Import tool from Hub
+ image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)
+
+ model = InferenceClientModel()
+
+ # Initialize the agent with the image generation tool
+ agent = CodeAgent(tools=[image_generation_tool], model=model)
+
+ GradioUI(agent).launch()
+ ```
+
+ Under the hood, when the user types a new message, the agent is launched with `agent.run(user_request, reset=False)`.
+ The `reset=False` flag means the agent's memory is not flushed before launching this new task, which lets the conversation go on.
+
+ You can also use this `reset=False` argument to keep the conversation going in any other agentic application.
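+
+ For instance, a minimal sketch of a multi-turn exchange (assuming `agent` is defined as above):
+
+ ```py
+ agent.run("What is the capital of France?")             # First turn: memory starts fresh
+ agent.run("And what is its population?", reset=False)   # Follow-up: keeps the previous context
+ ```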
+
+ In Gradio UIs, if you want to allow users to interrupt a running agent, you could do this with a button that triggers the `agent.interrupt()` method.
+ This will stop the agent at the end of its current step, then raise an error.
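+
+ A hypothetical sketch of such a stop button (assuming `agent` is the agent defined above; the exact UI wiring is up to you):
+
+ ```py
+ import gradio as gr
+
+ with gr.Blocks() as demo:
+     stop_button = gr.Button("Stop agent")
+     # Interrupts the agent at the end of its current step when clicked
+     stop_button.click(fn=lambda: agent.interrupt())
+
+ demo.launch()
+ ```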
+
+ ## Next steps
+
+ Finally, when you've configured your agent to your needs, you can share it to the Hub!
+
+ ```py
+ agent.push_to_hub("m-ric/my_agent")
+ ```
+
+ Similarly, to load an agent that has been pushed to the Hub, if you trust the code from its tools, use:
+ ```py
+ agent.from_hub("m-ric/my_agent", trust_remote_code=True)
+ ```
+
+ For more in-depth usage, you will then want to check out our tutorials:
+ - [the explanation of how our code agents work](./tutorials/secure_code_execution)
+ - [this guide on how to build good agents](./tutorials/building_good_agents)
+ - [the in-depth guide for tool usage](./tutorials/tools)
docs/source/en/index.md ADDED
@@ -0,0 +1,125 @@
+ # `smolagents`
+
+ <div class="flex justify-center">
+     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/license_to_call.png" style="max-width:700px"/>
+ </div>
+
+ ## What is smolagents?
+
+ `smolagents` is an open-source Python library designed to make it extremely easy to build and run agents using just a few lines of code.
+
+ Key features of `smolagents` include:
+
+ ✨ **Simplicity**: The logic for agents fits in ~1,000 lines of code. We kept abstractions to their minimal shape above raw code!
+
+ 🧑‍💻 **First-class support for Code Agents**: [`CodeAgent`](reference/agents#smolagents.CodeAgent) writes its actions in code (as opposed to "agents being used to write code") to invoke tools or perform computations, enabling natural composability (function nesting, loops, conditionals). To make it secure, we support [executing in a sandboxed environment](tutorials/secure_code_execution) via [E2B](https://e2b.dev/) or via Docker.
+
+ 📡 **Common Tool-Calling Agent Support**: In addition to CodeAgents, [`ToolCallingAgent`](reference/agents#smolagents.ToolCallingAgent) supports the usual JSON/text-based tool calling for scenarios where that paradigm is preferred.
+
+ 🤗 **Hub integrations**: Seamlessly share and load agents and tools to/from the Hub as Gradio Spaces.
+
+ 🌐 **Model-agnostic**: Easily integrate any large language model (LLM), whether it's hosted on the Hub via [Inference providers](https://huggingface.co/docs/inference-providers/index), accessed via APIs such as OpenAI, Anthropic, or many others via LiteLLM integration, or run locally using Transformers or Ollama. Powering an agent with your preferred LLM is straightforward and flexible.
+
+ 👁️ **Modality-agnostic**: Beyond text, agents can handle vision, video, and audio inputs, broadening the range of possible applications. Check out [this tutorial](examples/web_browser) for vision.
+
+ 🛠️ **Tool-agnostic**: You can use tools from any [MCP server](reference/tools#smolagents.ToolCollection.from_mcp) or from [LangChain](reference/tools#smolagents.Tool.from_langchain), and you can even use a [Hub Space](reference/tools#smolagents.Tool.from_space) as a tool.
+
+ 💻 **CLI tools**: Comes with command-line utilities (`smolagent`, `webagent`) for quickly running agents without writing boilerplate code; see the sketch after this list.
+
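+ For instance, a hypothetical invocation of the `smolagent` CLI (the flags shown here are assumptions; check `smolagent --help` for the actual options):
+
+ ```bash
+ smolagent "Give me the 118th number in the Fibonacci sequence" \
+     --model-type "InferenceClientModel" \
+     --model-id "Qwen/Qwen2.5-Coder-32B-Instruct"
+ ```
+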
+ ## Quickstart
+
+ [[open-in-colab]]
+
+ Get started with smolagents in just a few minutes! This guide will show you how to create and run your first agent.
+
+ ### Installation
+
+ Install smolagents with pip:
+
+ ```bash
+ pip install "smolagents[toolkit]"  # Includes default tools like web search
+ ```
+
+ ### Create Your First Agent
+
+ Here's a minimal example to create and run an agent:
+
+ ```python
+ from smolagents import CodeAgent, InferenceClientModel
+
+ # Initialize a model (using Hugging Face Inference API)
+ model = InferenceClientModel()  # Uses a default model
+
+ # Create an agent with no tools
+ agent = CodeAgent(tools=[], model=model)
+
+ # Run the agent with a task
+ result = agent.run("Calculate the sum of numbers from 1 to 10")
+ print(result)
+ ```
+
+ That's it! Your agent will use Python code to solve the task and return the result.
+
+ ### Adding Tools
+
+ Let's make our agent more capable by adding some tools:
+
+ ```python
+ from smolagents import CodeAgent, InferenceClientModel, DuckDuckGoSearchTool
+
+ model = InferenceClientModel()
+ agent = CodeAgent(
+     tools=[DuckDuckGoSearchTool()],
+     model=model,
+ )
+
+ # Now the agent can search the web!
+ result = agent.run("What is the current weather in Paris?")
+ print(result)
+ ```
+
+ ### Using Different Models
+
+ You can use various models with your agent:
+
+ ```python
+ # Using a specific model from Hugging Face
+ model = InferenceClientModel(model_id="meta-llama/Llama-2-70b-chat-hf")
+
+ # Using OpenAI/Anthropic (requires smolagents[litellm])
+ from smolagents import LiteLLMModel
+ model = LiteLLMModel(model_id="gpt-4")
+
+ # Using local models (requires smolagents[transformers])
+ from smolagents import TransformersModel
+ model = TransformersModel(model_id="meta-llama/Llama-2-7b-chat-hf")
+ ```
+
+ ## Next Steps
+
+ - Learn how to set up smolagents with various models and tools in the [Installation Guide](installation)
+ - Check out the [Guided Tour](guided_tour) for more advanced features
+ - Learn about [building custom tools](tutorials/tools)
+ - Explore [secure code execution](tutorials/secure_code_execution)
+ - See how to create [multi-agent systems](tutorials/building_good_agents)
+
+ <div class="mt-10">
+   <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
+     <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guided_tour"
+       ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Guided tour</div>
+       <p class="text-gray-700">Learn the basics and become familiar with using Agents. Start here if you are using Agents for the first time!</p>
+     </a>
+     <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./examples/text_to_sql"
+       ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
+       <p class="text-gray-700">Practical guides to help you achieve a specific goal: create an agent to generate and test SQL queries!</p>
+     </a>
+     <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/intro_agents"
+       ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
+       <p class="text-gray-700">High-level explanations for building a better understanding of important topics.</p>
+     </a>
+     <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/building_good_agents"
+       ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
+       <p class="text-gray-700">Horizontal tutorials that cover important aspects of building agents.</p>
+     </a>
+   </div>
+ </div>
docs/source/en/installation.md ADDED
@@ -0,0 +1,114 @@
+ # Installation Options
+
+ The `smolagents` library can be installed using pip. Here are the different installation methods and options available.
+
+ ## Prerequisites
+ - Python 3.10 or newer
+ - pip
+
+ ## Basic Installation
+
+ Install the `smolagents` core library with:
+ ```bash
+ pip install smolagents
+ ```
+
+ ## Installation with Extras
+
+ `smolagents` provides several optional dependencies (extras) that can be installed based on your needs.
+ You can install these extras using the following syntax:
+ ```bash
+ pip install "smolagents[extra1,extra2]"
+ ```
+
+ ### Tools
+ These extras include various tools and integrations:
+ - **toolkit**: Install a default set of tools for common tasks.
+   ```bash
+   pip install "smolagents[toolkit]"
+   ```
+ - **mcp**: Add support for the Model Context Protocol (MCP) to integrate with external tools and services.
+   ```bash
+   pip install "smolagents[mcp]"
+   ```
+
+ ### Model Integration
+ These extras enable integration with various AI models and frameworks:
+ - **openai**: Add support for OpenAI API models.
+   ```bash
+   pip install "smolagents[openai]"
+   ```
+ - **transformers**: Enable Hugging Face Transformers models.
+   ```bash
+   pip install "smolagents[transformers]"
+   ```
+ - **vllm**: Add vLLM support for efficient model inference.
+   ```bash
+   pip install "smolagents[vllm]"
+   ```
+ - **mlx-lm**: Enable support for MLX-LM models.
+   ```bash
+   pip install "smolagents[mlx-lm]"
+   ```
+ - **litellm**: Add LiteLLM support for lightweight model inference.
+   ```bash
+   pip install "smolagents[litellm]"
+   ```
+ - **bedrock**: Enable support for AWS Bedrock models.
+   ```bash
+   pip install "smolagents[bedrock]"
+   ```
+
+ ### Multimodal Capabilities
+ Extras for handling different types of media and input:
+ - **vision**: Add support for image processing and computer vision tasks.
+   ```bash
+   pip install "smolagents[vision]"
+   ```
+ - **audio**: Enable audio processing capabilities.
+   ```bash
+   pip install "smolagents[audio]"
+   ```
+
+ ### Remote Execution
+ Extras for executing code remotely:
+ - **docker**: Add support for executing code in Docker containers.
+   ```bash
+   pip install "smolagents[docker]"
+   ```
+ - **e2b**: Enable E2B support for remote execution.
+   ```bash
+   pip install "smolagents[e2b]"
+   ```
+
+ ### Telemetry and User Interface
+ Extras for telemetry, monitoring, and user interface components:
+ - **telemetry**: Add support for monitoring and tracing.
+   ```bash
+   pip install "smolagents[telemetry]"
+   ```
+ - **gradio**: Add support for interactive Gradio UI components.
+   ```bash
+   pip install "smolagents[gradio]"
+   ```
+
+ ### Complete Installation
+ To install all available extras, you can use:
+ ```bash
+ pip install "smolagents[all]"
+ ```
+
+ ## Verifying Installation
+ After installation, you can verify that `smolagents` is installed correctly by running:
+ ```python
+ import smolagents
+ print(smolagents.__version__)
+ ```
+
+ ## Next Steps
+ Once you have successfully installed `smolagents`, you can:
+ - Follow the [guided tour](./guided_tour) to learn the basics.
+ - Explore the [how-to guides](./examples/text_to_sql) for practical examples.
+ - Read the [conceptual guides](./conceptual_guides/intro_agents) for high-level explanations.
+ - Check out the [tutorials](./tutorials/building_good_agents) for in-depth tutorials on building agents.
+ - Explore the [API reference](./reference/index) for detailed information on classes and functions.