RobertoBarrosoLuque committed on
Commit 663d0bf · 1 Parent(s): b384e24

Delete unused notebooks

notebooks/1-Building-Blocks.ipynb DELETED
The diff for this file is too large to render. See raw diff
 
notebooks/2-Exercises.ipynb DELETED
@@ -1,470 +0,0 @@
- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "view-in-github",
- "colab_type": "text"
- },
- "source": [
- "<a href=\"https://colab.research.google.com/github/RobertoBarrosoLuque/scout-claims/blob/main/notebooks/2-Exercises.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "0",
- "metadata": {
- "id": "0"
- },
- "source": [
- "# Exercises: Putting the Building Blocks into Practice\n",
- "\n",
- "Welcome to the hands-on portion of the workshop! In these exercises, you will apply the concepts we've learned to solve a few practical problems.\n",
- "\n",
- "**Your goals will be to:**\n",
- "1. **Extend Function Calling**: Add a new tool for the LLM to use.\n",
- "2. **Modify Structured Output**: Change a Pydantic schema to extract additional structured information from an image.\n",
- "3. **Bonus! Use Grammar Mode**: Force the LLM to respond in a highly specific, token-efficient format.\n",
- "\n",
- "Look out for the lines marked \"TODO\" in each cell; those are where you will write your code. Let's get started!"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "e966e0b4",
- "metadata": {
- "id": "e966e0b4"
- },
- "outputs": [],
- "source": [
- "#\n",
- "# SETUP CELL #1: PLEASE RUN THIS BEFORE CONTINUING WITH THE EXERCISES.\n",
- "# RESTART THE RUNTIME AFTER RUNNING THIS CELL IF PROMPTED TO DO SO.\n",
- "#\n",
- "!pip install pydantic requests Pillow python-dotenv"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "eac6208b",
- "metadata": {
- "id": "eac6208b"
- },
- "outputs": [],
- "source": [
- "#\n",
- "# SETUP CELL #2: PLEASE RUN THIS BEFORE CONTINUING WITH THE EXERCISES\n",
- "#\n",
- "import os\n",
- "import io\n",
- "import base64\n",
- "from dotenv import load_dotenv\n",
- "import requests\n",
- "import json\n",
- "load_dotenv()\n",
- "\n",
- "MODEL_ID = \"accounts/fireworks/models/llama4-scout-instruct-basic\"\n",
- "\n",
- "# This pattern is for Google Colab.\n",
- "# If running locally, set the FIREWORKS_API_KEY environment variable.\n",
- "try:\n",
- " from google.colab import userdata\n",
- " FIREWORKS_API_KEY = userdata.get('FIREWORKS_API_KEY')\n",
- "except ImportError:\n",
- " FIREWORKS_API_KEY = os.getenv(\"FIREWORKS_API_KEY\")\n",
- "\n",
- "# Make sure to set your FIREWORKS_API_KEY\n",
- "if not FIREWORKS_API_KEY:\n",
- " print(\"⚠️ Warning: FIREWORKS_API_KEY not set. The following cells will not run without it.\")\n",
- "\n",
- "# Helper function to prepare images for VLMs.\n",
- "# It is defined here to be available for later exercises.\n",
- "def pil_to_base64_dict(pil_image):\n",
- " \"\"\"Convert PIL image to the format expected by VLMs\"\"\"\n",
- " if pil_image is None:\n",
- " return None\n",
- "\n",
- " buffered = io.BytesIO()\n",
- " if pil_image.mode != \"RGB\":\n",
- " pil_image = pil_image.convert(\"RGB\")\n",
- "\n",
- " pil_image.save(buffered, format=\"JPEG\")\n",
- " img_base64 = base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n",
- "\n",
- " return {\"image\": pil_image, \"path\": \"uploaded_image.jpg\", \"base64\": img_base64}\n",
- "\n",
- "# Helper function to make api calls with requests\n",
- "def make_api_call(payload, tools=None, model_id=None, base_url=None):\n",
- " \"\"\"Make API call with requests\"\"\"\n",
- " # Use defaults if not provided\n",
- " final_model_id = model_id or MODEL_ID\n",
- " final_base_url = base_url or \"https://api.fireworks.ai/inference/v1\"\n",
- "\n",
- " # Add model to payload\n",
- " payload[\"model\"] = final_model_id\n",
- "\n",
- " # Add tools if provided\n",
- " if tools:\n",
- " payload[\"tools\"] = tools\n",
- " payload[\"tool_choice\"] = \"auto\"\n",
- "\n",
- " headers = {\n",
- " \"Authorization\": f\"Bearer {FIREWORKS_API_KEY}\",\n",
- " \"Content-Type\": \"application/json\"\n",
- " }\n",
- "\n",
- " response = requests.post(\n",
- " f\"{final_base_url}/chat/completions\",\n",
- " headers=headers,\n",
- " json=payload\n",
- " )\n",
- "\n",
- " if response.status_code == 200:\n",
- " return response.json()\n",
- " else:\n",
- " raise Exception(f\"API Error: {response.status_code} - {response.text}\")\n",
- "\n",
- "print(\"✅ Setup complete. Helper function and API key are ready.\")"
- ]
- },
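For reference, a minimal sketch of how the `make_api_call` helper defined in the setup cell above can be exercised on its own, assuming the setup cell has been run and a valid FIREWORKS_API_KEY is available; the prompt text is illustrative.

# Minimal usage sketch for the make_api_call helper (assumes the setup cell above has run).
payload = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one sentence, what is function calling?"},  # illustrative prompt
    ]
}

# Uses MODEL_ID and the default Fireworks base URL from the setup cell.
response = make_api_call(payload=payload)
print(response["choices"][0]["message"]["content"])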
- {
- "cell_type": "markdown",
- "id": "09bc4200",
- "metadata": {
- "id": "09bc4200"
- },
- "source": [
- "## Exercise 1: Extending Function Calling\n",
- "\n",
- "[Function calling](https://docs.fireworks.ai/guides/function-calling) allows an LLM to use external tools. Your first task is to give the LLM a new tool.\n",
- "\n",
- "**Goal**: Define a new function called `count_letter` that counts the occurrences of a specific letter in a word. You will then define its schema and make it available to the LLM.\n",
- "\n",
- "**Your Steps:**\n",
- "1. Define the Python function `count_letter`.\n",
- "2. Add it to the `available_functions` dictionary.\n",
- "3. Define its schema and add it to the `tools` list.\n",
- "4. Write a prompt to test your new function."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "99c48d84",
- "metadata": {
- "id": "99c48d84"
- },
- "outputs": [],
- "source": [
- "###\n",
- "### EXERCISE 1: WRITE YOUR CODE IN THIS CELL\n",
- "###\n",
- "import json\n",
- "\n",
- "# --- Step 1: Define the Python function and the available functions mapping ---\n",
- "\n",
- "# Base function from the previous notebook\n",
- "def get_weather(location: str) -> str:\n",
- " \"\"\"Get current weather for a location\"\"\"\n",
- " weather_data = {\"New York\": \"Sunny, 72°F\", \"London\": \"Cloudy, 15°C\", \"Tokyo\": \"Rainy, 20°C\"}\n",
- " return weather_data.get(location, \"Weather data not available\")\n",
- "\n",
- "# ---TODO Block start---- #\n",
- "# Define a new function `count_letter` that takes a `word` and a `letter`\n",
- "# and returns the number of times the letter appears in the word.\n",
- "def count_letter(): # TODO: Add your function header here\n",
- " # TODO: Add your function body here\n",
- " pass\n",
- "# ---TODO Block end---- #\n",
- "\n",
- "available_functions = {\n",
- " \"get_weather\": get_weather,\n",
- " # TODO: Add your new function to this dictionary\n",
- "}\n",
- "\n",
- "\n",
- "# --- Step 2: Define the function schemas for the LLM ---\n",
- "\n",
- "# Base tool schema from the previous notebook\n",
- "tools = [\n",
- " {\n",
- " \"type\": \"function\",\n",
- " \"function\": {\n",
- " \"name\": \"get_weather\",\n",
- " \"description\": \"Get current weather for a location\",\n",
- " \"parameters\": {\n",
- " \"type\": \"object\",\n",
- " \"properties\": {\n",
- " \"location\": {\n",
- " \"type\": \"string\",\n",
- " \"description\": \"The city name\"\n",
- " }\n",
- " },\n",
- " \"required\": [\"location\"]\n",
- " }\n",
- " }\n",
- " },\n",
- " # TODO: Add the JSON schema for your `count_letter` function here.\n",
- " # It should have two parameters: \"word\" and \"letter\", both are required strings.\n",
- "]\n",
- "\n",
- "\n",
- "# --- Step 3: Build your input to the LLM ---\n",
- "\n",
- "# Initialize the messages list\n",
- "messages = [\n",
- " {\n",
- " \"role\": \"system\",\n",
- " \"content\": \"You are a helpful assistant. You have access to a couple of tools, use them when needed.\"\n",
- " },\n",
- " {\n",
- " \"role\": \"user\",\n",
- " \"content\": \"\" #TODO: Add your user prompt here\n",
- " }\n",
- "]\n",
- "\n",
- "# Create payload\n",
- "payload = {\n",
- " \"messages\": messages,\n",
- " \"tools\": tools,\n",
- " \"model\": \"accounts/fireworks/models/llama4-maverick-instruct-basic\"\n",
- "}\n",
- "\n",
- "# Get response from LLM\n",
- "response = make_api_call(payload=payload)\n",
- "\n",
- "# Check if the model wants to call a tool/function\n",
- "if response[\"choices\"][0][\"message\"][\"tool_calls\"]:\n",
- " tool_call = response[\"choices\"][0][\"message\"][\"tool_calls\"][0]\n",
- " function_name = tool_call[\"function\"][\"name\"]\n",
- " function_args = json.loads(tool_call[\"function\"][\"arguments\"])\n",
- "\n",
- " print(f\"LLM wants to call: {function_name}\")\n",
- " print(f\"With arguments: {function_args}\")\n",
- "\n",
- " # Execute the function\n",
- " function_response = available_functions[function_name](**function_args)\n",
- " print(f\"Function result: {function_response}\")\n",
- "\n",
- " # Add the assistant's tool call to the conversation\n",
- " messages.append({\n",
- " \"role\": \"assistant\",\n",
- " \"content\": \"\",\n",
- " \"tool_calls\": response[\"choices\"][0][\"message\"][\"tool_calls\"]\n",
- " })\n",
- "\n",
- " # Add the function result to the conversation\n",
- " messages.append({\n",
- " \"role\": \"tool\",\n",
- " \"content\": json.dumps(function_response) if isinstance(function_response, dict) else str(function_response)\n",
- " })\n",
- "\n",
- " # Create the final payload\n",
- " final_payload = {\n",
- " \"messages\": messages,\n",
- " \"tools\": tools,\n",
- " \"model\": \"accounts/fireworks/models/llama4-maverick-instruct-basic\"\n",
- " }\n",
- "\n",
- " # Get final response from LLM\n",
- " final_response = make_api_call(payload=final_payload)\n",
- "\n",
- " print(f'Final response: {final_response[\"choices\"][0][\"message\"][\"content\"]}')"
- ]
- },
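For reference, one way the TODOs in the Exercise 1 cell above might be completed — a hedged sketch, not an answer key. The `count_letter` implementation, the schema, and the test prompt below are assumptions that follow the TODO comments, reusing the `get_weather`, `available_functions`, and `tools` names from the cell above.

# Possible completion of the Exercise 1 TODOs (illustrative sketch).
def count_letter(word: str, letter: str) -> int:
    """Count how many times `letter` appears in `word` (case-insensitive)."""
    return word.lower().count(letter.lower())

available_functions = {
    "get_weather": get_weather,
    "count_letter": count_letter,
}

# Schema for the new tool, mirroring the structure of the get_weather schema above.
tools.append({
    "type": "function",
    "function": {
        "name": "count_letter",
        "description": "Count how many times a letter appears in a word",
        "parameters": {
            "type": "object",
            "properties": {
                "word": {"type": "string", "description": "The word to inspect"},
                "letter": {"type": "string", "description": "The single letter to count"},
            },
            "required": ["word", "letter"],
        },
    },
})

# Example user prompt that should trigger the new tool:
# "How many times does the letter 'r' appear in 'strawberry'?"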
- {
- "cell_type": "markdown",
- "id": "4d198002",
- "metadata": {
- "id": "4d198002"
- },
- "source": [
- "## Exercise 2: Modifying Structured Outputs (JSON Mode)\n",
- "\n",
- "Structured output is critical for building reliable applications. Here, you'll modify an existing schema to extract more information from an image.\n",
- "\n",
- "**Goal**: Update the `IncidentAnalysis` Pydantic model to also extract the `make` and `model` of the vehicle in the image.\n",
- "\n",
- "**Your Steps:**\n",
- "1. Add the `make` and `model` fields to the `IncidentAnalysis` Pydantic class.\n",
- "2. Run the VLM call using [JSON mode](https://docs.fireworks.ai/structured-responses/structured-response-formatting) to see the new structured output."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "1dc5d727",
- "metadata": {
- "id": "1dc5d727"
- },
- "outputs": [],
- "source": [
- "###\n",
- "### EXERCISE 2: WRITE YOUR CODE IN THIS CELL\n",
- "###\n",
- "import requests\n",
- "import io\n",
- "from PIL import Image\n",
- "from pydantic import BaseModel, Field\n",
- "from typing import Literal\n",
- "\n",
- "# --- Step 1: Download a sample image ---\n",
- "url = \"https://raw.githubusercontent.com/RobertoBarrosoLuque/scout-claims/main/images/back_rhs_damage.png\"\n",
- "response = requests.get(url)\n",
- "image = Image.open(io.BytesIO(response.content))\n",
- "print(\"Image downloaded.\")\n",
- "\n",
- "\n",
- "# --- Step 2: Define the output schema ---\n",
- "# ---TODO Block start---- #\n",
- "# Add two new string fields to this Pydantic model:\n",
- "# - `make`: To store the make of the car (e.g., \"Ford\")\n",
- "# - `model`: To store the model of the car (e.g., \"Mustang\")\n",
- "class IncidentAnalysis(BaseModel):\n",
- " description: str = Field(description=\"A description of the damage to the vehicle.\")\n",
- " location: Literal[\"front-left\", \"front-right\", \"back-left\", \"back-right\", \"front\", \"side\"]\n",
- " severity: Literal[\"minor\", \"moderate\", \"major\"]\n",
- " license_plate: str | None = Field(description=\"The license plate of the vehicle, if visible.\")\n",
- "# ---TODO Block end---- #\n",
- "\n",
- "# --- Step 3: Call the VLM with the new schema ---\n",
- "# The 'pil_to_base64_dict' function was defined in the setup cell\n",
- "image_for_llm = pil_to_base64_dict(image)\n",
- "\n",
- "# Create payload\n",
- "prompt = \"Describe the car damage in this image and extract all useful information.\" # TODO: modify the prompt to include the new fields\n",
- "messages=[\n",
- " {\n",
- " \"role\": \"user\",\n",
- " \"content\": [\n",
- " {\"type\": \"image_url\", \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_for_llm['base64']}\"}},\n",
- " {\"type\": \"text\", \"text\": prompt},\n",
- " ],\n",
- " }\n",
- "]\n",
- "response_format={\n",
- " \"type\": \"json_object\",\n",
- " \"schema\": IncidentAnalysis.model_json_schema(),\n",
- "}\n",
- "\n",
- "payload = {\n",
- " \"messages\": messages,\n",
- " \"response_format\": response_format,\n",
- " \"model\": \"accounts/fireworks/models/llama4-maverick-instruct-basic\"\n",
- "}\n",
- "\n",
- "# Get response from LLM\n",
- "response = make_api_call(payload=payload)\n",
- "\n",
- "\n",
- "result = json.loads(response[\"choices\"][0][\"message\"][\"content\"])\n",
- "print(json.dumps(result, indent=2))"
- ]
- },
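For reference, a sketch of the extended Pydantic model that the Exercise 2 TODO asks for, with `make` and `model` added as plain string fields; the field descriptions and the adjusted prompt are illustrative assumptions.

# Possible completion of the Exercise 2 TODO (illustrative sketch).
from pydantic import BaseModel, Field
from typing import Literal

class IncidentAnalysis(BaseModel):
    description: str = Field(description="A description of the damage to the vehicle.")
    location: Literal["front-left", "front-right", "back-left", "back-right", "front", "side"]
    severity: Literal["minor", "moderate", "major"]
    license_plate: str | None = Field(description="The license plate of the vehicle, if visible.")
    make: str = Field(description="The make of the vehicle, e.g. 'Ford'.")       # new field
    model: str = Field(description="The model of the vehicle, e.g. 'Mustang'.")  # new field

# An adjusted prompt that mentions the new fields (illustrative):
prompt = (
    "Describe the car damage in this image and extract the damage location, "
    "severity, license plate, and the make and model of the vehicle."
)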
- {
- "cell_type": "markdown",
- "id": "8e5a2e3d",
- "metadata": {
- "id": "8e5a2e3d"
- },
- "source": [
- "## Bonus Exercise: Constrained Output with Grammar Mode\n",
- "\n",
- "Sometimes you need the model to respond in a very specific, non-JSON format. This is where [Grammar Mode](https://docs.fireworks.ai/structured-responses/structured-output-grammar-based) excels. It forces the model's output to conform to a strict pattern you define, which can also save output tokens vs. JSON mode and offer even more granular control.\n",
- "\n",
- "**Goal**: Use grammar mode to force the model to output *only* the make and model of the car as a single lowercase string (e.g., \"ford mustang\").\n",
- "\n",
- "**Your Steps:**\n",
- "1. Define a GBNF grammar string.\n",
- "2. Call the model using `response_format={\"type\": \"grammar\", \"grammar\": ...}`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "1ea8cec3",
- "metadata": {
- "id": "1ea8cec3"
- },
- "outputs": [],
- "source": [
- "###\n",
- "### BONUS EXERCISE: WRITE YOUR CODE IN THIS CELL\n",
- "###\n",
- "\n",
- "# The 'image' variable and 'pil_to_base64_dict' helper function from previous\n",
- "# cells are used here. Make sure those cells have been run.\n",
- "# This assumes the image from Exercise 2 is still loaded.\n",
- "image_for_llm = pil_to_base64_dict(image)\n",
- "\n",
- "\n",
- "# --- Step 1: Define the GBNF grammar ---\n",
- "# Define a grammar that forces the output to be:\n",
- "# 1. A 'make' (one or more lowercase letters).\n",
- "# 2. Followed by a single space.\n",
- "# 3. Followed by a 'model' (one or more lowercase letters).\n",
- "car_grammar = r'''\n",
- "# TODO: define a grammar that forces the output to satisfy the format specified above (example output: \"ford mustang\")\n",
- "'''\n",
- "\n",
- "# --- Step 2: Define the prompt ---\n",
- "# Update the prompt to ask the model to identify the make and model and to respond only in the format specified above\n",
- "prompt = \"\" # TODO: write your prompt here\n",
- "\n",
- "\n",
- "# --- Step 3: Call the VLM with grammar mode ---\n",
- "messages=[\n",
- " {\n",
- " \"role\": \"user\",\n",
- " \"content\": [\n",
- " {\"type\": \"image_url\", \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_for_llm['base64']}\"}},\n",
- " {\"type\": \"text\", \"text\": prompt},\n",
- " ],\n",
- " }\n",
- "]\n",
- "response_format={\n",
- " # TODO: define the response format to use the grammar defined above\n",
- "}\n",
- "\n",
- "# Define payload\n",
- "payload = {\n",
- " \"messages\": messages,\n",
- " \"response_format\": response_format,\n",
- " \"model\": \"accounts/fireworks/models/llama4-maverick-instruct-basic\"\n",
- "}\n",
- "\n",
- "# Get response from LLM\n",
- "response = make_api_call(payload=payload)\n",
- "\n",
- "print(f'Constrained output from model: {response[\"choices\"][0][\"message\"][\"content\"]}')"
- ]
- }
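For reference, a sketch of how the bonus-exercise TODOs might be filled in: a GBNF grammar matching one or more lowercase letters, a single space, then one or more lowercase letters, plus the grammar-mode response_format described in the exercise text; the prompt wording is illustrative.

# Possible completion of the bonus-exercise TODOs (illustrative sketch).
# GBNF grammar: lowercase make, a single space, lowercase model (e.g. "ford mustang").
car_grammar = r'''
root  ::= make " " model
make  ::= [a-z]+
model ::= [a-z]+
'''

# Illustrative prompt asking for only the constrained output.
prompt = (
    "Identify the make and model of the car in this image. Respond with only the "
    "lowercase make and model separated by a single space, e.g. 'ford mustang'."
)

# Grammar-mode response format, as described in the exercise instructions.
response_format = {
    "type": "grammar",
    "grammar": car_grammar,
}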
- ],
- "metadata": {
- "colab": {
- "provenance": [],
- "include_colab_link": true
- },
- "kernelspec": {
- "display_name": ".venv",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 2
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython2",
- "version": "3.11.13"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
- }
 
notebooks/3-Fine-Tuning.ipynb DELETED
@@ -1,49 +0,0 @@
- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "view-in-github",
- "colab_type": "text"
- },
- "source": [
- "<a href=\"https://colab.research.google.com/github/RobertoBarrosoLuque/scout-claims/blob/main/notebooks/3-Fine-Tuning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "0",
- "metadata": {
- "id": "0"
- },
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 2
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython2",
- "version": "2.7.6"
- },
- "colab": {
- "provenance": [],
- "include_colab_link": true
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
- }