{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "OBCrx5fSG4Qm" }, "source": [ "# CS-UY 4613: Project\n", "\n", "Yufei Zhen\n", "\n", "macOS: Ventura 13.3.1 (a), GPU: Apple M2 Max" ] }, { "cell_type": "markdown", "metadata": { "id": "IptBGhoVG790" }, "source": [ "## Setup\n", "\n", "* video source: [https://www.youtube.com/@pantelism](https://www.youtube.com/@pantelism)\n", "\n", "* **option 1** (repository source: [https://github.com/PacktPublishing/LLM-Engineers-Handbook](https://github.com/PacktPublishing/LLM-Engineers-Handbook))\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "8i3CcnpG_VPn", "outputId": "597a492a-6305-43a6-e94e-b74fa8a12d7b" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cloning into 'LLM-Engineers-Handbook'...\n", "remote: Enumerating objects: 1970, done.\u001b[K\n", "remote: Counting objects: 100% (515/515), done.\u001b[K\n", "remote: Compressing objects: 100% (138/138), done.\u001b[K\n", "remote: Total 1970 (delta 414), reused 377 (delta 377), pack-reused 1455 (from 2)\u001b[K\n", "Receiving objects: 100% (1970/1970), 4.77 MiB | 21.22 MiB/s, done.\n", "Resolving deltas: 100% (1263/1263), done.\n" ] } ], "source": [ "# !git clone https://github.com/PacktPublishing/LLM-Engineers-Handbook.git" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !poetry env use 3.11\n", "# !poetry install --without aws\n", "# !poetry run pre-commit install" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "MPS available: True\n", "CUDA available: False\n" ] } ], "source": [ "import torch\n", "print(f\"MPS available: {torch.backends.mps.is_available()}\")\n", "print(f\"CUDA available: {torch.cuda.is_available()}\")" ] }, { "cell_type": "markdown", "metadata": { "id": "ufyNDhgOYiUh" }, "source": [ "## RAG Architecture\n", "\n", "- Integrating into [https://github.com/PacktPublishing/LLM-Engineers-Handbook/tree/main/llm_engineering/application/rag](https://github.com/PacktPublishing/LLM-Engineers-Handbook/tree/main/llm_engineering/application/rag):\n", "\n", "- Directory overview: \n", "\n", "```\n", ".\n", "├── ... 
\n", "├── clips/ # Generated video clip responses\n", "├── llm_engineering/ # Core project package\n", "│ ├── application/\n", "│ │ ├── ...\n", "│ │ ├── rag # Main RAG architecture\n", "│ │ │ ├── __init__.py\n", "│ │ │ ├── base.py\n", "│ │ │ ├── multimodel_dispatcher.py (new)\n", "│ │ │ ├── pipeline.py (new)\n", "│ │ │ ├── prompt_templates.py\n", "│ │ │ ├── query_expansion.py\n", "│ │ │ ├── reranking.py\n", "│ │ │ ├── retriever.py (modified)\n", "│ │ │ ├── self_query.py\n", "│ │ │ ├── topic_retriever.py (new)\n", "│ │ │ ├── video_ingetser.py (new)\n", "│ │ │ ├── video_processor.py (new)\n", "│ ├── domain/\n", "│ │ ├── ...\n", "│ │ ├── queries.py (modified)\n", "│ │ ├── video_chunks.py (new)\n", "├── demonstration.ipynb (YOU'RE HERE)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Video Ingestion" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "video_db = \"/Users/yufeizhen/Desktop/project/videos\"" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32m2025-05-04 03:25:21.777\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mllm_engineering.settings\u001b[0m:\u001b[36mload_settings\u001b[0m:\u001b[36m94\u001b[0m - \u001b[1mLoading settings from the ZenML secret store.\u001b[0m\n", "\u001b[32m2025-05-04 03:25:21.929\u001b[0m | \u001b[33m\u001b[1mWARNING \u001b[0m | \u001b[36mllm_engineering.settings\u001b[0m:\u001b[36mload_settings\u001b[0m:\u001b[36m99\u001b[0m - \u001b[33m\u001b[1mFailed to load settings from the ZenML secret store. Defaulting to loading the settings from the '.env' file.\u001b[0m\n", "\u001b[32m2025-05-04 03:25:22.015\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mllm_engineering.infrastructure.db.mongo\u001b[0m:\u001b[36m__new__\u001b[0m:\u001b[36m20\u001b[0m - \u001b[1mConnection to MongoDB with URI successful: mongodb://llm_engineering:llm_engineering@127.0.0.1:27017\u001b[0m\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1;35mPyTorch version 2.2.2 available.\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32m2025-05-04 03:25:23.410\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mllm_engineering.infrastructure.db.qdrant\u001b[0m:\u001b[36m__new__\u001b[0m:\u001b[36m29\u001b[0m - \u001b[1mConnection to Qdrant DB with URI successful: str\u001b[0m\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1;35mLoad pretrained SentenceTransformer: all-MiniLM-L6-v2\u001b[0m\n", "Initializing fallback TextEmbedder\n", "\u001b[1;35mLoad pretrained SentenceTransformer: all-MiniLM-L6-v2\u001b[0m\n", "Loading CLIP model: openai/clip-vit-base-patch32\n", "CLIP model loaded successfully\n", "Initialized embedders\n", "Loaded NLP model\n", "Loaded BERTopic\n", "Processing videos from: /Users/yufeizhen/Desktop/project/videos\n", "Already processed 8 videos\n", "Previously processed videos:\n", " - 9CGGh6ivg68\n", " - FCQ-rih6cHY\n", " - TV-DjM8242s\n", " - WXoOohWU28Y\n", " - eFgkZKhNUdM\n", " - eQ6UE968Xe4\n", " - lb_5AdUpfuA\n", " - rCVlIVKqqGE\n", "Found 8 video folders\n", "Will process 0 videos (8 skipped)\n", "Skipping TV-DjM8242s (already processed)\n", "Skipping eFgkZKhNUdM (already processed)\n", "Skipping eQ6UE968Xe4 (already processed)\n", "Skipping rCVlIVKqqGE (already processed)\n", "Skipping lb_5AdUpfuA (already processed)\n", "Skipping FCQ-rih6cHY (already processed)\n", "Skipping 9CGGh6ivg68 (already processed)\n", "Skipping WXoOohWU28Y 
"Skipping WXoOohWU28Y (already processed)\n", "\n", "All videos processed!\n", "Total processed videos: 8\n" ] } ], "source": [ "from llm_engineering.application.rag.video_ingester import VideoIngester\n", "\n", "ingester = VideoIngester(video_root=video_db)\n", "# ingester.process_video_library(force_reprocess=True)\n", "ingester.process_video_library()" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Total stored vectors: 403\n" ] } ], "source": [ "from qdrant_client import QdrantClient\n", "\n", "client = QdrantClient(path=\"/Users/yufeizhen/Desktop/project/qdrant_storage\")\n", "print(\"Total stored vectors:\", client.count(\"video_chunks\").count)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Video Q&A" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Initializing VideoQAEngine\n", "Video root: /Users/yufeizhen/Desktop/project/videos\n", "Qdrant storage path: /Users/yufeizhen/Desktop/project/qdrant_storage\n", "Connected to Qdrant storage at: /Users/yufeizhen/Desktop/project/qdrant_storage\n", "Available collections: collections=[CollectionDescription(name='video_chunks')]\n", "Found video_chunks collection with 403 points\n", "Initializing fallback TextEmbedder\n", "\u001b[1;35mLoad pretrained SentenceTransformer: all-MiniLM-L6-v2\u001b[0m\n", "Loading CLIP model: openai/clip-vit-base-patch32\n", "CLIP model loaded successfully\n", "VideoQAEngine initialized successfully\n" ] } ], "source": [ "from llm_engineering.application.rag.pipeline import VideoQAEngine\n", "\n", "engine = VideoQAEngine(video_root=video_db)\n", "\n", "def respond(question):\n", "    clips = engine.ask(question)\n", "    return [(str(clip[\"path\"]), f\"Relevance: {clip['score']:.2f}\") for clip in clips]" ] },
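{ "cell_type": "markdown", "metadata": {}, "source": [ "`engine.ask()` returns the top-scoring segments and writes each one out as a short clip under `clips/` (see the `Creating clip to:` lines below); the clip writing itself happens inside the pipeline. Purely as an illustration, and assuming an `ffmpeg` binary on the PATH, one retrieved segment could be cut like this:\n", "\n", "```python\n", "import subprocess\n", "from pathlib import Path\n", "\n", "def cut_clip(src: Path, start: float, end: float, out_dir: Path = Path(\"clips\")) -> Path:\n", "    \"\"\"Copy the [start, end] window of src into its own mp4 (stream copy, no re-encode).\"\"\"\n", "    out_dir.mkdir(exist_ok=True)\n", "    out_path = out_dir / f\"clip_{src.stem}_{int(start)}.mp4\"\n", "    subprocess.run(\n", "        [\"ffmpeg\", \"-y\", \"-ss\", str(start), \"-i\", str(src), \"-t\", str(end - start), \"-c\", \"copy\", str(out_path)],\n", "        check=True, capture_output=True,\n", "    )\n", "    return out_path\n", "\n", "# Example values taken from the retrieval output below.\n", "# cut_clip(Path(video_db) / \"eFgkZKhNUdM\" / \"eFgkZKhNUdM.mp4\", 1270.0, 1302.0)\n", "```\n", "\n", "Stream copy (`-c copy`) is fast but cuts on keyframes, so clip boundaries can be off by a second or so; re-encoding would be frame-accurate at the cost of speed." ] },
{ "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "question = \"Using only the videos, explain the the binary cross entropy loss function.\"" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "--- Processing query: 'Using only the videos, explain the the binary cross entropy loss function.' 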
---\n", "Retrieving relevant video segments...\n", "Encoding query with CLIP: 'Using only the videos, explain the the binary cros...'\n", "Cleaned text for CLIP: Using only the videos, explain the the binary cros...\n", "Query embedded successfully\n", "Sending search request to Qdrant (attempt 1/5)\n", "Creating fresh connection to Qdrant...\n", "Search successful, found 3 results\n", "Retrieval completed in 0.07 seconds\n", "Found 3 relevant video segments\n", "\n", "Processing result 1/3:\n", " Video ID: eFgkZKhNUdM\n", " Timestamps: 1270.0s - 1302.0s\n", " Score: 0.8472\n", " Found alternative video path: /Users/yufeizhen/Desktop/project/videos/eFgkZKhNUdM/eFgkZKhNUdM.mp4\n", " Creating clip to: clips/clip_eFgkZKhNUdM_1270_0.847.mp4\n", " Clip created successfully\n", "\n", "Processing result 2/3:\n", " Video ID: eFgkZKhNUdM\n", " Timestamps: 642.0s - 647.0s\n", " Score: 0.8467\n", " Found alternative video path: /Users/yufeizhen/Desktop/project/videos/eFgkZKhNUdM/eFgkZKhNUdM.mp4\n", " Creating clip to: clips/clip_eFgkZKhNUdM_642_0.847.mp4\n", " Clip created successfully\n", "\n", "Processing result 3/3:\n", " Video ID: eFgkZKhNUdM\n", " Timestamps: 874.0s - 882.0s\n", " Score: 0.8379\n", " Found alternative video path: /Users/yufeizhen/Desktop/project/videos/eFgkZKhNUdM/eFgkZKhNUdM.mp4\n", " Creating clip to: clips/clip_eFgkZKhNUdM_874_0.838.mp4\n", " Clip created successfully\n", "\n", "Processed 3 clips successfully\n" ] }, { "data": { "text/plain": [ "[('clips/clip_eFgkZKhNUdM_1270_0.847.mp4', 'Relevance: 0.85'),\n", " ('clips/clip_eFgkZKhNUdM_642_0.847.mp4', 'Relevance: 0.85'),\n", " ('clips/clip_eFgkZKhNUdM_874_0.838.mp4', 'Relevance: 0.84')]" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "respond(question)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Gradio App" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1;35mHTTP Request: GET \u001b[0m\u001b[34mhttps://api.gradio.app/pkg-version\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n" ] } ], "source": [ "import gradio as gr\n", "\n", "interface = gr.Interface(\n", " fn=respond,\n", " inputs=gr.Textbox(label=\"Ask about the video content\"),\n", " outputs=gr.Gallery(label=\"Relevant Video Clips\"),\n", " examples=[\n", " [\"Using only the videos, explain how ResNets work.\"],\n", " [\"Using only the videos, explain the advantages of CNNs over fully connected networks.\"],\n", " [\"Using only the videos, explain the the binary cross entropy loss function.\"]\n", " ]\n", ")" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* Running on local URL: http://127.0.0.1:7860\n", "\u001b[1;35mHTTP Request: GET \u001b[0m\u001b[34mhttp://127.0.0.1:7860/gradio_api/startup-events\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n", "\u001b[1;35mHTTP Request: HEAD \u001b[0m\u001b[34mhttp://127.0.0.1:7860/\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n", "\u001b[1;35mHTTP Request: GET \u001b[0m\u001b[34mhttps://api.gradio.app/v3/tunnel-request\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n", "* Running on public URL: https://382d4d0bacff86ee02.gradio.live\n", "\n", "This share link expires in 1 week. 
For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)\n", "\u001b[1;35mHTTP Request: HEAD \u001b[0m\u001b[34mhttps://382d4d0bacff86ee02.gradio.live\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "--- Processing query: 'Using only the videos, explain the the binary cross entropy loss function.' ---\n", "Retrieving relevant video segments...\n", "Encoding query with CLIP: 'Using only the videos, explain the the binary cros...'\n", "Cleaned text for CLIP: Using only the videos, explain the the binary cross entropy loss function....\n", "Cleaned text for CLIP: Using only the videos, explain the the binary cros...\n", "Query embedded successfully\n", "Sending search request to Qdrant (attempt 1/5)\n", "Search successful, found 3 results\n", "Retrieval completed in 0.34 seconds\n", "Found 3 relevant video segments\n", "\n", "Processing result 1/3:\n", " Video ID: eFgkZKhNUdM\n", " Timestamps: 1270.0s - 1302.0s\n", " Score: 0.8472\n", " Found alternative video path: /Users/yufeizhen/Desktop/project/videos/eFgkZKhNUdM/eFgkZKhNUdM.mp4\n", " Creating clip to: clips/clip_eFgkZKhNUdM_1270_0.847.mp4\n", " Clip created successfully\n", "\n", "Processing result 2/3:\n", " Video ID: eFgkZKhNUdM\n", " Timestamps: 642.0s - 647.0s\n", " Score: 0.8467\n", " Found alternative video path: /Users/yufeizhen/Desktop/project/videos/eFgkZKhNUdM/eFgkZKhNUdM.mp4\n", " Creating clip to: clips/clip_eFgkZKhNUdM_642_0.847.mp4\n", " Clip created successfully\n", "\n", "Processing result 3/3:\n", " Video ID: eFgkZKhNUdM\n", " Timestamps: 874.0s - 882.0s\n", " Score: 0.8379\n", " Found alternative video path: /Users/yufeizhen/Desktop/project/videos/eFgkZKhNUdM/eFgkZKhNUdM.mp4\n", " Creating clip to: clips/clip_eFgkZKhNUdM_874_0.838.mp4\n", " Clip created successfully\n", "\n", "Processed 3 clips successfully\n" ] } ], "source": [ "interface.launch(share=True)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Initializing VideoQAEngine\n", "Video root: /Users/yufeizhen/Desktop/project/videos\n", "Qdrant storage path: /Users/yufeizhen/Desktop/project/qdrant_storage\n", "Connected to Qdrant storage at: /Users/yufeizhen/Desktop/project/qdrant_storage\n", "Available collections: collections=[CollectionDescription(name='video_chunks')]\n", "Found video_chunks collection with 403 points\n", "Initializing fallback TextEmbedder\n", "\u001b[1;35mLoad pretrained SentenceTransformer: all-MiniLM-L6-v2\u001b[0m\n", "Loading CLIP model: openai/clip-vit-base-patch32\n", "CLIP model loaded successfully\n", "VideoQAEngine initialized successfully\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1;35mHTTP Request: GET \u001b[0m\u001b[34mhttps://api.gradio.app/pkg-version\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n" ] } ], "source": [ "import gradio as gr\n", "from llm_engineering.application.rag.pipeline import VideoQAEngine\n", "\n", "# Initialize the VideoQAEngine with the video database\n", "video_db = \"/Users/yufeizhen/Desktop/project/videos\"\n", "engine = VideoQAEngine(video_root=video_db)\n", "\n", "# Define the chat function that processes messages and returns relevant video clips\n", "def chat(message, history):\n", " # Process message to get relevant clips\n", " clips = engine.ask(message)\n", " \n", " # Format for display\n", " clips_gallery = [(str(clip[\"path\"]), \"Relevance: {:.2f}\".format(clip['score'])) for clip in clips]\n", " \n", " # Return both a text response and the 
clips\n", " return \"Here are the relevant video clips for: '{}'\".format(message), clips_gallery\n", "\n", "# Create a more flexible interface using Blocks\n", "with gr.Blocks(theme=\"soft\") as demo:\n", " gr.Markdown(\"# Chat with your Video Library\")\n", " gr.Markdown(\"Ask questions about the video content and get relevant clips. You can continue the conversation with follow-up questions.\")\n", " \n", " # Create chatbot for conversation history\n", " chatbot = gr.Chatbot(height=300)\n", " \n", " # Create gallery to display video clips\n", " gallery = gr.Gallery(label=\"Relevant Video Clips\", show_label=True)\n", " \n", " # Create message input\n", " msg = gr.Textbox(\n", " placeholder=\"Ask about the video content...\", \n", " label=\"Your Question\",\n", " show_label=False\n", " )\n", " \n", " # Define clear button\n", " clear = gr.Button(\"Clear\")\n", " \n", " # Example questions\n", " gr.Examples(\n", " examples=[\n", " \"Using only the videos, explain how ResNets work.\",\n", " \"Using only the videos, explain the advantages of CNNs over fully connected networks.\",\n", " \"Using only the videos, explain the the binary cross entropy loss function.\"\n", " ],\n", " inputs=msg\n", " )\n", " \n", " # Define the chat function that updates both chatbot and gallery\n", " def respond(message, chat_history):\n", " # Get text response and clips\n", " response, clips = chat(message, chat_history)\n", " \n", " # Update chat history\n", " chat_history.append((message, response))\n", " \n", " # Return updated chat history and gallery\n", " return \"\", chat_history, clips\n", " \n", " # Set up the event handlers\n", " msg.submit(respond, [msg, chatbot], [msg, chatbot, gallery])\n", " clear.click(lambda: ([], [], None), None, [chatbot, gallery, msg])" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* Running on local URL: http://127.0.0.1:7861\n", "\u001b[1;35mHTTP Request: GET \u001b[0m\u001b[34mhttp://127.0.0.1:7861/gradio_api/startup-events\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n", "\u001b[1;35mHTTP Request: HEAD \u001b[0m\u001b[34mhttp://127.0.0.1:7861/\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n", "\u001b[1;35mHTTP Request: GET \u001b[0m\u001b[34mhttps://api.gradio.app/v3/tunnel-request\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n", "* Running on public URL: https://48d861a2319613eb9b.gradio.live\n", "\n", "This share link expires in 1 week. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)\n", "\u001b[1;35mHTTP Request: HEAD \u001b[0m\u001b[34mhttps://48d861a2319613eb9b.gradio.live\u001b[1;35m \"HTTP/1.1 200 OK\"\u001b[0m\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "--- Processing query: 'Using only the videos, explain the the binary cross entropy loss function.' ---\n", "Retrieving relevant video segments...\n", "Encoding query with CLIP: 'Using only the videos, explain the the binary cros...'\n", "Cleaned text for CLIP: Using only the videos, explain the the binary cros...\n", "Query embedded successfully\n", "Sending search request to Qdrant (attempt 1/5)\n", "Creating fresh connection to Qdrant...\n", "Search successful, found 3 results\n", "Retrieval completed in 0.07 seconds\n", "Found 3 relevant video segments\n", "\n", "Processing result 1/3:\n", " Video ID: eFgkZKhNUdM\n", " Timestamps: 1270.0s - 1302.0s\n", " Score: 0.8472\n", " Found alternative video path: /Users/yufeizhen/Desktop/project/videos/eFgkZKhNUdM/eFgkZKhNUdM.mp4\n", " Creating clip to: clips/clip_eFgkZKhNUdM_1270_0.847.mp4\n" ] } ], "source": [ "demo.launch(share=True)" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [ "gFdZON-DKOlx", "KVkt7HBUKTig", "d9KkwKhjKXgr", "6NQUu-rUKfln" ], "gpuType": "A100", "machine_shape": "hm", "provenance": [] }, "kernelspec": { "display_name": "llm-engineering---hH_lZm-py3.11", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.2" }, "vscode": { "interpreter": { "hash": "b62b85bf5dbe6dfb396652aed826e60c089a2288076109f466cd63de941fd51e" } }, "widgets": { "application/vnd.jupyter.widget-state+json": { "01c5ce2d579c4b7d8ba2607fc6d76328": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_15bf9c9d1cf44b5abee88dddc74a927b", "IPY_MODEL_05fb5c50098f4e5997c07afed3b9383e", "IPY_MODEL_cf25f66f1ca943be8af0c7ce14727f21" ], "layout": "IPY_MODEL_d3f93597cb6640a29656daa95153f07f" } }, "05fb5c50098f4e5997c07afed3b9383e": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_7d39d74185e549e483dd3bc9a5fe8c76", "max": 1, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_c66b78a449c6451fbe757d1082304de5", "value": 1 } }, "15bf9c9d1cf44b5abee88dddc74a927b": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", 
"_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_2aa0d8f2c2bc4b3083edfcc5dc7ccc10", "placeholder": "​", "style": "IPY_MODEL_a0e715cd015d499ba015c54e810bf81d", "value": "Batches: 100%" } }, "2aa0d8f2c2bc4b3083edfcc5dc7ccc10": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "5072655a4a724daa9d7b660e1709fa11": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "7d39d74185e549e483dd3bc9a5fe8c76": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "a0e715cd015d499ba015c54e810bf81d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "c66b78a449c6451fbe757d1082304de5": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { 
"_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "cb41643480ca4834bcc03611c4783326": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "cf25f66f1ca943be8af0c7ce14727f21": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_cb41643480ca4834bcc03611c4783326", "placeholder": "​", "style": "IPY_MODEL_5072655a4a724daa9d7b660e1709fa11", "value": " 1/1 [00:00<00:00,  3.12it/s]" } }, "d3f93597cb6640a29656daa95153f07f": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } } } } }, "nbformat": 4, "nbformat_minor": 0 }