gradio_app.log
2025-05-25 06:56:15,917 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-25 06:56:15,937 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-25 06:56:15,946 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-25 06:56:14,370 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
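The GET to https://api.gradio.app/v3/tunnel-request above is the call Gradio makes when a public share link is requested. A minimal launch sketch consistent with these startup lines (the Blocks layout and the function names are assumptions, not taken from this app) would be:

```python
# Minimal Gradio launch sketch. The port (7860) and the share tunnel match the
# startup requests logged above; the demo contents are placeholders.
import gradio as gr

def generate(topic: str) -> str:
    # Placeholder for the real video-generation entry point.
    return f"Queued job for topic: {topic}"

with gr.Blocks() as demo:
    topic = gr.Textbox(label="Topic")
    status = gr.Textbox(label="Status")
    topic.submit(generate, inputs=topic, outputs=status)

# share=True is what triggers the api.gradio.app tunnel-request seen in the log.
demo.launch(server_port=7860, share=True)
```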
2025-05-25 06:57:30,387 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:57:37,096 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 06:57:37,664 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:57:42,160 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 06:57:42,195 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:57:45,702 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 06:57:45,731 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:57:46,850 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 06:57:46,884 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:57:51,613 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 06:57:51,648 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:57:55,180 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 06:57:55,219 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:57:58,621 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:58:02,122 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:58:05,752 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:58:09,486 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 06:58:13,000 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
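The same ibm-granite/granite-embedding-30m-english checkpoint is reloaded a dozen times during this startup window. A cached loader (a sketch; the helper name is an assumption, not part of the original code) would keep a single instance in memory:

```python
# Sketch: reuse one SentenceTransformer instance instead of reloading the
# ibm-granite/granite-embedding-30m-english checkpoint on every call.
from functools import lru_cache
from sentence_transformers import SentenceTransformer

@lru_cache(maxsize=None)
def get_embedder(model_name: str = "ibm-granite/granite-embedding-30m-english") -> SentenceTransformer:
    return SentenceTransformer(model_name)

# Every caller shares the same cached model object.
embeddings = get_embedder().encode(["Fourier transform"])
```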
2025-05-25 06:58:16,595 - src.core.code_generator - INFO - CodeGenerator initialized with RAG: True, Context Learning: False
2025-05-25 06:58:16,597 - __main__ - INFO - Starting job 7e0fc9f1-7dc6-4186-8c48-e66a97b0a14d for topic: Fourier transform
2025-05-25 06:58:16,598 - __main__ - INFO - Running generate_video_pipeline for topic: Fourier transform
2025-05-25 06:58:16,598 - __main__ - INFO - Starting video generation pipeline for job 7e0fc9f1-7dc6-4186-8c48-e66a97b0a14d
2025-05-25 06:58:16,598 - __main__ - INFO - Job 7e0fc9f1-7dc6-4186-8c48-e66a97b0a14d progress: 15% - Starting video generation pipeline...
2025-05-25 06:58:16,607 - __main__ - INFO - Job 7e0fc9f1-7dc6-4186-8c48-e66a97b0a14d progress: 15% - Creating scene outline...
2025-05-25 06:58:16,638 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 06:58:15,479 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 06:58:15,500 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 06:58:29,056 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 06:58:29,062 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
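The sequence above shows a 429 from gemini-1.5-pro-002 followed immediately by a successful call to gemini-2.5-flash-preview-04-17. A minimal fallback wrapper around litellm.completion that reproduces this behaviour (a sketch; the wrapper itself is an assumption, not the app's actual code) could look like:

```python
# Sketch: fall back to a secondary Gemini model when the primary is rate limited.
import litellm

PRIMARY = "gemini/gemini-1.5-pro-002"
FALLBACK = "gemini/gemini-2.5-flash-preview-04-17"

def complete_with_fallback(messages):
    try:
        return litellm.completion(model=PRIMARY, messages=messages)
    except litellm.RateLimitError:
        # Mirrors the 429 -> fallback pattern visible in the log above.
        return litellm.completion(model=FALLBACK, messages=messages)

resp = complete_with_fallback([{"role": "user", "content": "Outline scenes for: Fourier transform"}])
print(resp.choices[0].message.content)
```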
2025-05-25 06:58:29,070 - __main__ - INFO - Job 7e0fc9f1-7dc6-4186-8c48-e66a97b0a14d progress: 25% - Generating implementation plans...
2025-05-25 06:58:29,150 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 06:58:29,545 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 06:58:29,562 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 06:58:30,910 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 06:58:30,931 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 06:58:32,275 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 06:58:32,290 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 06:58:33,613 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 06:58:33,635 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 06:58:34,045 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 06:58:34,068 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 06:58:34,465 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 06:58:34,481 - __main__ - INFO - Job 7e0fc9f1-7dc6-4186-8c48-e66a97b0a14d progress: 35% - Generating code for scenes...
2025-05-25 06:58:34,598 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 06:58:35,182 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 06:58:35,193 - src.core.code_generator - ERROR - JSONDecodeError when parsing RAG queries for code generation: Expecting value: line 1 column 1 (char 0)
2025-05-25 06:58:35,194 - src.core.code_generator - ERROR - Response text was: litellm.RateLimitError: litellm.RateLimitError: VertexAIException - {
"error": {
"code": 429,
"message": "You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits.",
"status": "RESOURCE_EXHAUSTED",
"details": [
{
"@type": "type.googleapis.com/google.rpc.QuotaFailure",
"violations": [
{
"quotaMetric": "generativelanguage.g...
2025-05-25 06:58:35,195 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 06:59:56,806 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 06:59:56,810 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 06:59:56,812 - src.core.code_generator - INFO - Successfully generated code for Fourier transform scene 1
2025-05-25 06:59:56,822 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 06:59:57,225 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 06:59:57,266 - src.core.code_generator - ERROR - JSONDecodeError when parsing RAG queries for code generation: Expecting value: line 1 column 1 (char 0)
2025-05-25 06:59:57,266 - src.core.code_generator - ERROR - Response text was: litellm.RateLimitError: litellm.RateLimitError: VertexAIException - {
"error": {
"code": 429,
"message": "You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits.",
"status": "RESOURCE_EXHAUSTED",
"details": [
{
"@type": "type.googleapis.com/google.rpc.QuotaFailure",
"violations": [
{
"quotaMetric": "generativelanguage.g...
2025-05-25 06:59:57,268 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:00:00,534 - LiteLLM - INFO -
LiteLLM completion() model= gemini-1.5-pro-002; provider = gemini
2025-05-25 07:00:01,361 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 429 Too Many Requests"
2025-05-25 07:00:01,375 - src.core.code_generator - ERROR - JSONDecodeError when parsing RAG queries for error fix: Expecting value: line 1 column 1 (char 0)
2025-05-25 07:00:01,376 - src.core.code_generator - ERROR - Response text was: litellm.RateLimitError: litellm.RateLimitError: VertexAIException - {
"error": {
"code": 429,
"message": "You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits.",
"status": "RESOURCE_EXHAUSTED",
"details": [
{
"@type": "type.googleapis.com/google.rpc.QuotaFailure",
"violations": [
{
"quotaMetric": "generativelanguage.g...
2025-05-25 07:00:01,377 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:10:04,502 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-25 07:10:04,509 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-25 07:10:04,534 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-25 07:10:05,494 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-25 07:10:27,323 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:10:35,474 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 07:10:35,965 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:10:41,213 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 07:10:41,242 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:10:44,675 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 07:10:44,701 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:10:49,230 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 07:10:49,256 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:10:52,817 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 07:10:52,843 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:10:54,174 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 07:10:54,205 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:10:57,678 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:11:01,569 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:11:05,099 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:11:08,551 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:11:12,065 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 07:11:15,636 - src.core.code_generator - INFO - CodeGenerator initialized with RAG: True, Context Learning: False
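The initialization line above reports the two feature flags the generator runs with. A constructor that would emit it might look like the sketch below; the real src.core.code_generator.CodeGenerator signature is not shown in this log, so the parameter names are assumptions.

```python
# Sketch only: a constructor that would produce the initialization line above.
import logging

logger = logging.getLogger("src.core.code_generator")

class CodeGenerator:
    def __init__(self, use_rag: bool = True, use_context_learning: bool = False):
        self.use_rag = use_rag
        self.use_context_learning = use_context_learning
        logger.info("CodeGenerator initialized with RAG: %s, Context Learning: %s",
                    use_rag, use_context_learning)
```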
2025-05-25 07:11:15,639 - __main__ - INFO - Starting job c9beddff-bb95-4d59-8aca-69c9a51a6efc for topic: Fourier transform
2025-05-25 07:11:15,639 - __main__ - INFO - Running generate_video_pipeline for topic: Fourier transform
2025-05-25 07:11:15,639 - __main__ - INFO - Starting video generation pipeline for job c9beddff-bb95-4d59-8aca-69c9a51a6efc
2025-05-25 07:11:15,640 - __main__ - INFO - Job c9beddff-bb95-4d59-8aca-69c9a51a6efc progress: 15% - Starting video generation pipeline...
2025-05-25 07:11:15,664 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:11:24,617 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:11:24,623 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:11:24,737 - __main__ - INFO - Job c9beddff-bb95-4d59-8aca-69c9a51a6efc progress: 15% - Creating scene outline...
2025-05-25 07:11:24,742 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:11:33,357 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:11:33,362 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:11:33,969 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:11:52,239 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:11:52,247 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:11:52,251 - src.core.code_generator - INFO - Successfully generated code for Fourier transform scene 1
2025-05-25 07:11:52,263 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:11:53,727 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:12:02,589 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:12:02,594 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:12:02,825 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:12:05,650 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:12:05,652 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:12:05,833 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:12:25,934 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:12:25,946 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:12:25,949 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 1
2025-05-25 07:12:30,758 - src.core.code_generator - INFO - Using cached RAG error fix queries for Fourier transform_scene1
2025-05-25 07:12:30,906 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:12:33,377 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:12:33,383 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:12:33,385 - src.core.code_generator - INFO - Successfully generated code for Fourier transform scene 2
2025-05-25 07:12:33,395 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:12:35,069 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:12:42,269 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:12:42,274 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:12:42,451 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:12:48,910 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:12:48,913 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:12:49,212 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 07:12:54,079 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 07:12:54,093 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 07:12:54,095 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 1
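The "Using cached RAG error fix queries for Fourier transform_scene1" lines in this run indicate the generator keys its retrieval queries by topic and scene. A minimal cache with that key shape (a sketch; the helper and its storage are assumptions, not the app's code) is:

```python
# Sketch: cache RAG queries per "<topic>_scene<N>" so repeated fix attempts for
# the same scene skip an extra LLM call, as the "Using cached ..." lines suggest.
import logging
from typing import Callable

logger = logging.getLogger("src.core.code_generator")
_rag_query_cache: dict[str, list[str]] = {}

def get_rag_queries(topic: str, scene: int, generate: Callable[[str, int], list[str]]) -> list[str]:
    key = f"{topic}_scene{scene}"
    if key in _rag_query_cache:
        logger.info("Using cached RAG queries for %s", key)
        return _rag_query_cache[key]
    queries = generate(topic, scene)  # the expensive LLM call
    _rag_query_cache[key] = queries
    return queries
```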
2025-05-25 08:04:58,578 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-25 08:04:58,607 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-25 08:04:58,607 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-25 08:04:59,519 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-25 08:14:03,765 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:10,315 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:14:10,838 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:15,140 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:14:15,163 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:19,394 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:14:19,419 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:20,217 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:14:20,243 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:23,715 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:14:23,739 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:27,029 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:14:27,061 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:30,468 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:33,741 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:37,092 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:40,504 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:14:43,796 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
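Each startup also prints Chroma's "Anonymized telemetry enabled" notice several times. If that is unwanted, it can be switched off through the client settings (a sketch using chromadb's documented Settings flag; the collection name is illustrative, not taken from this app):

```python
# Sketch: create a Chroma client with anonymized telemetry disabled.
import chromadb
from chromadb.config import Settings

client = chromadb.Client(Settings(anonymized_telemetry=False))
collection = client.get_or_create_collection("manim_docs")  # name is an assumption
```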
2025-05-25 08:14:47,290 - src.core.code_generator - INFO - CodeGenerator initialized with RAG: True, Context Learning: False
2025-05-25 08:14:47,293 - __main__ - INFO - Starting job fb610799-54aa-4a07-8ead-2b566374b866 for topic: Fourier transform
2025-05-25 08:14:47,293 - __main__ - INFO - Running generate_video_pipeline for topic: Fourier transform
2025-05-25 08:14:47,293 - __main__ - INFO - Starting video generation pipeline for job fb610799-54aa-4a07-8ead-2b566374b866
2025-05-25 08:14:47,294 - __main__ - INFO - Job fb610799-54aa-4a07-8ead-2b566374b866 progress: 15% - Starting video generation pipeline...
2025-05-25 08:14:47,333 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:14:50,123 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:14:50,128 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:14:50,331 - __main__ - INFO - Job fb610799-54aa-4a07-8ead-2b566374b866 progress: 15% - Creating scene outline...
2025-05-25 08:14:50,347 - src.core.code_generator - INFO - Using cached RAG queries for Fourier transform_scene1
2025-05-25 08:14:50,795 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:15:11,878 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:15:11,887 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:15:11,889 - src.core.code_generator - INFO - Successfully generated code for Fourier transform scene 1
2025-05-25 08:15:11,978 - src.core.code_generator - INFO - Using cached RAG queries for Fourier transform_scene2
2025-05-25 08:15:12,181 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:15:16,237 - src.core.code_generator - INFO - Using cached RAG error fix queries for Fourier transform_scene1
2025-05-25 08:15:16,407 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:15:33,858 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:15:33,870 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:15:33,872 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 1
2025-05-25 08:15:39,739 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:15:39,748 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:15:39,751 - src.core.code_generator - INFO - Successfully generated code for Fourier transform scene 2
2025-05-25 08:15:39,779 - src.core.code_generator - INFO - Using cached RAG queries for Fourier transform_scene3
2025-05-25 08:15:40,752 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:15:43,060 - src.core.code_generator - INFO - Using cached RAG error fix queries for Fourier transform_scene2
2025-05-25 08:15:43,377 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:15:56,780 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:15:56,791 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:15:56,793 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 2
2025-05-25 08:15:56,929 - src.core.code_generator - INFO - Using cached RAG error fix queries for Fourier transform_scene1
2025-05-25 08:15:57,108 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:16:03,372 - src.core.code_generator - INFO - Using cached RAG error fix queries for Fourier transform_scene2
2025-05-25 08:16:03,636 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:16:17,461 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:16:17,473 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:16:17,477 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 1
2025-05-25 08:16:37,974 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:16:37,976 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:16:37,977 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 2
2025-05-25 08:17:00,476 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:17:00,479 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:17:00,482 - src.core.code_generator - INFO - Successfully generated code for Fourier transform scene 3
2025-05-25 08:17:00,490 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:17:07,803 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:17:09,234 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:17:09,236 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:17:09,608 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:17:18,399 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:17:18,403 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:17:18,672 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:17:44,368 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:17:44,370 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:17:44,372 - src.core.code_generator - INFO - Successfully generated code for Fourier transform scene 4
2025-05-25 08:17:44,373 - __main__ - INFO - Job fb610799-54aa-4a07-8ead-2b566374b866 progress: 25% - Generating implementation plans...
2025-05-25 08:17:44,386 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:17:48,804 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:17:48,807 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:17:48,809 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 3
2025-05-25 08:17:51,518 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:17:51,520 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:17:52,309 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:17:54,970 - src.core.code_generator - INFO - Using cached RAG error fix queries for Fourier transform_scene3
2025-05-25 08:17:55,238 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:18:26,336 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:18:26,340 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:18:26,343 - src.core.code_generator - INFO - Successfully generated code for Fourier transform scene 5
2025-05-25 08:18:26,354 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:18:37,247 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:18:37,250 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:18:37,765 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:18:39,769 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:18:44,378 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:18:44,699 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:18:44,707 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:18:44,710 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 3
2025-05-25 08:18:47,281 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:18:47,283 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:18:47,555 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:18:54,166 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:18:54,168 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:18:54,392 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:18:55,658 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:18:55,667 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:18:55,669 - src.core.code_generator - INFO - Successfully generated code for Fourier transform scene 6
2025-05-25 08:18:55,670 - __main__ - INFO - Job fb610799-54aa-4a07-8ead-2b566374b866 progress: 35% - Generating code for scenes...
2025-05-25 08:19:00,676 - __main__ - INFO - Job fb610799-54aa-4a07-8ead-2b566374b866 progress: 45% - Compiling Manim code...
2025-05-25 08:19:05,677 - __main__ - INFO - Job fb610799-54aa-4a07-8ead-2b566374b866 progress: 60% - Rendering scenes...
2025-05-25 08:19:05,729 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:19:10,697 - __main__ - INFO - Job fb610799-54aa-4a07-8ead-2b566374b866 progress: 80% - Combining videos...
2025-05-25 08:19:15,709 - __main__ - INFO - Job fb610799-54aa-4a07-8ead-2b566374b866 progress: 90% - Finalizing video...
2025-05-25 08:19:16,638 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:19:16,640 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:19:16,642 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 5
2025-05-25 08:19:18,974 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:19:18,976 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:19:19,460 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:19:20,366 - src.core.code_generator - INFO - Using cached RAG error fix queries for Fourier transform_scene5
2025-05-25 08:19:20,735 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:19:28,978 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:19:28,981 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:19:28,984 - src.core.code_generator - ERROR - Error during code extraction attempt 1: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,984 - src.core.code_generator - ERROR - Error during code extraction attempt 2: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,985 - src.core.code_generator - ERROR - Error during code extraction attempt 3: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,985 - src.core.code_generator - ERROR - Error during code extraction attempt 4: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,986 - src.core.code_generator - ERROR - Error during code extraction attempt 5: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,987 - src.core.code_generator - ERROR - Error during code extraction attempt 6: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,987 - src.core.code_generator - ERROR - Error during code extraction attempt 7: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,988 - src.core.code_generator - ERROR - Error during code extraction attempt 8: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,989 - src.core.code_generator - ERROR - Error during code extraction attempt 9: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,989 - src.core.code_generator - ERROR - Error during code extraction attempt 10: expected string or bytes-like object, got 'NoneType'
2025-05-25 08:19:28,990 - src.core.code_generator - ERROR - Error fixing code for Fourier transform scene 4: Failed to extract code pattern after 10 attempts. Pattern: ```python(.*)```
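The extraction failures above show the generator searching the model output for the pattern ```python(.*)``` and receiving None instead of text (hence "expected string or bytes-like object, got 'NoneType'"). A sketch of that extraction with a None guard and the DOTALL flag the pattern needs:

```python
# Sketch: pull Manim code out of a fenced ```python ... ``` block, guarding against
# a None response (the situation behind the "got 'NoneType'" errors above).
import re

CODE_PATTERN = re.compile(r"```python(.*)```", re.DOTALL)

def extract_code(response_text):
    if not isinstance(response_text, str):
        raise ValueError("Model returned no text to extract code from")
    match = CODE_PATTERN.search(response_text)
    if match is None:
        raise ValueError(f"Failed to extract code pattern. Pattern: {CODE_PATTERN.pattern}")
    return match.group(1).strip()
```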
2025-05-25 08:19:35,070 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:19:35,080 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:19:35,083 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 6
2025-05-25 08:19:40,525 - src.core.code_generator - INFO - Using cached RAG error fix queries for Fourier transform_scene4
2025-05-25 08:19:40,783 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:19:42,057 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:19:42,059 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:19:42,062 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 5
2025-05-25 08:20:12,212 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:20:12,220 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:20:12,227 - src.core.code_generator - INFO - Successfully fixed code errors for Fourier transform scene 4
2025-05-25 08:20:46,751 - __main__ - INFO - Video generation pipeline completed for job fb610799-54aa-4a07-8ead-2b566374b866
2025-05-25 08:20:47,118 - __main__ - INFO - Job fb610799-54aa-4a07-8ead-2b566374b866 completed successfully in 359.46 seconds
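Job fb610799 advances through fixed checkpoints (15, 25, 35, 45, 60, 80, 90%) before completing in 359.46 seconds. A small reporter that emits progress lines in the same format (a sketch; the helper is an assumption, not the app's code) would be:

```python
# Sketch: emit "Job <id> progress: <pct>% - <message>" lines like those above.
import logging

logger = logging.getLogger("__main__")

def report_progress(job_id: str, pct: int, message: str) -> None:
    logger.info("Job %s progress: %d%% - %s", job_id, pct, message)

report_progress("fb610799-54aa-4a07-8ead-2b566374b866", 90, "Finalizing video...")
```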
2025-05-25 08:41:03,341 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-25 08:41:03,347 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-25 08:41:03,368 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-25 08:41:04,279 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-25 08:51:15,634 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:22,534 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:51:23,036 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:26,586 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:51:26,620 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:31,189 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:51:31,218 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:34,618 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:51:34,642 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:38,937 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:51:38,961 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:40,882 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 08:51:40,909 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:44,329 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:50,832 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:54,320 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:51:57,925 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:52:01,360 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 08:52:04,908 - src.core.code_generator - INFO - CodeGenerator initialized with RAG: True, Context Learning: False
2025-05-25 08:52:04,913 - __main__ - INFO - Starting job 644fb913-8734-4ec4-96fa-5f1be8a191b2 for topic: Derivative
2025-05-25 08:52:04,914 - __main__ - INFO - Running generate_video_pipeline for topic: Derivative
2025-05-25 08:52:04,914 - __main__ - INFO - Starting video generation pipeline for job 644fb913-8734-4ec4-96fa-5f1be8a191b2
2025-05-25 08:52:04,915 - __main__ - INFO - Job 644fb913-8734-4ec4-96fa-5f1be8a191b2 progress: 15% - Starting video generation pipeline...
2025-05-25 08:52:04,921 - __main__ - INFO - Job 644fb913-8734-4ec4-96fa-5f1be8a191b2 progress: 15% - Creating scene outline...
2025-05-25 08:52:04,950 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:52:09,925 - __main__ - INFO - Job 644fb913-8734-4ec4-96fa-5f1be8a191b2 progress: 25% - Generating implementation plans...
2025-05-25 08:52:11,070 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:52:11,072 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:52:11,075 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:52:29,382 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:52:29,388 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:52:29,407 - __main__ - INFO - Job 644fb913-8734-4ec4-96fa-5f1be8a191b2 progress: 35% - Generating code for scenes...
2025-05-25 08:52:29,488 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:52:35,406 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:52:35,408 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:52:36,080 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:52:54,249 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:52:54,254 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:52:54,256 - __main__ - INFO - Job 644fb913-8734-4ec4-96fa-5f1be8a191b2 progress: 45% - Compiling Manim code...
2025-05-25 08:52:54,265 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:53:00,995 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:53:01,000 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:53:01,216 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:53:46,494 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:53:46,503 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:53:46,505 - __main__ - INFO - Job 644fb913-8734-4ec4-96fa-5f1be8a191b2 progress: 60% - Rendering scenes...
2025-05-25 08:53:46,510 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:53:55,042 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:53:55,044 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:53:55,339 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:54:14,614 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:54:14,616 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:54:14,617 - __main__ - INFO - Job 644fb913-8734-4ec4-96fa-5f1be8a191b2 progress: 80% - Combining videos...
2025-05-25 08:54:14,632 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:54:22,152 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:54:22,153 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:54:22,393 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:54:42,545 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:54:42,554 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:54:42,581 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:54:52,106 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:54:52,110 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:54:52,343 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:55:10,577 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:55:10,584 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:55:10,585 - __main__ - INFO - Job 644fb913-8734-4ec4-96fa-5f1be8a191b2 progress: 90% - Finalizing video...
2025-05-25 08:55:10,596 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:55:17,688 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:55:17,691 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:55:17,915 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:55:41,485 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:55:41,493 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:55:41,505 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:55:48,990 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:55:48,994 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:55:49,253 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:56:10,782 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:56:10,795 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:56:10,839 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:56:21,394 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:56:21,401 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:56:21,652 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:56:52,792 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:56:52,797 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:56:52,811 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:57:00,944 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:57:00,946 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:57:01,158 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:57:36,094 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:57:36,102 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:57:36,125 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:57:41,311 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:57:41,317 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:57:41,566 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:58:08,260 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:58:08,272 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:58:08,292 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:58:11,954 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:58:11,956 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:58:12,211 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:58:37,109 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:58:37,116 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:58:37,150 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:58:40,617 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:58:40,623 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:58:40,864 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:59:11,378 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:59:11,390 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:59:11,408 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:59:22,619 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:59:22,621 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:59:22,868 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:59:51,197 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:59:51,200 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:59:51,224 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 08:59:57,741 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 08:59:57,743 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 08:59:57,960 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:00:38,684 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:00:38,696 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:00:38,705 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:00:43,068 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:00:43,073 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:00:43,330 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:01:07,543 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:01:07,550 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:01:07,596 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:01:13,017 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:01:13,019 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:01:13,237 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:01:32,132 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:01:32,140 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:01:32,171 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:01:40,396 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:01:40,402 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:01:40,643 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:02:01,365 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:02:01,377 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:02:01,387 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:02:08,883 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:02:08,890 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:02:09,088 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:02:43,929 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:02:43,942 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:02:44,128 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:02:52,325 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:02:52,326 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:02:52,547 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:03:22,656 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:03:22,659 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:03:22,661 - src.core.code_generator - INFO - Successfully generated code for Derivative scene 1
2025-05-25 09:03:22,673 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:03:32,110 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:03:32,112 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:03:32,377 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:03:44,127 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:03:57,636 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:03:57,637 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:03:57,638 - src.core.code_generator - INFO - Successfully generated code for Derivative scene 2
2025-05-25 09:03:57,646 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:03:58,842 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:03:58,844 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:03:59,102 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:04:06,667 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:04:06,670 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:04:06,903 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:04:30,570 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:04:30,573 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:04:30,575 - src.core.code_generator - INFO - Successfully fixed code errors for Derivative scene 1
2025-05-25 09:04:41,252 - src.core.code_generator - INFO - Using cached RAG error fix queries for Derivative_scene1
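The line above shows the generator reusing previously built RAG error-fix queries, keyed per scene (here Derivative_scene1). A minimal sketch of what such a per-scene cache could look like, assuming a dictionary keyed by "<topic>_scene<n>" as the log message suggests; the class and method names are hypothetical, not taken from src.core.code_generator.

```python
from typing import Callable

class RAGQueryCache:
    """Hypothetical per-scene cache for RAG error-fix queries."""

    def __init__(self) -> None:
        self._cache: dict[str, list[str]] = {}

    def get_or_build(self, topic: str, scene: int,
                     build: Callable[[], list[str]]) -> list[str]:
        key = f"{topic}_scene{scene}"  # e.g. "Derivative_scene1"
        if key in self._cache:
            print(f"Using cached RAG error fix queries for {key}")
            return self._cache[key]
        queries = build()              # e.g. ask the LLM to propose queries
        self._cache[key] = queries
        return queries
```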
2025-05-25 09:04:41,516 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:04:39,734 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:04:39,737 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:04:39,739 - src.core.code_generator - INFO - Successfully generated code for Derivative scene 3
2025-05-25 09:04:39,750 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:04:50,684 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:04:50,686 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:04:51,239 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:05:11,225 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:05:11,236 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:05:11,241 - src.core.code_generator - INFO - Successfully fixed code errors for Derivative scene 1
2025-05-25 09:05:25,202 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:05:36,857 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:05:36,861 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:05:36,868 - src.core.code_generator - INFO - Successfully generated code for Derivative scene 4
2025-05-25 09:05:39,678 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:05:39,682 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:05:40,556 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:06:22,724 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:06:31,300 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:06:31,305 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:06:32,033 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:06:32,101 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:06:32,104 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:06:32,106 - src.core.code_generator - INFO - Successfully fixed code errors for Derivative scene 3
2025-05-25 09:06:59,757 - src.core.code_generator - INFO - Using cached RAG error fix queries for Derivative_scene3
2025-05-25 09:07:00,220 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:07:03,326 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:07:10,811 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:07:10,816 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:07:11,109 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:07:24,695 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:07:24,697 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:07:24,698 - src.core.code_generator - INFO - Successfully generated code for Derivative scene 5
2025-05-25 09:07:29,033 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:07:29,037 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:07:29,040 - src.core.code_generator - INFO - Successfully fixed code errors for Derivative scene 3
2025-05-25 09:07:40,809 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:07:40,816 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:07:40,823 - src.core.code_generator - INFO - Successfully fixed code errors for Derivative scene 4
2025-05-25 09:07:40,083 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:07:49,184 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:07:49,186 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:07:49,589 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:08:33,017 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:08:33,023 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:08:33,043 - src.core.code_generator - INFO - Successfully generated code for Derivative scene 6
2025-05-25 09:09:34,668 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:09:42,409 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:09:42,411 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:09:42,805 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:10:05,300 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:10:05,302 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:10:05,305 - src.core.code_generator - INFO - Successfully fixed code errors for Derivative scene 5
2025-05-25 09:10:12,757 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:10:30,614 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:10:30,617 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:10:31,120 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:11:12,659 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:11:12,662 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:11:12,663 - src.core.code_generator - INFO - Successfully fixed code errors for Derivative scene 6
2025-05-25 09:11:24,838 - src.core.code_generator - INFO - Using cached RAG error fix queries for Derivative_scene6
2025-05-25 09:11:25,103 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 09:11:53,292 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 09:11:53,294 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 09:11:53,296 - src.core.code_generator - INFO - Successfully fixed code errors for Derivative scene 6
2025-05-25 09:12:42,724 - __main__ - INFO - Video generation pipeline completed for job 644fb913-8734-4ec4-96fa-5f1be8a191b2
2025-05-25 09:12:43,115 - __main__ - INFO - Job 644fb913-8734-4ec4-96fa-5f1be8a191b2 completed successfully in 1237.81 seconds
2025-05-25 10:56:48,339 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-25 10:56:48,632 - httpx - INFO - HTTP Request: GET http://127.0.0.1:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-25 10:56:48,662 - httpx - INFO - HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"
2025-05-25 10:57:37,716 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:57:46,670 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 10:57:46,977 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:57:51,932 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 10:57:51,958 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:57:53,983 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 10:57:54,007 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:57:57,712 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 10:57:57,738 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:58:01,130 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 10:58:01,158 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:58:04,640 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 10:58:04,668 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:58:08,088 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:58:11,486 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:58:15,182 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:58:18,604 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:58:22,225 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 10:58:25,612 - src.core.code_generator - INFO - CodeGenerator initialized with RAG: True, Context Learning: False
2025-05-25 10:58:25,615 - __main__ - INFO - Starting job b184cf20-7f7d-4518-8b87-bb4551854d94 for topic: Inverse function
2025-05-25 10:58:25,615 - __main__ - INFO - Running generate_video_pipeline for topic: Inverse function
2025-05-25 10:58:25,615 - __main__ - INFO - Starting video generation pipeline for job b184cf20-7f7d-4518-8b87-bb4551854d94
2025-05-25 10:58:25,615 - __main__ - INFO - Job b184cf20-7f7d-4518-8b87-bb4551854d94 progress: 15% - Starting video generation pipeline...
2025-05-25 10:58:25,622 - __main__ - INFO - Job b184cf20-7f7d-4518-8b87-bb4551854d94 progress: 15% - Creating scene outline...
2025-05-25 10:58:25,636 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 10:58:25,431 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 10:58:28,147 - __main__ - INFO - Job b184cf20-7f7d-4518-8b87-bb4551854d94 progress: 25% - Generating implementation plans...
2025-05-25 10:58:33,222 - __main__ - INFO - Job b184cf20-7f7d-4518-8b87-bb4551854d94 progress: 35% - Generating code for scenes...
2025-05-25 10:58:36,202 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 10:58:36,210 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 10:58:38,833 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 10:59:11,691 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 10:59:11,718 - __main__ - INFO - Job b184cf20-7f7d-4518-8b87-bb4551854d94 progress: 45% - Compiling Manim code...
2025-05-25 10:59:11,831 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 10:59:14,690 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 10:59:24,555 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 10:59:25,171 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 10:59:26,481 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 10:59:55,970 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 10:59:55,975 - __main__ - INFO - Job b184cf20-7f7d-4518-8b87-bb4551854d94 progress: 60% - Rendering scenes...
2025-05-25 10:59:55,985 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 10:59:57,653 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:00:08,880 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:00:09,112 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:00:11,873 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:00:52,630 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:00:52,639 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:00:55,856 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:01:04,736 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:01:04,968 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:01:08,225 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:01:40,160 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:01:40,168 - __main__ - INFO - Job b184cf20-7f7d-4518-8b87-bb4551854d94 progress: 80% - Combining videos...
2025-05-25 11:01:40,191 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:01:42,350 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:01:52,036 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:01:52,271 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:01:54,722 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:02:22,131 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:02:22,138 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:02:24,288 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:02:31,552 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:02:31,765 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:02:34,099 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:03:09,498 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:03:09,503 - __main__ - INFO - Job b184cf20-7f7d-4518-8b87-bb4551854d94 progress: 90% - Finalizing video...
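The progress messages for job b184cf20-7f7d-4518-8b87-bb4551854d94 trace the stages of the video generation pipeline. Below is a small sketch reconstructed purely from the percentages and messages in this log; the helper name and data structure are assumptions, not the application's actual code.

```python
# Stage list read directly from the "progress:" lines of this job's log.
PIPELINE_STAGES = [
    (15, "Starting video generation pipeline..."),
    (15, "Creating scene outline..."),
    (25, "Generating implementation plans..."),
    (35, "Generating code for scenes..."),
    (45, "Compiling Manim code..."),
    (60, "Rendering scenes..."),
    (80, "Combining videos..."),
    (90, "Finalizing video..."),
]

def report_progress(job_id: str, percent: int, message: str) -> None:
    """Hypothetical progress reporter matching the log's message format."""
    print(f"Job {job_id} progress: {percent}% - {message}")
```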
2025-05-25 11:03:09,510 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:03:12,731 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:03:23,234 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:03:23,458 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:03:24,389 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:03:59,559 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:03:59,584 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:04:01,552 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:04:12,584 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:04:12,778 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:04:15,833 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:04:55,083 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:04:55,092 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:04:58,809 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:05:12,424 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:05:12,638 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:05:15,206 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:05:47,937 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:05:47,946 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:05:51,141 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:06:03,797 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:06:04,038 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:06:07,520 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:06:50,124 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:06:50,134 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:06:53,086 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:07:01,782 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:07:02,045 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:07:05,560 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:07:37,532 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:07:37,541 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:07:40,213 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:07:48,878 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:07:49,116 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:07:52,137 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:08:36,637 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:08:36,652 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:08:39,031 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:08:49,361 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:08:49,572 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:08:52,381 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:09:23,360 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:09:23,370 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:09:24,726 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:09:32,882 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:09:33,109 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:09:35,909 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:10:28,671 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:10:28,681 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:10:31,325 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:10:35,776 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:10:35,979 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:10:38,862 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:11:25,818 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:11:25,835 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:11:28,179 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:11:38,665 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:11:38,892 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:11:42,004 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:12:40,828 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:12:40,885 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:12:44,894 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:12:57,796 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:12:57,993 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:13:01,382 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:13:36,658 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:13:36,683 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:13:36,978 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:13:49,056 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:13:49,283 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:13:52,583 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:14:28,470 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:14:28,480 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:14:31,467 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:14:41,477 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:14:41,694 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:14:44,410 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:15:55,983 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:15:55,990 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:15:58,500 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:16:06,584 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:16:06,799 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:16:09,698 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:16:44,095 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:16:44,124 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:16:46,907 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:16:58,237 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:16:58,484 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:17:02,427 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:17:35,953 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:17:35,967 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:17:38,182 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:17:50,934 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:17:51,167 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:17:53,984 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:19:06,091 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:19:06,218 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:19:07,394 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:19:23,412 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:19:23,611 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:19:27,098 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:20:11,063 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:20:11,065 - src.core.code_generator - INFO - Successfully generated code for Inverse function scene 1
2025-05-25 11:20:11,076 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:14,359 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:20:14,750 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:16,877 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 200 OK"
2025-05-25 11:20:28,161 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:20:28,467 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:30,009 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:30,045 - src.core.code_generator - WARNING - Attempt 1: Failed to extract code pattern. Retrying...
2025-05-25 11:20:30,046 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:31,157 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:31,166 - src.core.code_generator - WARNING - Attempt 2: Failed to extract code pattern. Retrying...
2025-05-25 11:20:31,167 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:31,591 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 11:20:31,808 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:32,315 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:32,329 - src.core.code_generator - WARNING - Attempt 3: Failed to extract code pattern. Retrying...
2025-05-25 11:20:32,331 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:32,988 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:33,012 - src.core.code_generator - WARNING - Attempt 1: Failed to extract code pattern. Retrying...
2025-05-25 11:20:33,014 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:33,443 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:33,456 - src.core.code_generator - WARNING - Attempt 4: Failed to extract code pattern. Retrying...
2025-05-25 11:20:33,457 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:34,089 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:34,111 - src.core.code_generator - WARNING - Attempt 2: Failed to extract code pattern. Retrying...
2025-05-25 11:20:34,112 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:34,486 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:34,495 - src.core.code_generator - WARNING - Attempt 5: Failed to extract code pattern. Retrying...
2025-05-25 11:20:34,496 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:35,255 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:35,262 - src.core.code_generator - WARNING - Attempt 3: Failed to extract code pattern. Retrying...
2025-05-25 11:20:35,263 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:35,650 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:35,690 - src.core.code_generator - WARNING - Attempt 6: Failed to extract code pattern. Retrying...
2025-05-25 11:20:35,692 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:36,456 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:36,470 - src.core.code_generator - WARNING - Attempt 4: Failed to extract code pattern. Retrying...
2025-05-25 11:20:36,471 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:36,871 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:36,898 - src.core.code_generator - WARNING - Attempt 7: Failed to extract code pattern. Retrying...
2025-05-25 11:20:36,900 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:37,649 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:37,661 - src.core.code_generator - WARNING - Attempt 5: Failed to extract code pattern. Retrying...
2025-05-25 11:20:37,663 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:38,016 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:38,030 - src.core.code_generator - WARNING - Attempt 8: Failed to extract code pattern. Retrying...
2025-05-25 11:20:38,031 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:38,860 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:38,870 - src.core.code_generator - WARNING - Attempt 6: Failed to extract code pattern. Retrying...
2025-05-25 11:20:38,875 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:39,169 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:39,185 - src.core.code_generator - WARNING - Attempt 9: Failed to extract code pattern. Retrying...
2025-05-25 11:20:39,187 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:40,035 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:40,056 - src.core.code_generator - WARNING - Attempt 7: Failed to extract code pattern. Retrying...
2025-05-25 11:20:40,058 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:40,302 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:40,313 - src.core.code_generator - ERROR - Error fixing code for Inverse function scene 1: Failed to extract code pattern after 10 attempts. Pattern: ```python(.*)```
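The error above quotes the regular expression used to pull a fenced Python block out of the model response and shows a cap of 10 attempts. A minimal sketch of that extract-and-retry loop follows, assuming re.DOTALL is applied so the pattern can span multiple lines; the function names and the call_llm placeholder are assumptions, not the generator's actual implementation.

```python
import re
from typing import Callable, Optional

CODE_PATTERN = r"```python(.*)```"  # pattern quoted in the error message
MAX_ATTEMPTS = 10                   # the log reports failure "after 10 attempts"

def extract_code(response_text: str) -> Optional[str]:
    """Return the first fenced Python block in an LLM response, or None."""
    match = re.search(CODE_PATTERN, response_text, re.DOTALL)
    return match.group(1).strip() if match else None

def generate_with_retries(call_llm: Callable[[], str]) -> str:
    """Call the model until a code block can be extracted, up to MAX_ATTEMPTS.

    call_llm stands in for the real completion call; the actual generator
    presumably also adjusts the prompt between attempts.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        code = extract_code(call_llm())
        if code is not None:
            return code
        print(f"Attempt {attempt}: Failed to extract code pattern. Retrying...")
    raise ValueError(
        f"Failed to extract code pattern after {MAX_ATTEMPTS} attempts. "
        f"Pattern: {CODE_PATTERN}"
    )
```

When every attempt returns a 402 error body, as here, no fenced block ever appears in the response, so all 10 attempts fail and the scene is abandoned.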
2025-05-25 11:20:41,247 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:41,257 - src.core.code_generator - WARNING - Attempt 8: Failed to extract code pattern. Retrying...
2025-05-25 11:20:41,259 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:42,293 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:37,409 - src.core.code_generator - WARNING - Attempt 9: Failed to extract code pattern. Retrying...
2025-05-25 11:20:37,410 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:37,599 - src.core.code_generator - INFO - Using cached RAG error fix queries for Inverse function_scene1
2025-05-25 11:20:37,774 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:38,407 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:38,413 - src.core.code_generator - ERROR - Error generating Manim code for Inverse function scene 2: Failed to extract code pattern after 10 attempts. Pattern: ```python(.*)```
2025-05-25 11:20:38,422 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:38,875 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:38,885 - src.core.code_generator - WARNING - Attempt 1: Failed to extract code pattern. Retrying...
2025-05-25 11:20:38,886 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:39,502 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:39,508 - src.core.code_generator - ERROR - JSONDecodeError when parsing RAG queries for code generation: Expecting value: line 1 column 1 (char 0)
2025-05-25 11:20:39,508 - src.core.code_generator - ERROR - Response text was: Error: litellm.APIError: APIError: OpenrouterException - {"error":{"message":"This request requires more credits, or fewer max_tokens. You requested up to 113710 tokens, but can only afford 22475. To increase, visit https://openrouter.ai/settings/credits and upgrade to a paid account","code":402,"metadata":{"provider_name":null}},"user_id":"user_2xQBPeHSKPg6iSDzL846iuIkBKZ"}...
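The JSONDecodeError above occurs because the RAG-query prompt came back as an OpenRouter error string rather than JSON. A hedged sketch of parsing that tolerates such payloads is shown below; the function name and the empty-list fallback are assumptions about a possible defensive pattern, not the application's actual handling.

```python
import json
import logging

logger = logging.getLogger("src.core.code_generator")

def parse_rag_queries(response_text: str) -> list[str]:
    """Parse RAG queries from an LLM response, tolerating error payloads.

    If the provider returned an error string (e.g. an OpenRouter 402 message)
    instead of JSON, log it and fall back to an empty query list.
    """
    try:
        queries = json.loads(response_text)
    except json.JSONDecodeError as exc:
        logger.error(
            "JSONDecodeError when parsing RAG queries for code generation: %s", exc
        )
        logger.error("Response text was: %s...", response_text[:300])
        return []
    return queries if isinstance(queries, list) else []
```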
2025-05-25 11:20:39,511 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:39,915 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:39,928 - src.core.code_generator - WARNING - Attempt 2: Failed to extract code pattern. Retrying...
2025-05-25 11:20:39,930 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:40,630 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:40,637 - src.core.code_generator - WARNING - Attempt 1: Failed to extract code pattern. Retrying...
2025-05-25 11:20:40,639 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:41,110 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:41,125 - src.core.code_generator - WARNING - Attempt 3: Failed to extract code pattern. Retrying...
2025-05-25 11:20:41,127 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:41,681 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:41,691 - src.core.code_generator - WARNING - Attempt 2: Failed to extract code pattern. Retrying...
2025-05-25 11:20:41,692 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:42,183 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:42,196 - src.core.code_generator - WARNING - Attempt 4: Failed to extract code pattern. Retrying...
2025-05-25 11:20:42,197 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:42,816 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:42,839 - src.core.code_generator - WARNING - Attempt 3: Failed to extract code pattern. Retrying...
2025-05-25 11:20:42,841 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:43,261 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:43,289 - src.core.code_generator - WARNING - Attempt 5: Failed to extract code pattern. Retrying...
2025-05-25 11:20:43,290 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:44,009 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:44,015 - src.core.code_generator - WARNING - Attempt 4: Failed to extract code pattern. Retrying...
2025-05-25 11:20:44,016 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:44,518 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:44,530 - src.core.code_generator - WARNING - Attempt 6: Failed to extract code pattern. Retrying...
2025-05-25 11:20:44,531 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:45,146 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:45,152 - src.core.code_generator - WARNING - Attempt 5: Failed to extract code pattern. Retrying...
2025-05-25 11:20:45,153 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:45,652 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:45,662 - src.core.code_generator - WARNING - Attempt 7: Failed to extract code pattern. Retrying...
2025-05-25 11:20:45,663 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:46,269 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:46,275 - src.core.code_generator - WARNING - Attempt 6: Failed to extract code pattern. Retrying...
2025-05-25 11:20:46,277 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:46,797 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:46,809 - src.core.code_generator - WARNING - Attempt 8: Failed to extract code pattern. Retrying...
2025-05-25 11:20:46,810 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:47,403 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:47,415 - src.core.code_generator - WARNING - Attempt 7: Failed to extract code pattern. Retrying...
2025-05-25 11:20:47,416 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:47,897 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:47,908 - src.core.code_generator - WARNING - Attempt 9: Failed to extract code pattern. Retrying...
2025-05-25 11:20:47,909 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:48,515 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:48,528 - src.core.code_generator - WARNING - Attempt 8: Failed to extract code pattern. Retrying...
2025-05-25 11:20:48,530 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:49,042 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:49,068 - src.core.code_generator - ERROR - Error fixing code for Inverse function scene 1: Failed to extract code pattern after 10 attempts. Pattern: ```python(.*)```
2025-05-25 11:20:49,698 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:49,711 - src.core.code_generator - WARNING - Attempt 9: Failed to extract code pattern. Retrying...
2025-05-25 11:20:49,713 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:50,831 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:50,837 - src.core.code_generator - ERROR - Error generating Manim code for Inverse function scene 3: Failed to extract code pattern after 10 attempts. Pattern: ```python(.*)```
2025-05-25 11:20:50,849 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:52,003 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:52,011 - src.core.code_generator - ERROR - JSONDecodeError when parsing RAG queries for code generation: Expecting value: line 1 column 1 (char 0)
2025-05-25 11:20:52,012 - src.core.code_generator - ERROR - Response text was: Error: litellm.APIError: APIError: OpenrouterException - {"error":{"message":"This request requires more credits, or fewer max_tokens. You requested up to 113660 tokens, but can only afford 22475. To increase, visit https://openrouter.ai/settings/credits and upgrade to a paid account","code":402,"metadata":{"provider_name":null}},"user_id":"user_2xQBPeHSKPg6iSDzL846iuIkBKZ"}...
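The 402 payload above says the request asked for up to 113660 tokens while only 22475 were affordable on the account. A minimal sketch of capping max_tokens on the LiteLLM/OpenRouter call to stay inside that budget follows; the cap is an assumption about one possible mitigation, not something this application is known to do.

```python
import litellm  # API key is assumed to come from the OPENROUTER_API_KEY env var

# Budget taken from the error message above; treating it as a hard cap is an
# assumption, not observed behaviour of the app.
AFFORDABLE_TOKENS = 22475

def safe_completion(messages: list[dict], requested_max_tokens: int):
    """Call deepseek-chat via OpenRouter with max_tokens clamped to the budget."""
    return litellm.completion(
        model="openrouter/deepseek/deepseek-chat",
        messages=messages,
        max_tokens=min(requested_max_tokens, AFFORDABLE_TOKENS),
    )
```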
2025-05-25 11:20:52,013 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:53,208 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:53,215 - src.core.code_generator - WARNING - Attempt 1: Failed to extract code pattern. Retrying...
2025-05-25 11:20:53,217 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:54,426 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:54,436 - src.core.code_generator - WARNING - Attempt 2: Failed to extract code pattern. Retrying...
2025-05-25 11:20:54,437 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:55,607 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:55,615 - src.core.code_generator - WARNING - Attempt 3: Failed to extract code pattern. Retrying...
2025-05-25 11:20:55,616 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:56,771 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:56,782 - src.core.code_generator - WARNING - Attempt 4: Failed to extract code pattern. Retrying...
2025-05-25 11:20:56,783 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:57,914 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:57,923 - src.core.code_generator - WARNING - Attempt 5: Failed to extract code pattern. Retrying...
2025-05-25 11:20:57,924 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:20:59,078 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:20:59,084 - src.core.code_generator - WARNING - Attempt 6: Failed to extract code pattern. Retrying...
2025-05-25 11:20:59,086 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:00,253 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:00,284 - src.core.code_generator - WARNING - Attempt 7: Failed to extract code pattern. Retrying...
2025-05-25 11:21:00,285 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:01,449 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:01,460 - src.core.code_generator - WARNING - Attempt 8: Failed to extract code pattern. Retrying...
2025-05-25 11:21:01,462 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:02,604 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:02,629 - src.core.code_generator - WARNING - Attempt 9: Failed to extract code pattern. Retrying...
2025-05-25 11:21:02,631 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:03,796 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:03,802 - src.core.code_generator - ERROR - Error generating Manim code for Inverse function scene 4: Failed to extract code pattern after 10 attempts. Pattern: ```python(.*)```
2025-05-25 11:21:03,813 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:04,978 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:04,986 - src.core.code_generator - ERROR - JSONDecodeError when parsing RAG queries for code generation: Expecting value: line 1 column 1 (char 0)
2025-05-25 11:21:04,986 - src.core.code_generator - ERROR - Response text was: Error: litellm.APIError: APIError: OpenrouterException - {"error":{"message":"This request requires more credits, or fewer max_tokens. You requested up to 113326 tokens, but can only afford 22475. To increase, visit https://openrouter.ai/settings/credits and upgrade to a paid account","code":402,"metadata":{"provider_name":null}},"user_id":"user_2xQBPeHSKPg6iSDzL846iuIkBKZ"}...
2025-05-25 11:21:04,988 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:06,208 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:06,217 - src.core.code_generator - WARNING - Attempt 1: Failed to extract code pattern. Retrying...
2025-05-25 11:21:06,218 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:07,375 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:07,386 - src.core.code_generator - WARNING - Attempt 2: Failed to extract code pattern. Retrying...
2025-05-25 11:21:07,388 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:08,581 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:08,595 - src.core.code_generator - WARNING - Attempt 3: Failed to extract code pattern. Retrying...
2025-05-25 11:21:08,596 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:09,771 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:09,782 - src.core.code_generator - WARNING - Attempt 4: Failed to extract code pattern. Retrying...
2025-05-25 11:21:09,784 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:08,959 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:08,965 - src.core.code_generator - WARNING - Attempt 5: Failed to extract code pattern. Retrying...
2025-05-25 11:21:08,971 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:10,118 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:10,147 - src.core.code_generator - WARNING - Attempt 6: Failed to extract code pattern. Retrying...
2025-05-25 11:21:10,150 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:11,339 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:11,352 - src.core.code_generator - WARNING - Attempt 7: Failed to extract code pattern. Retrying...
2025-05-25 11:21:11,354 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:12,526 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:12,538 - src.core.code_generator - WARNING - Attempt 8: Failed to extract code pattern. Retrying...
2025-05-25 11:21:12,539 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:13,625 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:13,645 - src.core.code_generator - WARNING - Attempt 9: Failed to extract code pattern. Retrying...
2025-05-25 11:21:13,647 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:14,834 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:14,845 - src.core.code_generator - ERROR - Error generating Manim code for Inverse function scene 5: Failed to extract code pattern after 10 attempts. Pattern: ```python(.*)```
2025-05-25 11:21:14,879 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:16,052 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:16,059 - src.core.code_generator - ERROR - JSONDecodeError when parsing RAG queries for code generation: Expecting value: line 1 column 1 (char 0)
2025-05-25 11:21:16,059 - src.core.code_generator - ERROR - Response text was: Error: litellm.APIError: APIError: OpenrouterException - {"error":{"message":"This request requires more credits, or fewer max_tokens. You requested up to 112842 tokens, but can only afford 22475. To increase, visit https://openrouter.ai/settings/credits and upgrade to a paid account","code":402,"metadata":{"provider_name":null}},"user_id":"user_2xQBPeHSKPg6iSDzL846iuIkBKZ"}...
2025-05-25 11:21:16,061 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:17,274 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:17,281 - src.core.code_generator - WARNING - Attempt 1: Failed to extract code pattern. Retrying...
2025-05-25 11:21:17,282 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:18,413 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:18,422 - src.core.code_generator - WARNING - Attempt 2: Failed to extract code pattern. Retrying...
2025-05-25 11:21:18,423 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:19,554 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:19,561 - src.core.code_generator - WARNING - Attempt 3: Failed to extract code pattern. Retrying...
2025-05-25 11:21:19,565 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:20,665 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:20,675 - src.core.code_generator - WARNING - Attempt 4: Failed to extract code pattern. Retrying...
2025-05-25 11:21:20,677 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:21,835 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:21,853 - src.core.code_generator - WARNING - Attempt 5: Failed to extract code pattern. Retrying...
2025-05-25 11:21:21,855 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:22,964 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:22,974 - src.core.code_generator - WARNING - Attempt 6: Failed to extract code pattern. Retrying...
2025-05-25 11:21:22,976 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:24,210 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:24,237 - src.core.code_generator - WARNING - Attempt 7: Failed to extract code pattern. Retrying...
2025-05-25 11:21:24,239 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:25,458 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:25,474 - src.core.code_generator - WARNING - Attempt 8: Failed to extract code pattern. Retrying...
2025-05-25 11:21:25,475 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:26,575 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:26,583 - src.core.code_generator - WARNING - Attempt 9: Failed to extract code pattern. Retrying...
2025-05-25 11:21:26,584 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:27,735 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:27,742 - src.core.code_generator - ERROR - Error generating Manim code for Inverse function scene 6: Failed to extract code pattern after 10 attempts. Pattern: ```python(.*)```
2025-05-25 11:21:27,753 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:28,916 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:28,923 - src.core.code_generator - ERROR - JSONDecodeError when parsing RAG queries for code generation: Expecting value: line 1 column 1 (char 0)
2025-05-25 11:21:28,923 - src.core.code_generator - ERROR - Response text was: Error: litellm.APIError: APIError: OpenrouterException - {"error":{"message":"This request requires more credits, or fewer max_tokens. You requested up to 112874 tokens, but can only afford 22475. To increase, visit https://openrouter.ai/settings/credits and upgrade to a paid account","code":402,"metadata":{"provider_name":null}},"user_id":"user_2xQBPeHSKPg6iSDzL846iuIkBKZ"}...
2025-05-25 11:21:28,924 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:30,183 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:30,194 - src.core.code_generator - WARNING - Attempt 1: Failed to extract code pattern. Retrying...
2025-05-25 11:21:30,199 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:31,349 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:31,355 - src.core.code_generator - WARNING - Attempt 2: Failed to extract code pattern. Retrying...
2025-05-25 11:21:31,357 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:32,512 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:32,524 - src.core.code_generator - WARNING - Attempt 3: Failed to extract code pattern. Retrying...
2025-05-25 11:21:32,526 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:33,656 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:33,669 - src.core.code_generator - WARNING - Attempt 4: Failed to extract code pattern. Retrying...
2025-05-25 11:21:33,670 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:35,268 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:35,281 - src.core.code_generator - WARNING - Attempt 5: Failed to extract code pattern. Retrying...
2025-05-25 11:21:35,282 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:36,446 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:36,461 - src.core.code_generator - WARNING - Attempt 6: Failed to extract code pattern. Retrying...
2025-05-25 11:21:36,462 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:37,606 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:37,613 - src.core.code_generator - WARNING - Attempt 7: Failed to extract code pattern. Retrying...
2025-05-25 11:21:37,614 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:38,894 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:38,905 - src.core.code_generator - WARNING - Attempt 8: Failed to extract code pattern. Retrying...
2025-05-25 11:21:38,906 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:40,038 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:40,045 - src.core.code_generator - WARNING - Attempt 9: Failed to extract code pattern. Retrying...
2025-05-25 11:21:40,046 - LiteLLM - INFO -
LiteLLM completion() model= deepseek/deepseek-chat; provider = openrouter
2025-05-25 11:21:38,753 - httpx - INFO - HTTP Request: POST https://openrouter.ai/api/v1/chat/completions "HTTP/1.1 402 Payment Required"
2025-05-25 11:21:38,762 - src.core.code_generator - ERROR - Error generating Manim code for Inverse function scene 7: Failed to extract code pattern after 10 attempts. Pattern: ```python(.*)```
2025-05-25 11:21:38,783 - __main__ - INFO - Video generation pipeline completed for job b184cf20-7f7d-4518-8b87-bb4551854d94
2025-05-25 11:21:38,895 - __main__ - ERROR - No video output file found for job b184cf20-7f7d-4518-8b87-bb4551854d94
2025-05-25 11:21:38,896 - __main__ - ERROR - Error in job b184cf20-7f7d-4518-8b87-bb4551854d94: No video output was generated. Check Manim execution logs.
Traceback (most recent call last):
File "/mnt/d/Theory2Manim-2/Theory2Manim/gradio_app.py", line 242, in process_video_generation
raise Exception("No video output was generated. Check Manim execution logs.")
Exception: No video output was generated. Check Manim execution logs.
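Context for the traceback: every scene generation above failed, so the pipeline "completes" without ever rendering a video, and the post-pipeline check in gradio_app.py raises. The sketch below shows what such a check plausibly looks like; the helper name, directory layout, and .mp4 glob are assumptions, only the exception message is verbatim from the traceback.

```python
from pathlib import Path

def check_video_output(output_dir: str, job_id: str) -> Path:
    """Hypothetical post-pipeline check mirroring the error above."""
    job_dir = Path(output_dir) / job_id
    videos = sorted(job_dir.rglob("*.mp4"))  # any rendered Manim output for this job
    if not videos:
        # Matches gradio_app.py line 242 in the traceback.
        raise Exception("No video output was generated. Check Manim execution logs.")
    return videos[-1]
```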
2025-05-25 12:23:22,292 - httpx - INFO - HTTP Request: GET http://127.0.0.1:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-25 12:23:22,320 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-25 12:23:22,320 - httpx - INFO - HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"
2025-05-25 12:25:59,347 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:07,126 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 12:26:07,348 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:11,914 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 12:26:11,939 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:15,618 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 12:26:15,642 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:18,875 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 12:26:18,900 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:20,791 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 12:26:20,815 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:24,062 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2025-05-25 12:26:24,087 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:27,590 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:31,031 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:34,374 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:39,165 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:42,401 - sentence_transformers.SentenceTransformer - INFO - Load pretrained SentenceTransformer: ibm-granite/granite-embedding-30m-english
2025-05-25 12:26:45,695 - src.core.code_generator - INFO - CodeGenerator initialized with RAG: True, Context Learning: False
2025-05-25 12:26:45,698 - __main__ - INFO - Starting job 0860a7ac-fcb0-404a-a8a9-b5d00fbf19fe for topic: Inverse function
2025-05-25 12:26:45,698 - __main__ - INFO - Running generate_video_pipeline for topic: Inverse function
2025-05-25 12:26:45,698 - __main__ - INFO - Starting video generation pipeline for job 0860a7ac-fcb0-404a-a8a9-b5d00fbf19fe
2025-05-25 12:26:45,698 - __main__ - INFO - Job 0860a7ac-fcb0-404a-a8a9-b5d00fbf19fe progress: 15% - Starting video generation pipeline...
2025-05-25 12:26:45,720 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:26:50,040 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 12:26:50,041 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 12:26:50,165 - __main__ - INFO - Job 0860a7ac-fcb0-404a-a8a9-b5d00fbf19fe progress: 15% - Creating scene outline...
2025-05-25 12:26:50,173 - src.core.code_generator - INFO - Using cached RAG queries for Inverse function_scene1
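After the restart, this run is on a working Gemini endpoint and the generator reuses RAG queries cached under a "<topic>_scene<N>" key, as the message above shows, instead of paying for another query-generation call. A minimal sketch of that cache, assuming a JSON-file layout; the directory, file naming, and function names are assumptions, only the key format comes from the log.

```python
import json
from pathlib import Path

CACHE_DIR = Path("output/rag_cache")  # location is an assumption, not from the log

def get_rag_queries(topic: str, scene_number: int, generate_queries) -> list:
    """Return cached RAG queries for '<topic>_scene<N>', generating them only on a miss."""
    key = f"{topic}_scene{scene_number}"           # e.g. "Inverse function_scene1"
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists():
        # Corresponds to "Using cached RAG queries for Inverse function_scene1"
        return json.loads(cache_file.read_text())
    queries = generate_queries(topic, scene_number)  # would call the LLM
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cache_file.write_text(json.dumps(queries))
    return queries
```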
2025-05-25 12:26:50,567 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:27:16,169 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 12:27:16,177 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 12:27:16,180 - src.core.code_generator - INFO - Successfully generated code for Inverse function scene 1
2025-05-25 12:27:16,192 - src.core.code_generator - INFO - Using cached RAG queries for Inverse function_scene2
2025-05-25 12:27:16,553 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:27:19,696 - src.core.code_generator - INFO - Using cached RAG error fix queries for Inverse function_scene1
2025-05-25 12:27:19,894 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:27:38,434 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 12:27:38,445 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 12:27:38,446 - src.core.code_generator - INFO - Successfully fixed code errors for Inverse function scene 1
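The "Successfully fixed code errors" lines indicate a second pass in which the failing Manim code and the renderer's error are sent back to the model and the corrected fenced block is extracted with the same pattern as the generation step. The sketch below is a hypothetical reconstruction of that pass; the prompt wording and function signature are assumptions.

```python
import re

CODE_PATTERN = re.compile(r"```python(.*)```", re.DOTALL)  # same pattern as the generation step

def fix_code_errors(call_llm, scene_code: str, error_message: str, topic: str, scene_number: int) -> str:
    """Hypothetical error-fix pass suggested by the 'Successfully fixed code errors' lines."""
    prompt = (
        f"The following Manim code for '{topic}' scene {scene_number} failed to render.\n"
        f"```python\n{scene_code}\n```\n"
        f"Error:\n{error_message}\n"
        "Return the corrected code in a single ```python ...``` block."
    )
    match = CODE_PATTERN.search(call_llm(prompt))
    if match is None:
        raise RuntimeError(f"Error fixing code for {topic} scene {scene_number}")
    return match.group(1).strip()
```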
2025-05-25 12:28:16,987 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 12:28:16,990 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 12:28:16,991 - src.core.code_generator - INFO - Successfully generated code for Inverse function scene 2
2025-05-25 12:28:17,001 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:28:18,715 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:28:28,026 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 12:28:28,029 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 12:28:28,219 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:28:29,237 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 12:28:29,239 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 12:28:29,545 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:28:59,315 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 12:28:59,323 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 12:28:59,327 - src.core.code_generator - INFO - Successfully fixed code errors for Inverse function scene 2
2025-05-25 12:29:04,650 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent?key=AIzaSyBUCGQ_hDLAHQN-T1ycWBJV8SGfwusfEjg "HTTP/1.1 200 OK"
2025-05-25 12:29:04,652 - LiteLLM - INFO - Wrapper: Completed Call, calling success_handler
2025-05-25 12:29:04,653 - src.core.code_generator - INFO - Successfully generated code for Inverse function scene 3
2025-05-25 12:29:04,661 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:29:06,729 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:29:09,233 - src.core.code_generator - INFO - Using cached RAG error fix queries for Inverse function_scene2
2025-05-25 12:29:09,433 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:29:11,619 - src.core.code_generator - INFO - Using cached RAG error fix queries for Inverse function_scene1
2025-05-25 12:29:11,844 - LiteLLM - INFO -
LiteLLM completion() model= gemini-2.5-flash-preview-04-17; provider = gemini
2025-05-25 12:29:33,994 - httpx - INFO - HTTP Request: GET http://127.0.0.1:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-25 12:29:33,997 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-25 12:29:34,022 - httpx - INFO - HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"