chansung committed on
Commit 8d5dd22 · verified · 1 Parent(s): aae35f1

Upload folder using huggingface_hub

Files changed (2)
  1. auto_diffusers.log +197 -0
  2. gradio_app.py +16 -12
auto_diffusers.log CHANGED
@@ -1332,3 +1332,200 @@ IMPORTANT GUIDELINES:
1332
  2025-05-29 17:00:34,005 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
1333
  2025-05-29 17:00:34,005 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
1334
  2025-05-29 17:00:34,005 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
1335
+ 2025-05-29 17:03:33,448 - __main__ - INFO - Initializing GradioAutodiffusers
1336
+ 2025-05-29 17:03:33,448 - __main__ - DEBUG - API key found, length: 39
1337
+ 2025-05-29 17:03:33,448 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
1338
+ 2025-05-29 17:03:33,448 - auto_diffusers - DEBUG - API key length: 39
1339
+ 2025-05-29 17:03:33,448 - auto_diffusers - DEBUG - Creating tools for Gemini
1340
+ 2025-05-29 17:03:33,448 - auto_diffusers - INFO - Created 3 tools for Gemini
1341
+ 2025-05-29 17:03:33,448 - auto_diffusers - INFO - Successfully configured Gemini AI model with tools
1342
+ 2025-05-29 17:03:33,448 - hardware_detector - INFO - Initializing HardwareDetector
1343
+ 2025-05-29 17:03:33,448 - hardware_detector - DEBUG - Starting system hardware detection
1344
+ 2025-05-29 17:03:33,448 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
1345
+ 2025-05-29 17:03:33,448 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.12.9
1346
+ 2025-05-29 17:03:33,448 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
1347
+ 2025-05-29 17:03:33,452 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
1348
+ 2025-05-29 17:03:33,452 - hardware_detector - DEBUG - Checking PyTorch availability
1349
+ 2025-05-29 17:03:33,924 - hardware_detector - INFO - PyTorch 2.7.0 detected
1350
+ 2025-05-29 17:03:33,924 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
1351
+ 2025-05-29 17:03:33,924 - hardware_detector - INFO - Hardware detection completed successfully
1352
+ 2025-05-29 17:03:33,924 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.12.9', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
1353
+ 2025-05-29 17:03:33,924 - auto_diffusers - INFO - Hardware detector initialized successfully
1354
+ 2025-05-29 17:03:33,924 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
1355
+ 2025-05-29 17:03:33,925 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
1356
+ 2025-05-29 17:03:33,925 - simple_memory_calculator - DEBUG - HuggingFace API initialized
1357
+ 2025-05-29 17:03:33,925 - simple_memory_calculator - DEBUG - Known models in database: 4
1358
+ 2025-05-29 17:03:33,925 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
1359
+ 2025-05-29 17:03:33,925 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
1360
+ 2025-05-29 17:03:33,927 - asyncio - DEBUG - Using selector: KqueueSelector
1361
+ 2025-05-29 17:03:33,940 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
1362
+ 2025-05-29 17:03:33,947 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
1363
+ 2025-05-29 17:03:33,995 - asyncio - DEBUG - Using selector: KqueueSelector
1364
+ 2025-05-29 17:03:34,042 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None
1365
+ 2025-05-29 17:03:34,042 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x13730a8d0>
1366
+ 2025-05-29 17:03:34,042 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'GET']>
1367
+ 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - send_request_headers.complete
1368
+ 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'GET']>
1369
+ 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - send_request_body.complete
1370
+ 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'GET']>
1371
+ 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:03:33 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
1372
+ 2025-05-29 17:03:34,043 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
1373
+ 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'GET']>
1374
+ 2025-05-29 17:03:34,044 - httpcore.http11 - DEBUG - receive_response_body.complete
1375
+ 2025-05-29 17:03:34,044 - httpcore.http11 - DEBUG - response_closed.started
1376
+ 2025-05-29 17:03:34,044 - httpcore.http11 - DEBUG - response_closed.complete
1377
+ 2025-05-29 17:03:34,044 - httpcore.connection - DEBUG - close.started
1378
+ 2025-05-29 17:03:34,044 - httpcore.connection - DEBUG - close.complete
1379
+ 2025-05-29 17:03:34,044 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None
1380
+ 2025-05-29 17:03:34,044 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x13730aed0>
1381
+ 2025-05-29 17:03:34,044 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'HEAD']>
1382
+ 2025-05-29 17:03:34,045 - httpcore.http11 - DEBUG - send_request_headers.complete
1383
+ 2025-05-29 17:03:34,045 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'HEAD']>
1384
+ 2025-05-29 17:03:34,045 - httpcore.http11 - DEBUG - send_request_body.complete
1385
+ 2025-05-29 17:03:34,045 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'HEAD']>
1386
+ 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:03:33 GMT'), (b'server', b'uvicorn'), (b'content-length', b'74295'), (b'content-type', b'text/html; charset=utf-8')])
1387
+ 2025-05-29 17:03:34,051 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
1388
+ 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'HEAD']>
1389
+ 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - receive_response_body.complete
1390
+ 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - response_closed.started
1391
+ 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - response_closed.complete
1392
+ 2025-05-29 17:03:34,051 - httpcore.connection - DEBUG - close.started
1393
+ 2025-05-29 17:03:34,051 - httpcore.connection - DEBUG - close.complete
1394
+ 2025-05-29 17:03:34,063 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
1395
+ 2025-05-29 17:03:34,200 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
1396
+ 2025-05-29 17:03:34,469 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x1313d2030>
1397
+ 2025-05-29 17:03:34,470 - httpcore.connection - DEBUG - start_tls.started ssl_context=<ssl.SSLContext object at 0x1373a57d0> server_hostname='api.gradio.app' timeout=30
1398
+ 2025-05-29 17:03:34,476 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x1345117f0>
1399
+ 2025-05-29 17:03:34,476 - httpcore.connection - DEBUG - start_tls.started ssl_context=<ssl.SSLContext object at 0x131d87450> server_hostname='api.gradio.app' timeout=3
1400
+ 2025-05-29 17:03:34,760 - httpcore.connection - DEBUG - start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x131424d70>
1401
+ 2025-05-29 17:03:34,761 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'GET']>
1402
+ 2025-05-29 17:03:34,761 - httpcore.http11 - DEBUG - send_request_headers.complete
1403
+ 2025-05-29 17:03:34,761 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'GET']>
1404
+ 2025-05-29 17:03:34,761 - httpcore.http11 - DEBUG - send_request_body.complete
1405
+ 2025-05-29 17:03:34,762 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'GET']>
1406
+ 2025-05-29 17:03:34,771 - httpcore.connection - DEBUG - start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x13191f500>
1407
+ 2025-05-29 17:03:34,771 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'GET']>
1408
+ 2025-05-29 17:03:34,772 - httpcore.http11 - DEBUG - send_request_headers.complete
1409
+ 2025-05-29 17:03:34,772 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'GET']>
1410
+ 2025-05-29 17:03:34,772 - httpcore.http11 - DEBUG - send_request_body.complete
1411
+ 2025-05-29 17:03:34,772 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'GET']>
1412
+ 2025-05-29 17:03:34,907 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:03:34 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
1413
+ 2025-05-29 17:03:34,907 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
1414
+ 2025-05-29 17:03:34,907 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'GET']>
1415
+ 2025-05-29 17:03:34,908 - httpcore.http11 - DEBUG - receive_response_body.complete
1416
+ 2025-05-29 17:03:34,908 - httpcore.http11 - DEBUG - response_closed.started
1417
+ 2025-05-29 17:03:34,908 - httpcore.http11 - DEBUG - response_closed.complete
1418
+ 2025-05-29 17:03:34,908 - httpcore.connection - DEBUG - close.started
1419
+ 2025-05-29 17:03:34,908 - httpcore.connection - DEBUG - close.complete
1420
+ 2025-05-29 17:03:34,919 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:03:34 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
1421
+ 2025-05-29 17:03:34,919 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
1422
+ 2025-05-29 17:03:34,919 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'GET']>
1423
+ 2025-05-29 17:03:34,919 - httpcore.http11 - DEBUG - receive_response_body.complete
1424
+ 2025-05-29 17:03:34,920 - httpcore.http11 - DEBUG - response_closed.started
1425
+ 2025-05-29 17:03:34,920 - httpcore.http11 - DEBUG - response_closed.complete
1426
+ 2025-05-29 17:03:34,920 - httpcore.connection - DEBUG - close.started
1427
+ 2025-05-29 17:03:34,920 - httpcore.connection - DEBUG - close.complete
1428
+ 2025-05-29 17:03:35,503 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
1429
+ 2025-05-29 17:03:35,733 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
1430
+ 2025-05-29 17:05:44,828 - __main__ - INFO - Initializing GradioAutodiffusers
1431
+ 2025-05-29 17:05:44,828 - __main__ - DEBUG - API key found, length: 39
1432
+ 2025-05-29 17:05:44,828 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
1433
+ 2025-05-29 17:05:44,828 - auto_diffusers - DEBUG - API key length: 39
1434
+ 2025-05-29 17:05:44,828 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
1435
+ 2025-05-29 17:05:44,828 - hardware_detector - INFO - Initializing HardwareDetector
1436
+ 2025-05-29 17:05:44,828 - hardware_detector - DEBUG - Starting system hardware detection
1437
+ 2025-05-29 17:05:44,828 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
1438
+ 2025-05-29 17:05:44,828 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
1439
+ 2025-05-29 17:05:44,828 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
1440
+ 2025-05-29 17:05:44,831 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
1441
+ 2025-05-29 17:05:44,832 - hardware_detector - DEBUG - Checking PyTorch availability
1442
+ 2025-05-29 17:05:45,252 - hardware_detector - INFO - PyTorch 2.7.0 detected
1443
+ 2025-05-29 17:05:45,252 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
1444
+ 2025-05-29 17:05:45,252 - hardware_detector - INFO - Hardware detection completed successfully
1445
+ 2025-05-29 17:05:45,252 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
1446
+ 2025-05-29 17:05:45,252 - auto_diffusers - INFO - Hardware detector initialized successfully
1447
+ 2025-05-29 17:05:45,252 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
1448
+ 2025-05-29 17:05:45,252 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
1449
+ 2025-05-29 17:05:45,252 - simple_memory_calculator - DEBUG - HuggingFace API initialized
1450
+ 2025-05-29 17:05:45,252 - simple_memory_calculator - DEBUG - Known models in database: 4
1451
+ 2025-05-29 17:05:45,252 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
1452
+ 2025-05-29 17:05:45,252 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
1453
+ 2025-05-29 17:05:45,254 - asyncio - DEBUG - Using selector: KqueueSelector
1454
+ 2025-05-29 17:05:45,267 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
1455
+ 2025-05-29 17:05:45,272 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
1456
+ 2025-05-29 17:05:45,344 - asyncio - DEBUG - Using selector: KqueueSelector
1457
+ 2025-05-29 17:05:45,377 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None
1458
+ 2025-05-29 17:05:45,377 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x12a758190>
1459
+ 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'GET']>
1460
+ 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - send_request_headers.complete
1461
+ 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'GET']>
1462
+ 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - send_request_body.complete
1463
+ 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'GET']>
1464
+ 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:05:45 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
1465
+ 2025-05-29 17:05:45,379 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
1466
+ 2025-05-29 17:05:45,379 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'GET']>
1467
+ 2025-05-29 17:05:45,379 - httpcore.http11 - DEBUG - receive_response_body.complete
1468
+ 2025-05-29 17:05:45,379 - httpcore.http11 - DEBUG - response_closed.started
1469
+ 2025-05-29 17:05:45,379 - httpcore.http11 - DEBUG - response_closed.complete
1470
+ 2025-05-29 17:05:45,379 - httpcore.connection - DEBUG - close.started
1471
+ 2025-05-29 17:05:45,379 - httpcore.connection - DEBUG - close.complete
1472
+ 2025-05-29 17:05:45,379 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None
1473
+ 2025-05-29 17:05:45,379 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x12d3cd450>
1474
+ 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'HEAD']>
1475
+ 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - send_request_headers.complete
1476
+ 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'HEAD']>
1477
+ 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - send_request_body.complete
1478
+ 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'HEAD']>
1479
+ 2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:05:45 GMT'), (b'server', b'uvicorn'), (b'content-length', b'75706'), (b'content-type', b'text/html; charset=utf-8')])
1480
+ 2025-05-29 17:05:45,385 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
1481
+ 2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'HEAD']>
1482
+ 2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - receive_response_body.complete
1483
+ 2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - response_closed.started
1484
+ 2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - response_closed.complete
1485
+ 2025-05-29 17:05:45,385 - httpcore.connection - DEBUG - close.started
1486
+ 2025-05-29 17:05:45,385 - httpcore.connection - DEBUG - close.complete
1487
+ 2025-05-29 17:05:45,396 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
1488
+ 2025-05-29 17:05:45,466 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x129322610>
1489
+ 2025-05-29 17:05:45,466 - httpcore.connection - DEBUG - start_tls.started ssl_context=<ssl.SSLContext object at 0x11872cdd0> server_hostname='api.gradio.app' timeout=3
1490
+ 2025-05-29 17:05:45,538 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x12d39b810>
1491
+ 2025-05-29 17:05:45,538 - httpcore.connection - DEBUG - start_tls.started ssl_context=<ssl.SSLContext object at 0x12a7cf920> server_hostname='api.gradio.app' timeout=30
1492
+ 2025-05-29 17:05:45,548 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
1493
+ 2025-05-29 17:05:45,746 - httpcore.connection - DEBUG - start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x1289a17d0>
1494
+ 2025-05-29 17:05:45,746 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'GET']>
1495
+ 2025-05-29 17:05:45,747 - httpcore.http11 - DEBUG - send_request_headers.complete
1496
+ 2025-05-29 17:05:45,747 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'GET']>
1497
+ 2025-05-29 17:05:45,747 - httpcore.http11 - DEBUG - send_request_body.complete
1498
+ 2025-05-29 17:05:45,747 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'GET']>
1499
+ 2025-05-29 17:05:45,821 - httpcore.connection - DEBUG - start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x129321950>
1500
+ 2025-05-29 17:05:45,821 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'GET']>
1501
+ 2025-05-29 17:05:45,822 - httpcore.http11 - DEBUG - send_request_headers.complete
1502
+ 2025-05-29 17:05:45,822 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'GET']>
1503
+ 2025-05-29 17:05:45,822 - httpcore.http11 - DEBUG - send_request_body.complete
1504
+ 2025-05-29 17:05:45,822 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'GET']>
1505
+ 2025-05-29 17:05:45,885 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:05:45 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
1506
+ 2025-05-29 17:05:45,886 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
1507
+ 2025-05-29 17:05:45,886 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'GET']>
1508
+ 2025-05-29 17:05:45,887 - httpcore.http11 - DEBUG - receive_response_body.complete
1509
+ 2025-05-29 17:05:45,887 - httpcore.http11 - DEBUG - response_closed.started
1510
+ 2025-05-29 17:05:45,887 - httpcore.http11 - DEBUG - response_closed.complete
1511
+ 2025-05-29 17:05:45,887 - httpcore.connection - DEBUG - close.started
1512
+ 2025-05-29 17:05:45,888 - httpcore.connection - DEBUG - close.complete
1513
+ 2025-05-29 17:05:45,965 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:05:45 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
1514
+ 2025-05-29 17:05:45,965 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
1515
+ 2025-05-29 17:05:45,966 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'GET']>
1516
+ 2025-05-29 17:05:45,967 - httpcore.http11 - DEBUG - receive_response_body.complete
1517
+ 2025-05-29 17:05:45,967 - httpcore.http11 - DEBUG - response_closed.started
1518
+ 2025-05-29 17:05:45,967 - httpcore.http11 - DEBUG - response_closed.complete
1519
+ 2025-05-29 17:05:45,968 - httpcore.connection - DEBUG - close.started
1520
+ 2025-05-29 17:05:45,968 - httpcore.connection - DEBUG - close.complete
1521
+ 2025-05-29 17:05:46,631 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
1522
+ 2025-05-29 17:05:46,857 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
1523
+ 2025-05-29 17:05:55,606 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
1524
+ 2025-05-29 17:05:55,606 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
1525
+ 2025-05-29 17:05:55,606 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
1526
+ 2025-05-29 17:05:55,606 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
1527
+ 2025-05-29 17:05:55,606 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
1528
+ 2025-05-29 17:05:55,607 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
1529
+ 2025-05-29 17:05:55,607 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
1530
+ 2025-05-29 17:05:55,607 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
1531
+ 2025-05-29 17:05:55,607 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
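For context on the simple_memory_calculator figures in this log (12.0B parameters, 24.0 GB of fp16 weights, 36.0 GB for inference, evaluated against an 8.0 GB VRAM budget), the arithmetic can be sketched as below. This is an illustrative sketch only: the function names (estimate_memory_gb, recommend), the 1.5x inference-overhead multiplier (inferred from the logged 24.0 GB -> 36.0 GB pair), and the recommendation strings are assumptions, not the repository's actual SimpleMemoryCalculator implementation.

# Minimal sketch of the memory math traced in the log above.
BYTES_PER_PARAM_FP16 = 2      # fp16 stores each parameter in 2 bytes
INFERENCE_OVERHEAD = 1.5      # assumed activation overhead on top of the weights

def estimate_memory_gb(params_billions: float) -> dict:
    """Rough fp16 memory footprint for a model of the given size (GB = 1e9 bytes)."""
    model_gb = params_billions * 1e9 * BYTES_PER_PARAM_FP16 / 1e9
    return {
        "params_billions": params_billions,
        "fp16_gb": model_gb,
        "inference_fp16_gb": model_gb * INFERENCE_OVERHEAD,
    }

def recommend(model_gb: float, inference_gb: float, vram_gb: float) -> str:
    """Pick a loading strategy from available VRAM (strategy names are illustrative)."""
    if vram_gb >= inference_gb:
        return "full fp16 pipeline on GPU"
    if vram_gb >= model_gb:
        return "fp16 weights on GPU with attention/VAE slicing"
    return "CPU offload or quantization (weights exceed available VRAM)"

# FLUX.1-schnell as logged: 12.0B params -> 24.0 GB weights, 36.0 GB inference
est = estimate_memory_gb(12.0)
print(est)
print(recommend(est["fp16_gb"], est["inference_fp16_gb"], vram_gb=8.0))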
gradio_app.py CHANGED
@@ -724,16 +724,20 @@ def create_gradio_interface():

  /* Adjust hero header for mobile */
  .hero-header {
- padding: 2rem 1rem !important;
- margin-bottom: 2rem !important;
+ padding: 1rem 0.5rem !important;
+ margin-bottom: 1rem !important;
  }

  .hero-header h1 {
- font-size: 2.5rem !important;
+ font-size: 1.8rem !important;
  }

  .hero-header h2 {
- font-size: 1.4rem !important;
+ font-size: 1rem !important;
+ }
+
+ .hero-header p {
+ font-size: 0.9rem !important;
  }

  /* Mobile-friendly glass panels */
@@ -793,18 +797,18 @@ def create_gradio_interface():
  with gr.Row():
  with gr.Column(scale=1):
  gr.HTML("""
- <div class="hero-header floating" style="text-align: center; padding: 3rem 2rem; margin-bottom: 3rem; position: relative;">
+ <div class="hero-header floating" style="text-align: center; padding: 1.5rem 1rem; margin-bottom: 1.5rem; position: relative;">
  <div style="position: relative; z-index: 2;">
- <h1 style="color: white; font-size: 3.5rem; margin: 0; font-weight: 800; text-shadow: 0 4px 8px rgba(0,0,0,0.3); letter-spacing: -0.02em; background: linear-gradient(135deg, #ffffff 0%, #f8fafc 50%, #e2e8f0 100%); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text;">
- ✨ Auto-Diffusers
+ <h1 style="color: white; font-size: 2.2rem; margin: 0; font-weight: 800; text-shadow: 0 4px 8px rgba(0,0,0,0.3); letter-spacing: -0.02em; background: linear-gradient(135deg, #ffffff 0%, #f8fafc 50%, #e2e8f0 100%); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text;">
+ ✨ Auto Diffusers Config
  </h1>
- <h2 style="color: rgba(255,255,255,0.95); font-size: 1.8rem; margin: 0.5rem 0 1rem 0; font-weight: 600; text-shadow: 0 2px 4px rgba(0,0,0,0.2);">
- Code Generator
+ <h2 style="color: rgba(255,255,255,0.95); font-size: 1.2rem; margin: 0.3rem 0 0.8rem 0; font-weight: 600; text-shadow: 0 2px 4px rgba(0,0,0,0.2);">
+ Hardware-Optimized Code Generator
  </h2>
- <p style="color: rgba(255,255,255,0.9); font-size: 1.2rem; margin: 0; font-weight: 400; text-shadow: 0 2px 4px rgba(0,0,0,0.2); max-width: 600px; margin: 0 auto; line-height: 1.6;">
- Generate stunning, optimized diffusers code tailored perfectly for your hardware using advanced AI
+ <p style="color: rgba(255,255,255,0.9); font-size: 1rem; margin: 0; font-weight: 400; text-shadow: 0 2px 4px rgba(0,0,0,0.2); max-width: 500px; margin: 0 auto; line-height: 1.5;">
+ Generate optimized diffusion model code for your hardware
  </p>
- <div style="margin-top: 2rem;">
+ <div style="margin-top: 1rem;">
  <span style="display: inline-block; background: rgba(255,255,255,0.2); padding: 0.5rem 1rem; border-radius: 20px; color: white; font-size: 0.9rem; backdrop-filter: blur(10px); border: 1px solid rgba(255,255,255,0.3);">
  🤖 Powered by Google Gemini 2.5
  </span>
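For reference, the hardware_detector entries in the startup log above follow a simple flow: read platform and CPU details, probe for an NVIDIA GPU via nvidia-smi, then ask PyTorch whether CUDA or Apple MPS is available. A minimal sketch of that flow, under the assumption that it mirrors the logged steps (it is not the repository's HardwareDetector class), looks like this:

# Illustrative sketch of the detection steps traced in the log:
# platform/CPU info, nvidia-smi probe, then PyTorch CUDA/MPS check.
import os
import platform
import shutil
import subprocess

def detect_hardware() -> dict:
    specs = {
        "platform": platform.system(),          # e.g. 'Darwin'
        "architecture": platform.machine(),     # e.g. 'arm64'
        "cpu_count": os.cpu_count(),
        "python_version": platform.python_version(),
        "gpu_info": None,
        "cuda_available": False,
        "mps_available": False,
        "torch_version": None,
    }

    # No nvidia-smi on PATH means no NVIDIA GPU is visible (as in the log).
    if shutil.which("nvidia-smi"):
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            specs["gpu_info"] = result.stdout.strip()

    # PyTorch reports CUDA and Apple MPS availability directly.
    try:
        import torch
        specs["torch_version"] = torch.__version__
        specs["cuda_available"] = torch.cuda.is_available()
        specs["mps_available"] = torch.backends.mps.is_available()
    except ImportError:
        pass

    return specs

print(detect_hardware())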