runtime error
Exit code: 1. Reason: 178MB/s]
Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 3/3 [00:00<00:00,  4.93it/s]
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...:  43%|████▎     | 3/7 [00:01<00:01,  2.93it/s]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00,  6.75it/s]
Loading pipeline components...: 100%|██████████| 7/7 [00:01<00:00,  3.98it/s]
[info] probing https://huggingfaceh4-zephyr-chat.hf.space
Loaded as API: https://huggingfaceh4-zephyr-chat.hf.space/ ✔
[warn] https://huggingfaceh4-zephyr-chat.hf.space unusable – Could not fetch config for https://huggingfaceh4-zephyr-chat.hf.space/
[info] probing meta-llama/Llama-3.3-70B-Instruct
[warn] meta-llama/Llama-3.3-70B-Instruct unusable – 404 Client Error. (Request ID: Root=1-6896ee6d-64e90d7a397683387110fe73;bff57c7b-3937-4469-a434-821d35e6f3a1) Repository Not Found for url: https://huggingface.co/api/spaces/meta-llama/Llama-3.3-70B-Instruct. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
[info] probing huggingface-projects/gemma-2-9b-it
Loaded as API: https://huggingface-projects-gemma-2-9b-it.hf.space ✔
[warn] huggingface-projects/gemma-2-9b-it unusable – You have exceeded your free GPU quota (90s requested vs. 90s left). Try again in 22:17:19
Traceback (most recent call last):
  File "/home/user/app/app.py", line 42, in <module>
    llm_client = first_live_space(LLM_SPACES)
  File "/home/user/app/app.py", line 40, in first_live_space
    raise RuntimeError("No live chat Space found!")
RuntimeError: No live chat Space found!
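The `[info] probing` / `[warn] ... unusable` pattern and the traceback suggest the app walks a list of candidate Spaces and raises once every one fails: the Zephyr Space's config could not be fetched, the Llama repo id is not a Space (404), and the Gemma Space hit the free GPU quota. A minimal sketch of such a fallback loop, assuming the names `first_live_space` and `LLM_SPACES` from the traceback; the `probe` callable is a hypothetical stand-in for the real client factory (e.g. `gradio_client.Client`), which is expected to raise on an unusable Space:

```python
def first_live_space(spaces, probe):
    """Return a client for the first space that `probe` can reach.

    `probe` is a hypothetical stand-in for the real client factory;
    it should raise an exception when a space is unusable.
    """
    for space in spaces:
        print(f"[info] probing {space}")
        try:
            return probe(space)
        except Exception as exc:  # 404, quota exhausted, bad config, ...
            print(f"[warn] {space} unusable – {exc}")
    # Mirrors app.py line 40: every candidate failed.
    raise RuntimeError("No live chat Space found!")
```

Under this reading, all three `[warn]` lines in the log are the `except` branch firing, so the final `RuntimeError` simply means no candidate in `LLM_SPACES` survived probing; the fix is to add a reachable, non-quota-limited Space (or an authenticated endpoint) to the list.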