Not working

#1
by Yntec - opened

Hey @John6666 , I was wondering if you could implement inference providers on this space, as described here: https://discuss.huggingface.co/t/constant-503-error-for-several-days-when-running-llama-3-1/105144/5 ? Currently, requesting an image never returns one; if this space could be made to work, all my spaces could be made to work!

Hey. 😀
Adding an Inference Provider is easy (a rough sketch is below). I was planning to adapt the code to the situation, but I'm not sure what the situation is!
The problem is that even well-known models (Llama and Qwen, in LLM terms) are still not deployed after a large-scale outage, so even if I implement it, it doesn't work; I just get 404 or 503 errors.
This state of affairs makes no sense for HF, so I think it's temporary, but as usual, there's no announcement... 🥶
https://discuss.huggingface.co/t/inference-api-stopped-working/150492/30
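For reference, a minimal sketch of what calling an Inference Provider for text-to-image can look like with `huggingface_hub`'s `InferenceClient` (assuming a recent version that supports the `provider` argument; the provider name, token, and model below are placeholders, not necessarily what this space would use):

```python
from huggingface_hub import InferenceClient

# Placeholder provider, token, and model - swap in whatever the space actually needs.
client = InferenceClient(
    provider="hf-inference",  # or another provider, e.g. "fal-ai"
    api_key="hf_xxx",         # your HF token
)

# Returns a PIL.Image on success; undeployed models surface as 404/503 HTTP errors.
image = client.text_to_image(
    "a cat wearing a wizard hat",
    model="stabilityai/stable-diffusion-xl-base-1.0",
)
image.save("out.png")
```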

Thanks for the information! Heh, it was odd to see me mentioned as one of the "big names" over there... So I'm probably just getting a bit impatient, but I take "no announcement" as good news; bad news is the first to fly and get everywhere, and since we don't have any, it means there's no bad news! I had no idea it was an ongoing issue where the last post is from 7 hours ago... 40 hours after your reply here!

Who knows, it's possible we wake up tomorrow and everything works without having to change any code. It would be funny if someone set up a place that does what HF was doing and we all moved there en masse, though! 😂