AI & ML interests

Optimised quants for high-throughput deployments! Compatible with Transformers, TGI & vLLM 🤗

Xenova posted an update 8 days ago
The next generation of AI-powered websites is going to be WILD! 🤯

In-browser tool calling & MCP are finally here, allowing LLMs to interact with websites programmatically.

To show what's possible, I built a demo using Liquid AI's new LFM2 model, powered by 🤗 Transformers.js: LiquidAI/LFM2-WebGPU

As always, the demo is open source (which you can find under the "Files" tab), so I'm excited to see how the community builds upon this! 🚀
Xenova posted an update 21 days ago
Introducing Voxtral WebGPU: State-of-the-art audio transcription directly in your browser! 🤯
🗣️ Transcribe videos, meeting notes, songs and more
🔐 Runs on-device, meaning no data is sent to a server
🌎 Multilingual (8 languages)
🤗 Completely free (forever) & open source

That's right, we're running Mistral's new Voxtral-Mini-3B model 100% locally in-browser on WebGPU, powered by Transformers.js and ONNX Runtime Web! 🔥

Try it out yourself! 👇
webml-community/Voxtral-WebGPU
danieldk posted an update 30 days ago
kernels 0.8.0 is out: https://github.com/huggingface/kernels/releases/tag/v0.8.0

This release refines kernel selection in the kernelize function:

• You can now register kernels for specific CUDA capability ranges.
• Rather than requiring an exact match on modes, kernelize now falls back to other compatible modes: if you are kernelizing for inference but only registered a training + torch.compile kernel, it will use that kernel, since it is compatible with inference as well (see the sketch below).
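
Here's a minimal sketch of the fallback behaviour using the layer API, assuming the mode-keyed mapping from the kernels docs; the layer name and repo id are placeholders, and the capability-range syntax is omitted since it's best taken from the release notes:

```python
import torch.nn as nn
from kernels import (
    LayerRepository,
    Mode,
    kernelize,
    register_kernel_mapping,
    use_kernel_forward_from_hub,
)

# Opt a layer into kernel replacement ("SiluAndMul" is a placeholder name).
@use_kernel_forward_from_hub("SiluAndMul")
class SiluAndMul(nn.Module):
    def forward(self, x):
        gate, up = x.chunk(2, dim=-1)
        return nn.functional.silu(gate) * up

# Only a training + torch.compile kernel is registered here
# (the repo id is illustrative).
register_kernel_mapping({
    "SiluAndMul": {
        "cuda": {
            Mode.TRAINING | Mode.TORCH_COMPILE: LayerRepository(
                repo_id="kernels-community/activation",
                layer_name="SiluAndMul",
            ),
        }
    }
})

model = nn.Sequential(SiluAndMul())

# No exact Mode.INFERENCE registration exists, so kernelize falls back
# to the compatible training + torch.compile kernel instead of erroring.
model = kernelize(model, mode=Mode.INFERENCE, device="cuda")
```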
danieldk posted an update about 1 month ago
Kernels 0.7.0 is out: https://github.com/huggingface/kernels/releases/tag/v0.7.0 🚀

This release makes it possible to register multiple kernels for a layer. Do you have a super-fast kernel for inference and another kernel for training? Register them both and kernelize will pick the kernel depending on whether you are going to do training or inference.
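
As a rough sketch (the layer name and repo ids are placeholders, and the training repo is hypothetical), registering both kernels might look like this:

```python
from kernels import LayerRepository, Mode, register_kernel_mapping

# Two kernels for the same layer; kernelize picks one by mode.
register_kernel_mapping({
    "SiluAndMul": {  # placeholder layer name
        "cuda": {
            Mode.INFERENCE: LayerRepository(
                repo_id="kernels-community/activation",
                layer_name="SiluAndMul",
            ),
            Mode.TRAINING: LayerRepository(
                repo_id="kernels-community/activation-training",  # hypothetical repo
                layer_name="SiluAndMul",
            ),
        }
    }
})

# Later, e.g.:
#   kernelize(model, mode=Mode.INFERENCE)  # uses the inference kernel
#   kernelize(model, mode=Mode.TRAINING)   # uses the training kernel
```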
reach-vb posted an update 2 months ago
Excited to onboard FeatherlessAI on Hugging Face as an Inference Provider - they bring a fleet of 6,700+ LLMs on-demand to the Hugging Face Hub 🤯

Starting today, you'll be able to access all of those LLMs (OpenAI-compatible) on HF model pages and via OpenAI client libraries too! 💥

Go play with it today: https://huggingface.co/blog/inference-providers-featherless

P.S. They're also bringing on more GPUs to support all your concurrent requests!
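
For the curious, here's a sketch of the OpenAI-compatible path; the router base URL and the ":featherless-ai" provider suffix follow the Inference Providers docs, and the model id is just an example:

```python
from openai import OpenAI

# Hugging Face's OpenAI-compatible router, authenticated with an HF token.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key="hf_...",  # your Hugging Face access token
)

# The ":featherless-ai" suffix pins the request to the Featherless provider;
# the model id itself is just an example.
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct:featherless-ai",
    messages=[{"role": "user", "content": "Hello from the Hub!"}],
)
print(completion.choices[0].message.content)
```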
dvilasuero posted an update 2 months ago
Super excited to launch Hugging Face Sheets: Spreadsheets meet AI and unstructured data.

A few months ago, we started imagining new ways to build and transform datasets with the latest open-source models.

Today, I'm thrilled to introduce our first step in this direction.


In a nutshell:

πŸ“ Effortlessly run prompts and models over your data.
🌐 Agentic search for accuracy and real-time information.
πŸ–ΌοΈ Familiar, minimalistic interface for interacting with data.
🎯 Human feedback 2.0: Your input directly improves generated data.
πŸ’― Access hundreds of open models and leading inference providers.

Go to this space to try it out!

aisheets/sheets

Leave your questions below; we're just getting started!
Xenova posted an update 2 months ago
NEW: Real-time conversational AI models can now run 100% locally in your browser! 🤯

🔐 Privacy by design (no data leaves your device)
💰 Completely free... forever
📦 Zero installation required, just visit a website
⚡️ Blazingly fast WebGPU-accelerated inference

Try it out: webml-community/conversational-webgpu

For those interested, here's how it works:
- Silero VAD for voice activity detection
- Whisper for speech recognition
- SmolLM2-1.7B for text generation
- Kokoro for text-to-speech

Powered by Transformers.js and ONNX Runtime Web! 🤗 I hope you like it!
danieldk posted an update 2 months ago
We have been working on a project called kernels. kernels makes it possible to load compute kernels directly from the Hub! 🚀

We plan to give kernels a more proper introduction soon. But for those who have been following along, we are happy to announce a new release:

- New layer API with torch.compile support.
- Experimental support for loading Apple Silicon Metal 🤘 kernels.
- Generate wheels from Hub kernels for legacy deployments.

Full release notes here: https://github.com/huggingface/kernels/releases/tag/v0.6.0
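
If you're new to the project, the core idea fits in a few lines; this sketch follows the example in the kernels README (requires a CUDA GPU):

```python
import torch
from kernels import get_kernel

# Downloads the kernel from the Hub (and caches it) on first use.
activation = get_kernel("kernels-community/activation")

x = torch.randn(10, 10, dtype=torch.float16, device="cuda")
y = torch.empty_like(x)
activation.gelu_fast(y, x)  # the kernel writes its result into y
print(y)
```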
joaogante posted an update 3 months ago
Let's go! Custom generation code has landed in transformers 🚀

Have you designed a new cool KV cache? Maybe you're comparing new test-time compute ideas you've been researching? Have you found a way to do diffusion with existing models? You can now easily share your findings with the community as custom generation code, all through the well-known generate interface 🤓

In a nutshell, we have expanded the support of custom modeling code on the Hub with *model-agnostic* custom generation code. Write for one model, reuse with any model -- hopefully, this will democratize access to new generation ideas 🫡

As a creator, you gain the ability to get your ideas into transformers with minimal effort. You'll also have access to all Hub features: a landing page for your creation, discussions, usage metrics, ... 🤓

💎 Resources 💎
- docs: https://huggingface.co/docs/transformers/generation_strategies#custom-decoding-methods
- minimal example: transformers-community/custom_generate_example
- discussion: transformers-community/support#10
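
In practice, adopting a custom decoding method is a one-argument change to generate. A minimal sketch following the docs above (the Qwen model id is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct", device_map="auto"
)
inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)

# custom_generate points at a Hub repo containing the decoding method;
# trust_remote_code is required because it executes code from the Hub.
gen_out = model.generate(
    **inputs,
    custom_generate="transformers-community/custom_generate_example",
    trust_remote_code=True,
)
print(tokenizer.batch_decode(gen_out, skip_special_tokens=True))
```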
reach-vb posted an update 3 months ago
hey hey @mradermacher - VB from Hugging Face here, we'd love to onboard you over to our optimised xet backend! 💥

as you know, we're in the process of upgrading our storage backend to xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub. Now that we are certain the backend can scale with even big models like Llama 4/Qwen 3, we're moving to the next phase: inviting impactful orgs and users on the hub over. As you are a big part of the open source ML community, we would love to onboard you next and create some excitement about it in the community too!

in terms of actual steps - it should be as simple as one of the org admins joining hf.co/join/xet - we'll take care of the rest.

p.s. you'd need to have the latest hf_xet version of the huggingface_hub lib, but everything else should be the same: https://huggingface.co/docs/hub/storage-backends#using-xet-storage

p.p.s. this is fully backwards compatible so everything will work as it should! 🤗
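
for anyone curious what changes client-side: nothing, really. A sketch assuming hf_xet is installed next to huggingface_hub (the repo and filename are just examples):

```python
# pip install -U huggingface_hub hf_xet
from huggingface_hub import hf_hub_download

# Same API as always; once a repo is Xet-backed, downloads (and uploads)
# transparently go through the new storage backend.
path = hf_hub_download(repo_id="Qwen/Qwen3-8B", filename="config.json")
print(path)
```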
Xenova posted an update 4 months ago
Introducing the ONNX model explorer: Browse, search, and visualize neural networks directly in your browser. 🤯 A great tool for anyone studying Machine Learning! We're also releasing the entire dataset of graphs so you can use them in your own projects! 🤗

Check it out! 👇
Demo: onnx-community/model-explorer
Dataset: onnx-community/model-explorer
Source code: https://github.com/xenova/model-explorer
Xenova posted an update 4 months ago
Reasoning models like o3 and o4-mini are advancing faster than ever, but imagine what will be possible when they can run locally in your browser! 🤯

Well, with 🤗 Transformers.js, you can do just that! Here's Zyphra's new ZR1 model running at over 100 tokens/second on WebGPU! ⚡️

Giving models access to browser APIs (like File System, Screen Capture, and more) could unlock an entirely new class of web experiences that are personalized, interactive, and run locally in a secure, sandboxed environment.

For now, try out the demo! 👇
webml-community/Zyphra-ZR1-WebGPU