@nyuuzyou we definitely want everyone to be on Xet in the future, so yup!



A month ago there were 5,500 users/orgs on Xet with 150K repos and 4PB. Today?
- 700,000 users/orgs
- 350,000 repos
- 15PB
Meanwhile, our migrations have pushed throughput to numbers that are bonkers. In June, we hit upload speeds of 577Gb/s (crossing 500Gb/s for the first time).
These are hard numbers to put into context, but let's try: 577Gb/s is roughly 72GB/s, or about 260TB moved every hour.
We now have ~32 Common Crawl crawls stored in Xet. At peak upload speed we could move the latest crawl into Xet in about two hours.
We're moving to a new phase in the process, so stay tuned.
This shift in gears means it's also time to roll up our sleeves and look at all the bytes we have and the value we're adding to the community.
I already have some homework from @RichardErkhov to look at the dedupe across their uploads, and I'll be doing the same for other early adopters, big models/datasets, and frequent uploaders (looking at you @bartowski 👀)
Let me know if there's anything you're interested in; happy to dig in!

we have launched Kernel Hub: easy optimized kernels for all models on Hugging Face 🔥 use them right away!
it's where the community populates optimized kernels 🤗
this release comes in three parts
> Kernel Hub: contains (as of now) 14 kernels
> kernels: Python library to load kernels from Kernel Hub
> kernel-builder: Nix package to build kernels for PyTorch (made using PyTorch C++ frontend)
when building models, your regular workflow should be pulling kernels from the Hub and building your model with them 🤗
here's a practical example with RMSNorm (sketched in code below):
1. pull the kernel from the Hub with get_kernel
2. decorate your layer with use_kernel_forward_from_hub
3. inject it into your model
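here's a rough sketch of those steps in code (the kernels-community/activation repo id comes from the launch blog; the RMSNorm layer-name mapping and the shapes are illustrative):

```python
# a sketch of the three steps; the activation repo id follows the blog's example,
# while the "RMSNorm" layer-name mapping and shapes are illustrative
import torch
import torch.nn as nn
from kernels import get_kernel, use_kernel_forward_from_hub

# 1. pull a kernel from the Hub and call it directly
activation = get_kernel("kernels-community/activation")
x = torch.randn((10, 10), dtype=torch.float16, device="cuda")
y = torch.empty_like(x)
activation.gelu_fast(y, x)

# 2. decorate a layer so its forward can be swapped for an optimized Hub kernel
@use_kernel_forward_from_hub("RMSNorm")
class RMSNorm(nn.Module):
    def __init__(self, hidden_size: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # plain PyTorch fallback; the Hub kernel replaces this when available
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return self.weight * hidden_states

# 3. inject it into your model like any other nn.Module
norm = RMSNorm(hidden_size=4096).to("cuda", dtype=torch.float16)
out = norm(torch.randn(2, 16, 4096, device="cuda", dtype=torch.float16))
```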
we'd love to hear your feedback! 🙏🏻
we also welcome kernel contributions from the community 🥹
- request kernels here: kernels-community/README#1
- check out the kernels-community org: https://huggingface.co/kernels-community
- read the blog: https://huggingface.co/blog/hello-hf-kernels

Cursor: Hold my beer.
Me: *Slacking off with colleagues*
Cursor: Ping.
Me: 🤯

kernels makes it possible to load compute kernels directly from the Hub! We plan to give kernels a more proper introduction soon. But for those who have been following along, we are happy to announce a new release:
- New layer API with torch.compile support (see the sketch below).
- Experimental support for loading Apple Silicon Metal 🤘 Kernels.
- Generate wheels from Hub kernels for legacy deployments.
Full release notes here: https://github.com/huggingface/kernels/releases/tag/v0.6.0
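For the torch.compile support specifically, here's a minimal sketch (kernelize and Mode follow the kernels layer-API docs; treat the exact flags and signatures as version-dependent assumptions):

```python
# a minimal sketch, assuming the kernelize/Mode layer API from the kernels docs;
# exact flags and signatures may differ between versions
import torch
import torch.nn as nn
from kernels import kernelize, use_kernel_forward_from_hub, Mode

@use_kernel_forward_from_hub("SiluAndMul")
class SiluAndMul(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # plain PyTorch fallback; kernelize() swaps in a matching Hub kernel
        d = x.shape[-1] // 2
        return nn.functional.silu(x[..., :d]) * x[..., d:]

model = nn.Sequential(SiluAndMul()).to("cuda", dtype=torch.float16)

# request kernels that are safe to use under torch.compile
model = kernelize(model, mode=Mode.INFERENCE | Mode.TORCH_COMPILE)
compiled = torch.compile(model)

x = torch.randn(8, 256, device="cuda", dtype=torch.float16)
print(compiled(x).shape)  # torch.Size([8, 128])
```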

Hey @RichardErkhov we've begun onboarding you to Xet! 🎉
All new repos you create will be Xet-enabled by default and your existing repos are being migrated as we speak.
Since you have a lot of repos, the migration of existing content may take some time. While it's ongoing you may notice instances where a repo is a mixture of LFS and Xet-backed files. This shouldn't be a problem due to how we manage backwards compatibility, but if you have any issues, please let me know here.
For new repos you create, just make sure to follow the instructions here to get the full benefits of using Xet storage.
I'll follow up here once all of your repos have been moved over!
Hey @mradermacher sorry for the delay; just wanted to let you know that your migration to Xet should be complete!
Feel free to ping me here if you have any questions or feedback. Excited to hear how it's been going! 🤗
Also, if you have any issues, don't hesitate to open a discussion here https://huggingface.co/spaces/xet-team/README/discussions or an issue on this repo https://github.com/huggingface/xet-core



Xet is now the default storage for new AI builders 🚀 🚀 🚀
Just sign up for an account, create a new model or dataset, pip install huggingface_hub and you're off to the races!
Read more here https://huggingface.co/changelog/xet-default-for-new-users
And for everyone with existing repositories, just sign up here https://huggingface.co/join/xet - we'll migrate all existing repositories to Xet and all new repos you create will be Xet-backed by default.
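If you're starting from zero, a first upload can be as small as this (a sketch; the repo id and filename are placeholders):

```python
# a minimal sketch of a first upload; "your-username/my-model" and
# "model.safetensors" are placeholders
from huggingface_hub import HfApi

api = HfApi()  # make sure hf_xet is installed for the fastest transfers
api.create_repo("your-username/my-model", repo_type="model", exist_ok=True)
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="your-username/my-model",
)
```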
Hey @mradermacher just wanted to let you know that we've begun onboarding you to Xet!
All new repos that you create will be Xet-enabled by default. We are still migrating existing repos, so you may see a mixture of LFS and Xet files side-by-side for a while, but as the migration progresses everything will become Xet.
As I mentioned in my last message, none of this is an issue due to how we've designed the system for backward compatibility, but if you have any questions or concerns, please let me know. Otherwise, I'll follow up here once all your repos are migrated!

Inspired by Tiny Agents in JS from @julien-c, we ported the idea to Python and integrated it directly into huggingface_hub, with a built-in MCP Client and a Tiny Agents CLI.
TL;DR: With MCP (Model Context Protocol), you can expose tools like web search or image generation and connect them directly to LLMs. It's simple and surprisingly powerful.
pip install "huggingface_hub[mcp]>=0.32.0"
We wrote a blog post where we show how to run Tiny Agents, and dive deeper into how they work and how to build your own.
👉 https://huggingface.co/blog/python-tiny-agents
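And here's roughly what driving one from Python looks like (a sketch based on the Agent helper that ships with huggingface_hub[mcp]; the model, provider, and server config are illustrative, and exact fields/method names may differ by version, so see the blog for the canonical example):

```python
# a hedged sketch: Agent ships with huggingface_hub[mcp], but treat the exact
# constructor fields and run() semantics as illustrative
import asyncio
from huggingface_hub import Agent

async def main() -> None:
    agent = Agent(
        model="Qwen/Qwen2.5-72B-Instruct",  # any tool-calling LLM
        provider="nebius",                   # any supported inference provider
        servers=[
            # an MCP server exposing tools, launched over stdio
            {"command": "npx", "args": ["mcp-remote", "https://huggingface.co/mcp"]},
        ],
    )
    await agent.load_tools()
    async for chunk in agent.run("What are the trending models on Hugging Face?"):
        print(chunk)

asyncio.run(main())
```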

Woohoo!! Thanks for joining β€οΈ I'll onboard you from the waitlist soon and follow up here when done.
Will do on the storage side - I'm also quite curious.
If you have any questions or feedback, don't hesitate to ping me here 🤗


We've been onboarding folks (see https://huggingface.co/blog/xet-on-the-hub). We know the backend can scale (Llama 4 and Qwen 3 are on Xet) and that it's great for working with quants (see xet-team/quantization-dedup), and we're now pushing on inviting impactful orgs and users on the Hub. You fit the bill.
We'd love to onboard you, get some feedback, and create some excitement 🎉
The steps are pretty straightforward - join the waitlist at hf.co/join/xet and we'll take care of the rest.
The system is fully backward compatible, so you shouldn't notice a thing. BUT to get the best experience when uploading/downloading, make sure you have hf_xet installed alongside the latest huggingface_hub.
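In practice that's just: pip install -U huggingface_hub hf_xet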
What do you think?
Woohoo! Xet team member here. Thanks for signing up @mradermacher π€
The migration process should be very seamless. Because of the way Xet supports backward compatibility (you can read about it here if you're interested: https://huggingface.co/docs/hub/storage-backends#backward-compatibility-with-lfs), everyone will continue to be able to access the repos before, during, and after the migration.
I'll onboard you from the waitlist this week and then follow up once everything is moved over! If you have any questions, don't hesitate to follow up here and @ me, happy to keep supporting all the work you're doing :)

as you know, we're in the process of upgrading our storage backend to Xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub. now that we're certain the backend can scale with even big models like Llama 4/Qwen 3, we're moving to the next phase: inviting impactful orgs and users on the Hub. as you're a big part of the open source ML community, we'd love to onboard you next and create some excitement about it in the community too!
in terms of actual steps - it should be as simple as one of the org admins joining hf.co/join/xet - we'll take care of the rest.
p.s. you'd need the latest hf_xet version alongside the huggingface_hub lib, but everything else should be the same: https://huggingface.co/docs/hub/storage-backends#using-xet-storage
p.p.s. this is fully backwards compatible so everything will work as it should! 🤗