AI & ML interests

None defined yet.

Recent Activity

danielhanchen posted an update about 17 hours ago
We’re excited to announce that Unsloth has joined the PyTorch Ecosystem! 🔥🦥

Unsloth is an open-source project that makes training and running models faster and more accurate while using less compute. Our mission is to make local AI accessible to everyone. Thanks to all of you for making this possible! 💕

Blog: https://unsloth.ai/blog/pytorch
GitHub: https://github.com/unslothai/unsloth
danielhanchen posted an update 5 days ago
We collaborated with NVIDIA to teach you how we made LLM training ~25% faster! 🚀

Learn how 3 optimizations help your home GPU train models faster:
1. Packed-sequence metadata caching
2. Double-buffered checkpoint reloads
3. Faster MoE routing

Guide: https://unsloth.ai/blog/nvidia-collab
GitHub: https://github.com/unslothai/unsloth
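As a rough illustration of the first optimization above: packed-sequence attention kernels consume cumulative sequence-length offsets, and when batches repeat the same packing pattern that metadata can be memoized instead of rebuilt every step. This is a toy sketch of the idea under that assumption, not Unsloth's actual code; the `cu_seqlens` helper name is hypothetical.

```python
from functools import lru_cache

# Hypothetical sketch of packed-sequence metadata caching: attention kernels
# for packed batches need cumulative offsets marking where each sequence
# starts and ends. Memoizing on the tuple of lengths skips recomputation
# whenever a batch repeats a packing pattern seen before.

@lru_cache(maxsize=1024)
def cu_seqlens(lengths: tuple[int, ...]) -> tuple[int, ...]:
    """Cumulative sequence-length offsets for a packed batch."""
    offsets = [0]
    for n in lengths:
        offsets.append(offsets[-1] + n)
    return tuple(offsets)

# Two batches with the same packing pattern hit the cache the second time.
print(cu_seqlens((3, 5, 2)))   # (0, 3, 8, 10)
```

The cache key is just the length pattern, so the win depends on how often your dataloader reproduces the same packing layout.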
Aurelien-Morgan posted an update 9 days ago
@retrain-pipelines v0.2.0 is out!
I'm at my booth at Station F for GOSIM Paris 2026 today and tomorrow.
Come meet me for a live in-person demo and a chat!
danielhanchen posted an update 9 days ago
We made a guide on how to run open LLMs in Claude Code, Codex and OpenClaw.

Use Gemma 4 and Qwen3.6 GGUFs for local agentic coding on 24GB RAM.

Run with self-healing tool calls, code execution, and web search via the Unsloth API endpoint and llama.cpp.

Guide: https://unsloth.ai/docs/basics/api
danielhanchen posted an update 15 days ago
Unsloth is now one of the top 10 most followed organizations on Hugging Face. 🤗🦥

Thanks so much for all the support!
Our HF page: unsloth
danielhanchen posted an update 22 days ago
danielhanchen posted an update 28 days ago
Aurelien-Morgan posted an update about 1 month ago
Launching a workweek of @retrain-pipelines wheels.

Day #1: Compose
danielhanchen posted an update about 1 month ago
danielhanchen posted an update about 1 month ago
danielhanchen posted an update about 1 month ago
A new way to use Unsloth.

Coming soon...
danielhanchen posted an update about 2 months ago
You don’t need to set LLM parameters anymore! 🚀

llama.cpp now uses only as much context length and compute as your local setup needs, and Unsloth auto-applies the correct model settings.

Try it in Unsloth Studio, now with precompiled llama.cpp binaries.

GitHub: https://github.com/unslothai/unsloth
danielhanchen posted an update about 2 months ago
Introducing Unsloth Studio ✨
A new open-source web UI to train and run LLMs.

• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF

GitHub: https://github.com/unslothai/unsloth
Blog and Guide: https://unsloth.ai/docs/new/studio

Available now on Hugging Face, NVIDIA, Docker and Colab.
danielhanchen posted an update 2 months ago
We collaborated with NVIDIA to teach you about Reinforcement Learning and RL environments. 💚 Learn:

• Why RL environments matter + how to build them
• When RL is better than SFT
• GRPO and RL best practices
• How verifiable rewards and RLVR work

Blog: https://unsloth.ai/blog/rl-environments
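To make the GRPO bullet concrete: the core idea is to sample a group of completions per prompt, score each with a verifiable reward, and use the group-normalized reward as the advantage, with no learned value model. This is an illustrative sketch of that normalization only, under my own assumptions, not code from the blog.

```python
import statistics

# Illustrative GRPO-style advantage computation (hypothetical helper, not
# the blog's code): each sampled completion's reward is centered and scaled
# by the statistics of its own group, so "better than the group" is positive
# and "worse than the group" is negative.

def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)   # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]

# A group of 4 sampled answers where only the first passed the verifier:
print(grpo_advantages([1.0, 0.0, 0.0, 0.0]))
```

Because the advantages are centered within each group, they always sum to roughly zero: the passing answer is pushed up exactly as much as the failing ones are pushed down.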
danielhanchen posted an update 3 months ago
100,000+ models trained with Unsloth have now been open-sourced on 🤗 Hugging Face! 🦥

Here are the most popular ones you can run locally:
1. TeichAI - GLM-4.7-Flash distilled from Claude 4.5 Opus (high)
2. Zed - Qwen Coder 7B fine-tuned for stronger coding
3. DavidAU - Llama-3.3-8B distilled from Claude 4.5 Opus (high)
4. huihui - gpt-oss made “abliterated”

Links to models:
1. TeichAI: TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF
2. Zed: zed-industries/zeta
3. DavidAU: DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning
4. huihui: huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated

See all 100K+ models fine-tuned with Unsloth here: https://huggingface.co/models?other=u
danielhanchen posted an update 3 months ago
danielhanchen posted an update 3 months ago
We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗

Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
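A common reason MoE kernels like these help is that naive MoE code runs one tiny matmul per token, while grouped kernels gather the tokens routed to each expert and run one larger matmul per expert. Here is a toy NumPy illustration of that grouping idea only; it assumes top-1 routing and is in no way Unsloth's Triton implementation.

```python
import numpy as np

# Toy illustration of grouped MoE computation (assumed setup, not Unsloth's
# kernels): gather the tokens assigned to each expert and run one batched
# matmul per expert, instead of a per-token loop.

rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))          # 8 tokens, hidden size 4
experts = rng.standard_normal((2, 4, 4))      # 2 experts, each a 4x4 weight
route = np.array([0, 1, 0, 0, 1, 1, 0, 1])    # top-1 expert id per token

out = np.empty_like(tokens)
for e in range(experts.shape[0]):
    idx = np.where(route == e)[0]             # tokens routed to expert e
    out[idx] = tokens[idx] @ experts[e]       # one matmul for the whole group

# Same math as the naive per-token loop, just batched per expert:
naive = np.stack([tokens[i] @ experts[route[i]] for i in range(8)])
print(np.allclose(out, naive))                # True
```

The grouped form does identical arithmetic; the speedup comes from replacing many small GEMMs with a few large ones that keep the GPU busy.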
danielhanchen posted an update 4 months ago