Active filters: Distill
stepenZEN/DeepSeek-R1-Distill-Llama-70B-bitsandbytes-4bit • 72B • 5
prithivMLmods/QwQ-R1-Distill-1.5B-CoT • Text Generation • 2B • 8 • 4
mradermacher/QwQ-R1-Distill-1.5B-CoT-GGUF • 2B • 54 • 1
mradermacher/QwQ-R1-Distill-1.5B-CoT-i1-GGUF • 2B • 110
stepenZEN/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo • Text Generation • 2B • 4 • 3
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-GGUF • 2B • 230 • 5
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-i1-GGUF • 2B • 471
adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF • Text Generation • 2B • 7
RDson/LIMO-R1-Distill-Qwen-7B • 8B
mradermacher/LIMO-R1-Distill-Qwen-7B-GGUF • 8B • 134
prithivMLmods/Delta-Pavonis-Qwen-14B • Text Generation • 15B • 2 • 3
mradermacher/Delta-Pavonis-Qwen-14B-GGUF • 15B • 71 • 1
mradermacher/Delta-Pavonis-Qwen-14B-i1-GGUF • 15B • 165 • 1
prithivMLmods/Octantis-QwenR1-1.5B-Q8_0-GGUF • Text Generation • 2B • 26
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic • Text Generation • 4B • 63 • 4
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-GGUF • 4B • 230 • 1
mradermacher/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-GGUF • 4B • 130 • 2
mradermacher/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-i1-GGUF • 4B • 209 • 1
ChiKoi7/GPT-5-Distill-llama3.2-3B-Instruct-Heretic • 3B • 3 • 1
ChiKoi7/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-GGUF • 3B • 49
mradermacher/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-GGUF • 3B • 64 • 1
mradermacher/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-i1-GGUF • 3B • 187 • 2
DavidAU/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL • Text Generation • 8B • 55 • 5
mradermacher/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-GGUF • 8B • 347 • 1
mradermacher/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-i1-GGUF • 8B • 1.1k • 1
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-2Bit • Text Generation • 0.7B • 71
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-3Bit • Text Generation • 1.0B • 48
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-4Bit • Text Generation • 1B • 67
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-5Bit • Text Generation • 1B • 37
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-6Bit • Text Generation • 8B • 43