Active filters: llama-3
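The listing below is a Hub search result for this filter and can be reproduced programmatically. A minimal sketch, assuming the `huggingface_hub` client is installed; the download and like counts shown change over time:

```python
# Minimal sketch: reproduce a "llama-3" model search against the Hugging Face Hub.
# Assumes `pip install huggingface_hub`; counts will differ from the snapshot below.
from huggingface_hub import list_models

for model in list_models(search="llama-3", sort="downloads", direction=-1, limit=30):
    print(f"{model.id} • {model.pipeline_tag} • {model.downloads} downloads • {model.likes} likes")
```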
nvidia/Llama-3_3-Nemotron-Super-49B-v1_5 • Text Generation • 50B params • 6.71k downloads • 146 likes
meta-llama/Llama-3.1-8B-Instruct • Text Generation • 8B params • 11.7M downloads • 4.41k likes
meta-llama/Meta-Llama-3-8B-Instruct • Text Generation • 8B params • 1.07M downloads • 4.11k likes
meta-llama/Llama-3.3-70B-Instruct • Text Generation • 71B params • 386k downloads • 2.46k likes
meta-llama/Llama-3.1-8B • Text Generation • 8B params • 811k downloads • 1.72k likes
meta-llama/Llama-3.2-3B-Instruct • Text Generation • 3B params • 1.99M downloads • 1.64k likes
meta-llama/Meta-Llama-3-8B • Text Generation • 8B params • 413k downloads • 6.27k likes
DavidAU/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF • Text Generation • 18B params • 99.5k downloads • 296 likes
nvidia/Llama-3_3-Nemotron-Super-49B-v1_5-FP8 • Text Generation • 50B params • 229 downloads • 9 likes
meta-llama/Llama-3.2-1B • Text Generation • 1B params • 2.79M downloads • 2.02k likes
meta-llama/Llama-3.2-11B-Vision-Instruct • Image-Text-to-Text • 11B params • 368k downloads • 1.5k likes
meta-llama/Llama-3.2-1B-Instruct • Text Generation • 1B params • 3.45M downloads • 1.02k likes
nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 • Text Generation • 5B params • 14.5k downloads • 103 likes
QuantFactory/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF • Text Generation • 8B params • 18.3k downloads • 104 likes
bartowski/Llama-3.2-3B-Instruct-GGUF • Text Generation • 3B params • 447k downloads • 161 likes
beomi/Llama-3-Open-Ko-8B • Text Generation • 8B params • 7.51k downloads • 152 likes
meta-llama/Llama-3.1-70B-Instruct • Text Generation • 71B params • 440k downloads • 834 likes
bartowski/Llama-3.2-1B-Instruct-GGUF • Text Generation • 1B params • 54.5k downloads • 126 likes
DavidAU/Llama-3.2-8X4B-MOE-V2-Dark-Champion-Instruct-uncensored-abliterated-21B-GGUF • Text Generation • 21B params • 12.3k downloads • 71 likes
meta-llama/Meta-Llama-3-70B-Instruct • Text Generation • 71B params • 63.2k downloads • 1.49k likes
aaditya/Llama3-OpenBioLLM-70B • Text Generation • 25.2k downloads • 471 likes
DavidAU/L3-8B-Stheno-v3.3-32K-Ultra-NEO-V1-IMATRIX-GGUF • Text Generation • 8B params • 1.4k downloads • 14 likes
meta-llama/Llama-3.2-3B • Text Generation • 3B params • 320k downloads • 612 likes
DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
DavidAU/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF • Text Generation • 25B params • 3.51k downloads • 30 likes
nvidia/Llama-3.1-Nemotron-Nano-8B-v1 • Text Generation • 8B params • 125k downloads • 195 likes
nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 • Text Generation • 253B params • 160k downloads • 327 likes
aaditya/Llama3-OpenBioLLM-8B • Text Generation • 7.3k downloads • 213 likes
meta-llama/Llama-3.1-405B • Text Generation • 406B params • 8.68k downloads • 937 likes
meta-llama/Llama-3.1-405B-Instruct • Text Generation • 406B params • 30.7k downloads • 576 likes
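Most entries above carry the Text Generation task tag, so any of them can be tried locally the same way. A minimal sketch using one of the listed checkpoints (meta-llama/Llama-3.2-1B-Instruct, the smallest instruct model above); it assumes `transformers` and a PyTorch backend are installed, and that access to the gated meta-llama repository has been granted on the Hub:

```python
# Minimal sketch: run one of the listed "Text Generation" models with transformers.
# Assumes `pip install transformers torch` and prior `huggingface-cli login` with
# access to the gated meta-llama checkpoints.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # smallest instruct model in the list above
)

messages = [{"role": "user", "content": "Summarize the Llama 3 model family in one sentence."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```

The GGUF entries in the list (bartowski, QuantFactory, DavidAU) are quantized conversions intended for llama.cpp-style runtimes rather than for loading directly with this pipeline.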