Active filters: int4
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit • Text Generation • 11B • Updated • 8 • 4
ModelCloud/Mistral-Large-Instruct-2407-gptq-4bit • Text Generation • 17B • Updated • 4 • 1
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16 • Text Generation • 2B • Updated • 21.4k • 29
angeloc1/llama3dot1SimilarProcesses4 • Text Generation • 8B • Updated • 4
angeloc1/llama3dot1DifferentProcesses4 • Text Generation • 8B • Updated • 4
ModelCloud/Meta-Llama-3.1-405B-Instruct-gptq-4bit • Text Generation • 59B • Updated • 2 • 2
RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 • Text Generation • 11B • Updated • 11.3k • 32
ModelCloud/EXAONE-3.0-7.8B-Instruct-gptq-4bit • 2B • Updated • 3 • 3
RedHatAI/Meta-Llama-3.1-405B-Instruct-quantized.w4a16 • Text Generation • 58B • Updated • 154 • 12
angeloc1/llama3dot1FoodDel4v05 • Text Generation • 8B • Updated • 3
zzzmahesh/Meta-Llama-3-8B-Instruct-quantized.w4a4 • Text Generation • 2B • Updated • 10 • 1
ModelCloud/GRIN-MoE-gptq-4bit • 6B • Updated • 2 • 6
joshmiller656/Llama3.2-1B-AWQ-INT4 • 0.7B • Updated • 19
Advantech-EIOT/intel_llama-3.1-8b-instruct
RedHatAI/Qwen2.5-7B-quantized.w4a16 • Text Generation • 2B • Updated • 48
joshmiller656/Llama-3.1-Nemotron-70B-Instruct-AWQ-INT4 • Text Generation • 11B • Updated • 2.17k • 3
ModelCloud/Llama-3.2-1B-Instruct-gptqmodel-4bit-vortex-v1 • Text Generation • 0.7B • Updated • 946 • 2
jojo1899/llama-3_1-8b-instruct-ov-int4
ModelCloud/Llama-3.2-1B-Instruct-gptqmodel-4bit-vortex-v2 • Text Generation • 0.7B • Updated • 6 • 3
ModelCloud/Llama-3.2-3B-Instruct-gptqmodel-4bit-vortex-v3 • Text Generation • 1B • Updated • 846 • 5
tclf90/qwen2.5-72b-instruct-gptq-int4 • Text Generation • 12B • Updated • 7
ModelCloud/Llama-3.2-1B-Instruct-gptqmodel-4bit-vortex-v2.5 • Text Generation • 0.7B • Updated • 1.02k • 5
jojo1899/Phi-3.5-mini-instruct-ov-int4
ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1 • Text Generation • 7B • Updated • 26 • 15
RedHatAI/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-FP8-dynamic • Text Generation • 8B • Updated • 5
RedHatAI/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-quantized.w4a16 • Text Generation • 2B • Updated • 7
ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v1 • Text Generation • 7B • Updated • 8 • 51
ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v2 • Text Generation • 7B • Updated • 4 • 16
ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v3 • Text Generation • 7B • Updated • 5 • 14
ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1 • Text Generation • 2B • Updated • 4 • 3