Active filters: int4
RedHatAI/DeepSeek-R1-Distill-Qwen-32B-quantized.w4a16
Text Generation • 6B • Updated • 2.98k downloads • 5 likes

ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g
Image-Text-to-Text • 5B • Updated • 10.1k downloads • 38 likes

Advantech-EIOT/intel_llama-2-chat-7b
Text Generation • Updated • 3 downloads

RedHatAI/zephyr-7b-beta-marlin
Text Generation • 1B • Updated • 143 downloads

RedHatAI/TinyLlama-1.1B-Chat-v1.0-marlin
Text Generation • 0.3B • Updated • 6.38k downloads • 1 like

RedHatAI/OpenHermes-2.5-Mistral-7B-marlin
Text Generation • 1B • Updated • 153 downloads • 2 likes

RedHatAI/Nous-Hermes-2-Yi-34B-marlin
Text Generation • 5B • Updated • 7 downloads • 5 likes

ecastera/ecastera-eva-westlake-7b-spanish-int4-gguf
7B • Updated • 10 downloads • 2 likes

softmax/Llama-2-70b-chat-hf-marlin
Text Generation • 10B • Updated • 3 downloads

softmax/falcon-180B-chat-marlin
Text Generation • 26B • Updated • 10 downloads

study-hjt/Meta-Llama-3-8B-Instruct-GPTQ-Int4
Text Generation • 2B • Updated • 4 downloads

study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int4
Text Generation • 11B • Updated • 4 downloads • 6 likes

study-hjt/Meta-Llama-3-70B-Instruct-AWQ
Text Generation • 11B • Updated • 5 downloads

study-hjt/Qwen1.5-110B-Chat-GPTQ-Int4
Text Generation • 17B • Updated • 5 downloads • 2 likes

study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4
Text Generation • 2B • Updated • 5 downloads

study-hjt/Qwen1.5-110B-Chat-AWQ
Text Generation • 17B • Updated • 5 downloads

modelscope/Yi-1.5-34B-Chat-AWQ
Text Generation • 5B • Updated • 26 downloads • 1 like

modelscope/Yi-1.5-6B-Chat-GPTQ
Text Generation • 1B • Updated • 11 downloads

modelscope/Yi-1.5-6B-Chat-AWQ
Text Generation • 1B • Updated • 7 downloads

modelscope/Yi-1.5-9B-Chat-GPTQ
Text Generation • 2B • Updated • 5 downloads • 1 like

modelscope/Yi-1.5-9B-Chat-AWQ
Text Generation • 2B • Updated • 14 downloads

modelscope/Yi-1.5-34B-Chat-GPTQ
Text Generation • 5B • Updated • 5 downloads • 1 like

jojo1899/Phi-3-mini-128k-instruct-ov-int4
Text Generation • Updated • 4 downloads

jojo1899/Llama-2-13b-chat-hf-ov-int4
Text Generation • Updated • 4 downloads

jojo1899/Mistral-7B-Instruct-v0.2-ov-int4
Text Generation • Updated • 3 downloads

model-scope/glm-4-9b-chat-GPTQ-Int4
Text Generation • 2B • Updated • 10 downloads • 6 likes

ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit
Text Generation • 3B • Updated • 73 downloads • 4 likes

ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit
Text Generation • 2B • Updated • 1.53k downloads • 4 likes

ModelCloud/Meta-Llama-3.1-8B-gptq-4bit
Text Generation • 2B • Updated • 7 downloads

ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit
Text Generation • 11B • Updated • 8 downloads • 4 likes
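The entries above are int4 checkpoints in several formats (GPTQ, AWQ, Marlin, GGUF, OpenVINO). As a minimal usage sketch, not part of the listing itself: loading one of the GPTQ-quantized entries with the Hugging Face transformers library. This assumes a GPTQ-capable backend (e.g. the gptqmodel package) and accelerate are installed; the model ID is taken from the list above, and any other ID from the list with a matching format should work the same way.

```python
# Minimal sketch: load an int4 GPTQ checkpoint from the listing above.
# Assumes: pip install transformers accelerate gptqmodel
# transformers reads the quantization config stored in the checkpoint,
# so no extra quantization arguments are needed at load time.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit"  # one entry from the list

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # requires accelerate; places weights on available GPUs/CPU
)

# Quick smoke test: generate a short continuation.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The GGUF and OpenVINO (`-ov-int4`) entries use different runtimes (llama.cpp and OpenVINO, respectively) and would not load through this path.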