RedHatAI/Phi-3-medium-128k-instruct-quantized.w8a8 • Text Generation • 14B • Updated • 18 • 2
RedHatAI/Phi-3-medium-128k-instruct-quantized.w8a16 • Text Generation • 4B • Updated • 7 • 2
RedHatAI/Phi-3-medium-128k-instruct-FP8 • Text Generation • 14B • Updated • 7 • 5
RedHatAI/Qwen2.5-32B-Instruct-quantized.w8a16 • 9B • Updated • 5
RedHatAI/Qwen2.5-7B-Instruct-quantized.w8a16 • 3B • Updated • 8
RedHatAI/Qwen2.5-0.5B-Instruct-quantized.w8a16 • 0.4B • Updated • 5
RedHatAI/Qwen2.5-72B-Instruct-quantized.w8a8 • 73B • Updated • 12
RedHatAI/Qwen2.5-32B-Instruct-quantized.w8a8 • 33B • Updated • 24
RedHatAI/Llama-3.2-1B-FP8 • 1B • Updated • 30
RedHatAI/Qwen2.5-32B-quantized.w8a8 • 33B • Updated • 12
RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8 • Text Generation • 406B • Updated • 2.57k • 31
RedHatAI/Qwen2.5-3B-Instruct-quantized.w8a8 • 3B • Updated • 36
RedHatAI/Qwen2.5-1.5B-Instruct-quantized.w8a8 • 2B • Updated • 7
RedHatAI/SparseLlama-3-8B-pruned_50.2of4 • Text Generation • 8B • Updated • 30
RedHatAI/Llama-3.2-90B-Vision-Instruct-FP8-dynamic • Text Generation • 89B • Updated • 4.37k • 10
RedHatAI/Llama-3.2-11B-Vision-Instruct-FP8-dynamic • Text Generation • 11B • Updated • 3.96k • 24
RedHatAI/Phi-3.5-mini-instruct-FP8-KV • Text Generation • 4B • Updated • 40 • 2
RedHatAI/Meta-Llama-3-70B-Instruct-quantized.w4a16 • Text Generation • 11B • Updated • 2.62k • 2
RedHatAI/SmolLM-135M-q • Updated
RedHatAI/Mixtral-8x22B-Instruct-v0.1-AutoFP8 • Text Generation • 141B • Updated • 8 • 3
RedHatAI/DeepSeek-Coder-V2-Base-FP8 • Text Generation • 236B • Updated • 23
RedHatAI/DeepSeek-Coder-V2-Instruct-FP8 • Text Generation • 236B • Updated • 1.25k • 7
RedHatAI/Mistral-Nemo-Instruct-2407-FP8 • Text Generation • 12B • Updated • 1.27k • 18
RedHatAI/Qwen2-57B-A14B-Instruct-FP8 • Text Generation • 57B • Updated • 1.41k • 1
RedHatAI/Llama-2-7b-chat-hf-FP8 • Text Generation • 7B • Updated • 1.15k
RedHatAI/Mistral-7B-Instruct-v0.3-FP8 • Text Generation • 7B • Updated • 1.88k • 2
RedHatAI/Qwen2-0.5B-Instruct-FP8 • Text Generation • 0.5B • Updated • 1.41k • 3
RedHatAI/Qwen2-1.5B-Instruct-FP8 • Text Generation • 2B • Updated • 4.43k
RedHatAI/Qwen2-7B-Instruct-FP8 • Text Generation • 8B • Updated • 14.5k • 2
RedHatAI/Qwen2-72B-Instruct-FP8 • Text Generation • 73B • Updated • 1.52k • 15
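
The checkpoints above are compressed variants (FP8, INT8 w8a8/w8a16, INT4 w4a16, and 2:4 sparsity) intended for efficient inference serving. As a minimal sketch, assuming one of the smaller listed checkpoints loads through vLLM's standard LLM API with its quantization config auto-detected (the prompt and sampling settings here are purely illustrative):

```python
# Minimal sketch, not an official recipe: load one of the listed FP8
# checkpoints with vLLM and run a single generation.
from vllm import LLM, SamplingParams

# Model name taken from the listing above; assumes vLLM can auto-detect
# its quantization config from the checkpoint metadata.
llm = LLM(model="RedHatAI/Qwen2-0.5B-Instruct-FP8")

# Illustrative sampling settings.
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["Explain FP8 weight quantization in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```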