| Model | Task | Size | Downloads | Likes |
|---|---|---|---|---|
| RedHatAI/starcoder2-3b-quantized.w8a16 | Text Generation | 1B | 24 | |
| RedHatAI/starcoder2-15b-quantized.w8a16 | Text Generation | 4B | 23 | |
| RedHatAI/Meta-Llama-3.1-70B-quantized.w8a8 | Text Generation | 71B | 96 | |
| RedHatAI/Meta-Llama-3.1-405B-FP8 | Text Generation | 410B | 14 | |
| RedHatAI/Meta-Llama-3.1-70B-quantized.w8a16 | Text Generation | 19B | 21 | |
| RedHatAI/starcoder2-3b-FP8 | Text Generation | 3B | 14 | |
| RedHatAI/starcoder2-7b-FP8 | Text Generation | 7B | 13 | |
| RedHatAI/starcoder2-15b-FP8 | Text Generation | 16B | 12 | |
| RedHatAI/Mistral-Nemo-Instruct-2407-quantized.w8a16 | Text Generation | 4B | 56 | |
| RedHatAI/Meta-Llama-3.1-8B-quantized.w8a16 | Text Generation | 3B | 41 | 1 |
| RedHatAI/Meta-Llama-3.1-70B-FP8 | Text Generation | 71B | 1.23k | 2 |
| RedHatAI/Mistral-Large-Instruct-2407-FP8 | Text Generation | 123B | 14 | |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w8a16 | Text Generation | 19B | 1.16k | 5 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8 | Text Generation | 8B | 131k | 43 |
| RedHatAI/Mistral-7B-Instruct-v0.3-quantized.w8a8 | Text Generation | 7B | 548 | 2 |
| RedHatAI/Qwen2-72B-Instruct-quantized.w8a8 | Text Generation | 73B | 21 | 1 |
| RedHatAI/Meta-Llama-3-70B-Instruct-quantized.w8a8 | Text Generation | 71B | 59 | |
| RedHatAI/Qwen2-7B-Instruct-quantized.w8a8 | Text Generation | 8B | 29 | |
| RedHatAI/Phi-3-medium-128k-instruct-quantized.w4a16 | Text Generation | 2B | 4.95k | 3 |
| RedHatAI/Qwen2-0.5B-Instruct-quantized.w8a8 | Text Generation | 0.6B | 1.17k | |
| RedHatAI/Phi-3-mini-128k-instruct-quantized.w4a16 | Text Generation | 0.7B | 16 | 1 |
| RedHatAI/Qwen2-1.5B-Instruct-quantized.w8a8 | Text Generation | 2B | 1.18k | |
| RedHatAI/Phi-3-mini-128k-instruct-quantized.w8a8 | Text Generation | 4B | 74 | |
| RedHatAI/Meta-Llama-3-8B-Instruct-quantized.w8a8 | Text Generation | 8B | 4.3k | 2 |
| RedHatAI/Llama-2-7b-chat-quantized.w8a8 | Text Generation | 7B | 2.04k | 1 |
| RedHatAI/Phi-3-mini-128k-instruct-quantized.w8a16 | Text Generation | 1B | 8 | |
| RedHatAI/Phi-3-mini-128k-instruct-FP8 | Text Generation | 4B | 52 | |
| RedHatAI/Llama-3.2-3B-Instruct-FP8-dynamic | Text Generation | 4B | 1.26k | 3 |
| RedHatAI/Llama-3.2-1B-Instruct-FP8-dynamic | Text Generation | 1B | 19.8k | 3 |
| RedHatAI/gemma-2-9b-it-quantized.w8a8 | Text Generation | 10B | 40 | 2 |

Sizes are the parameter counts as displayed by the Hub; for low-bit weight-only repos (e.g. `w4a16`), packed weight storage can make the displayed count smaller than the base model's true parameter count.