DeepSeek testing
A collection of MoE+MLA models, serving as testing proxies for DeepSeek-V3/R1
Thien Tran (gaunernst)
Gemma 3 QAT INT4 (from Flax)
These are converted from the official QAT INT4 Flax checkpoints on Kaggle. Supported formats: AutoAWQ, GGUF
- gaunernst/gemma-3-1b-it-int4-awq
  Text Generation • Updated • 11.4k • 2
- gaunernst/gemma-3-4b-it-int4-awq
  Image-Text-to-Text • Updated • 37.5k • 6
- gaunernst/gemma-3-12b-it-int4-awq
  Image-Text-to-Text • 12B • Updated • 17.4k • 24
- gaunernst/gemma-3-27b-it-int4-awq
  Image-Text-to-Text • 27B • Updated • 22.4k • 39
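As a rough illustration of what these INT4 checkpoints store (a minimal sketch of symmetric per-group quantization, not the exact QAT or AWQ recipe; the group size of 32 is an assumption for the example):

```python
import numpy as np

def quantize_int4(w, group_size=32):
    """Symmetric per-group INT4 quantization: each group of weights
    shares one float scale; values are rounded to integers in [-8, 7]."""
    w = w.reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # map the max |w| of each group to 7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from INT4 codes and per-group scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=128).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
# Rounding error is at most half a quantization step, i.e. scale / 2 per group
print(float(np.abs(w - w_hat).max()))
```

In a real checkpoint the INT4 codes are additionally packed two-per-byte, and AWQ chooses the scales to protect activation-salient weights rather than simple max-abs.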
Face Recognition Models
- gaunernst/vit_small_patch8_gap_112.cosface_ms1mv3
  Image Feature Extraction • Updated • 47 • 2
- gaunernst/vit_tiny_patch8_112.cosface_ms1mv3
  Image Feature Extraction • Updated • 5 • 2
- gaunernst/vit_tiny_patch8_112.arcface_ms1mv3
  Image Feature Extraction • Updated • 81 • 4
- gaunernst/vit_tiny_patch8_112.adaface_ms1mv3
  Image Feature Extraction • Updated • 67 • 2
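Models trained with CosFace/ArcFace/AdaFace losses, like these, produce fixed-length face embeddings that are compared by cosine similarity against a decision threshold. A minimal sketch (the threshold value here is illustrative and would need tuning on a validation set, not a value taken from these checkpoints):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

def same_person(emb_a, emb_b, threshold=0.4):
    # Threshold is model- and dataset-dependent; tune it on held-out pairs.
    return cosine_similarity(emb_a, emb_b) >= threshold

v = np.array([1.0, 0.0, 0.0])
print(cosine_similarity(v, v))  # identical embeddings give similarity 1.0
```

Because these losses normalize embeddings to the hypersphere during training, cosine similarity (equivalently, angular distance) is the natural comparison metric at inference time.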
LLMs 1B - 2B
Smallish LLM pre-training datasets
Llama3-compatible
- nvidia/Llama-3.1-Minitron-4B-Width-Base
  Text Generation • Updated • 2.56k • 193
- nvidia/Llama-3.1-Minitron-4B-Depth-Base
  Text Generation • 5B • Updated • 924 • 21
- meta-llama/Llama-3.1-8B-Instruct
  Text Generation • 8B • Updated • 7.62M • 5.59k
- meta-llama/Llama-3.1-8B
  Text Generation • 8B • Updated • 1.32M • 2.12k
Gemma 3 QAT INT4 (from GGUF)
Converted from the official Gemma 3 QAT GGUF checkpoints to AutoAWQ and compressed-tensors formats for ease of deployment
- gaunernst/gemma-3-1b-it-qat-autoawq
  Text Generation • Updated • 6
- gaunernst/gemma-3-4b-it-qat-autoawq
  Image-Text-to-Text • Updated • 152 • 2
- gaunernst/gemma-3-12b-it-qat-autoawq
  Image-Text-to-Text • 12B • Updated • 285 • 7
- gaunernst/gemma-3-27b-it-qat-autoawq
  Image-Text-to-Text • 27B • Updated • 946 • 12
Mini BERT models
Based on "Well-Read Students Learn Better" (https://arxiv.org/abs/1908.08962)
LLMs < 1B
LLMs 2B - 4B
Llama2-compatible