QuantFactory/dolphin-2.9.4-gemma2-2b-GGUF

This is a quantized version of cognitivecomputations/dolphin-2.9.4-gemma2-2b, created with llama.cpp.

Original Model Card


GGUF
Model size: 2.61B params
Architecture: gemma2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
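As a rough guide, the download size for each quantization level can be estimated from the 2.61B parameter count listed above. This is only a sketch: it assumes a uniform bits-per-weight figure and ignores GGUF metadata and the mixed-precision tensors that real quantization schemes (e.g. the K-quants) use, so actual file sizes will differ somewhat.

```python
# Rough GGUF file-size estimate per quantization level.
# Assumption (not from the model card): size ≈ params × bits / 8,
# with no allowance for metadata or mixed-precision layers.
PARAMS = 2.61e9  # parameter count reported for this model


def approx_size_gb(bits: int) -> float:
    """Approximate quantized file size in gigabytes (decimal GB)."""
    return PARAMS * bits / 8 / 1e9


for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.2f} GB")
```

For example, the 4-bit variant works out to roughly 1.3 GB by this estimate, which is the right order of magnitude for a 2.6B-parameter GGUF file.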
