# QuantFactory/Tulu-3.1-8B-SuperNova-Smart-GGUF
This is a quantized version of bunnycore/Tulu-3.1-8B-SuperNova-Smart, created using llama.cpp.
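These GGUF files can be run with llama.cpp or any of its bindings. Below is a minimal sketch using llama-cpp-python; the repository id comes from this card, while the filename pattern, context size, and prompt are placeholders to adapt to whichever quantization you download (requires `llama-cpp-python` with `huggingface_hub` installed).

```python
# Minimal sketch: load one of the GGUF quantizations with llama-cpp-python.
# The repo_id is taken from this card; the filename glob is an assumption --
# check the repository's file list for the quantization you actually want.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Tulu-3.1-8B-SuperNova-Smart-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical pattern; pick a file that exists in the repo
    n_ctx=4096,               # context window to allocate
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a passthrough merge does."}]
)
print(out["choices"][0]["message"]["content"])
```

A manually downloaded file can also be loaded directly with `Llama(model_path="...")` instead of `from_pretrained`.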
## Original Model Card

### merge
This is a merge of pre-trained language models created using mergekit.
### Merge Details

#### Merge Method
This model was merged using the passthrough merge method, with bunnycore/Tulu-3.1-8B-SuperNova + bunnycore/Llama-3.1-8b-smart-lora as the base.
#### Models Merged
Only the base model and LoRA adapter listed above were used; no additional models were included in the merge (see the configuration below).
#### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: bunnycore/Tulu-3.1-8B-SuperNova+bunnycore/Llama-3.1-8b-smart-lora
dtype: bfloat16
merge_method: passthrough
models:
  - model: bunnycore/Tulu-3.1-8B-SuperNova+bunnycore/Llama-3.1-8b-smart-lora
```
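To reproduce a merge from this configuration, mergekit can be driven through its `mergekit-yaml` command-line tool or its Python API. The sketch below assumes the Python entry points documented in the mergekit README (`MergeConfiguration`, `run_merge`, `MergeOptions`); the output directory and option values are placeholders.

```python
# Sketch: re-running the passthrough merge above with mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit); the output path is a placeholder.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YAML = """
base_model: bunnycore/Tulu-3.1-8B-SuperNova+bunnycore/Llama-3.1-8b-smart-lora
dtype: bfloat16
merge_method: passthrough
models:
  - model: bunnycore/Tulu-3.1-8B-SuperNova+bunnycore/Llama-3.1-8b-smart-lora
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG_YAML))

run_merge(
    merge_config,
    "./Tulu-3.1-8B-SuperNova-Smart",  # output directory (placeholder)
    options=MergeOptions(
        copy_tokenizer=True,   # carry the tokenizer into the output directory
        lazy_unpickle=True,    # lower peak memory while loading shards
        low_cpu_memory=False,
    ),
)
```

The equivalent CLI invocation passes the same YAML file and output path to `mergekit-yaml`.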
## Hardware compatibility

Quantized variants are provided at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision.