---
license: apache-2.0
tags:
- gguf
- coding
- quantized
- Q6_K
- olympiccoder
- llama.cpp
- sychonix
inference: false
base_model: bartowski/open-r1_OlympicCoder-7B-GGUF
model_type: qwen2
---
# OlympicCoder 7B Q6

A Q6_K-quantized GGUF build of OlympicCoder 7B, aimed at algorithmic reasoning, coding challenges, and symbolic inference.
## Model Details
- Model Name: OlympicCoder 7B Q6
- Quantization: Q6_K
- Format: GGUF
- Size: 6.25 GB
- Architecture: 7B decoder-only transformer (OlympicCoder-7B is a fine-tune of Qwen2.5-Coder-7B-Instruct)
- Compatibility: llama.cpp, KoboldCpp, LM Studio, text-generation-webui (see the download sketch below)
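
A minimal download sketch using the Hugging Face CLI. The repository id below is taken from this card's `base_model` metadata (bartowski/open-r1_OlympicCoder-7B-GGUF) and is an assumption; substitute the repository that actually hosts this Q6_K upload if it differs.

```bash
# Sketch: fetch the Q6_K GGUF file with the Hugging Face CLI.
# Repo id comes from the card's base_model field; adjust if this upload lives elsewhere.
pip install -U "huggingface_hub[cli]"
huggingface-cli download bartowski/open-r1_OlympicCoder-7B-GGUF \
  open-r1_OlympicCoder-7B-Q6_K.gguf --local-dir .
```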
## Recommended Use
- ๐งฎ LeetCode / CodeForces-style problem solving
- ๐ป Competitive programming and algorithmic reasoning
- ๐ ๏ธ General-purpose code generation
## How to Use (with llama.cpp)

```bash
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a Python function to solve the 2-sum problem."
```
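
The same invocation with a few commonly used llama.cpp flags spelled out. The values are illustrative rather than tuned, and newer llama.cpp builds name the binary `llama-cli` instead of `main`.

```bash
# Sketch of a fuller run. Flag meanings:
#   -n      maximum number of tokens to generate
#   -c      context window size in tokens
#   --temp  sampling temperature (kept low for more deterministic code)
#   -ngl    number of layers to offload to the GPU (requires a GPU-enabled build)
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf \
  -p "Write a Python function to solve the 2-sum problem." \
  -n 512 -c 4096 --temp 0.2 -ngl 99
```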