---
license: apache-2.0
tags:
  - gguf
  - coding
  - quantized
  - Q6_K
  - olympiccoder
  - llama.cpp
  - sychonix
inference: false
base_model: bartowski/open-r1_OlympicCoder-7B-GGUF
model_type: llama
quantization_config:
  quant_method: bitsandbytes
  load_in_4bit: false
  load_in_8bit: false
  weight_dtype: float16
  bnb_4bit_quant_type: nf4
---

# 🧠 OlympicCoder 7B Q6

A Q6_K-quantized GGUF build of OlympicCoder 7B, intended for algorithmic reasoning, coding challenges, and symbolic inference.

...

## 🧩 Model Details

- Model Name: OlympicCoder 7B Q6
- Quantization: Q6_K
- Format: GGUF (see the download sketch below)
- Size: 6.25 GB
- Architecture: LLaMA-style 7B
- Compatibility: llama.cpp, KoboldCpp, LM Studio, text-generation-webui
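
If you work from Python, the GGUF file can be fetched with `huggingface_hub`. This is a minimal sketch, not an official download path: the repo id follows the `base_model` listed in the metadata above and the filename matches the llama.cpp example below, so adjust either one if this repository stores the file under a different name.

```python
# Minimal download sketch; repo id and filename are assumptions, adjust as needed.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="bartowski/open-r1_OlympicCoder-7B-GGUF",  # assumed GGUF source repo
    filename="open-r1_OlympicCoder-7B-Q6_K.gguf",      # assumed Q6_K file name
    local_dir="models",
)
print(model_path)  # path to the downloaded .gguf file
```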

## 🔧 Recommended Use

- 🧮 LeetCode / Codeforces-style problem solving
- 💻 Competitive programming and algorithmic reasoning
- 🛠️ General-purpose code generation

## 🚀 How to Use (with llama.cpp)

```bash
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a Python function to solve the 2-sum problem."
```
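
For scripted use, the same file loads through the `llama-cpp-python` bindings. The snippet below is a hedged sketch of a typical setup rather than part of the upstream card; the model path mirrors the CLI example above, and `n_gpu_layers=-1` offloads every layer to the GPU when one is available. (Recent llama.cpp builds ship the CLI as `llama-cli` instead of `./main`.)

```python
# Hedged example: simple completion via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="open-r1_OlympicCoder-7B-Q6_K.gguf",  # path from the CLI example above
    n_ctx=4096,        # context window; raise for longer problems if memory allows
    n_gpu_layers=-1,   # offload all layers to GPU; set to 0 for CPU-only
)

out = llm(
    "Write a Python function to solve the 2-sum problem.",
    max_tokens=512,
    temperature=0.2,   # low temperature tends to work better for code
)
print(out["choices"][0]["text"])
```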