Hatshepsut-Qwen3_QWQ-LCoT-4B
Hatshepsut-Qwen3_QWQ-LCoT-4B is a fine-tuned variant of Qwen3-4B, trained on QWQ synthetic datasets with support for Least-to-Complexity-of-Thought (LCoT) prompting. The model is optimized for precise mathematical reasoning, logic-driven multi-step solutions, and structured technical output, while remaining compute-efficient and instruction-aligned.
GGUF: https://huggingface.co/prithivMLmods/Hatshepsut-Qwen3_QWQ-LCoT-4B-Q4_K_M-GGUF
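If you prefer the Q4_K_M quant, a minimal local-inference sketch with llama-cpp-python follows. The GGUF filename is an assumption inferred from the repo name, so verify it against the repo's file listing before running.

from llama_cpp import Llama

# Load the quantized model from the Hub.
# NOTE: the filename below is an assumption based on the repo name --
# check the repo's file list for the exact GGUF filename.
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Hatshepsut-Qwen3_QWQ-LCoT-4B-Q4_K_M-GGUF",
    filename="hatshepsut-qwen3_qwq-lcot-4b-q4_k_m.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve using LCoT: If 3x - 7 = 2(x + 1), what is x?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])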
Key Features
- LCoT Prompting Mastery: specifically tuned for Least-to-Complexity-of-Thought prompting, encouraging granular reasoning that builds from simple to complex steps (see the prompt sketch after this list).
- QWQ-Based Precision Reasoning: built on the QWQ synthetic datasets, ensuring high-fidelity outputs in symbolic logic, algebraic manipulation, and mathematical word problems.
- Code Understanding & Logic Generation: interprets and writes concise, logically sound code snippets in Python, C++, and JavaScript, with a focus on algorithmic steps and edge-case handling.
- Structured Output Control: emits responses in JSON, Markdown, LaTeX, and table formats, ideal for educational material, notebooks, and structured reasoning chains.
- Multilingual Reasoning: supports over 20 languages, enabling STEM problem solving and translation tasks across global languages.
- Efficient 4B Parameter Footprint: lightweight yet capable, suitable for researchers, educators, and developers running on mid-tier GPUs (e.g., A10, RTX 3090, or L4).
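To make the LCoT style concrete, here is an illustrative way to phrase the system message so reasoning builds from simple observations to complex steps. The wording is a sketch, not a canonical template shipped with the model.

# Illustrative LCoT-style prompt; the exact wording is an assumption,
# not a fixed template from the model's training setup.
messages = [
    {
        "role": "system",
        "content": (
            "Reason in LCoT style: state the simplest observations first, "
            "build up to the more complex steps, then give the final answer."
        ),
    },
    {"role": "user", "content": "A train travels 180 km in 2.5 hours. What is its average speed?"},
]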
Quickstart with Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Hatshepsut-Qwen3_QWQ-LCoT-4B"

# Load the model and tokenizer; device_map="auto" places weights on the available device(s)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve using LCoT: If 3x - 7 = 2(x + 1), what is the value of x?"
messages = [
    {"role": "system", "content": "You are a step-by-step reasoning assistant trained on QWQ datasets with LCoT support."},
    {"role": "user", "content": prompt}
]

# Render the chat template and tokenize
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens so only the new completion is decoded
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
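Qwen3-family chat templates typically accept an enable_thinking flag in apply_chat_template to toggle the model's <think> reasoning block. Whether this fine-tune's template retains that switch is an assumption, so test it on your install before relying on it.

# Assumption: the chat template inherited Qwen3's `enable_thinking` switch.
# False suppresses the <think>...</think> block for shorter, answer-only
# outputs; omit the argument to keep the template's default behavior.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)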
Intended Use
- LCoT-style multi-step problem solving
- Algebra, geometry, and logic question answering
- Code generation with algorithmic transparency
- Educational tools for math and programming
- Structured technical output in Markdown/LaTeX (see the sketch after this list)
- Multilingual STEM tutoring and reasoning
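For the structured-output use case above, format control happens entirely through the prompt; a minimal sketch follows, where the requested columns and LaTeX requirement are illustrative choices rather than a required schema.

# Steering output format through the prompt alone; the requested table
# columns and LaTeX requirement here are illustrative, not mandatory.
prompt = (
    "Differentiate f(x) = x^3 - 5x. "
    "Return a Markdown table with columns 'Step' and 'Expression', "
    "writing each expression in LaTeX."
)
messages = [{"role": "user", "content": prompt}]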
Limitations
- May be sensitive to poorly formatted prompts
- Less creative for open-domain or fictional tasks
- Smaller context window (compared to 14B+ variants)
- Early-stage reasoning errors may propagate if not prompted clearly
References
- QWQ Synthetic Dataset – specialized reasoning corpus (experimental)
- LIMO: Less is More for Reasoning
- AIMO-2 Math Benchmark – OpenMathReasoning
- YaRN: Efficient Context Window Extension of Large Language Models