
Hatshepsut-Qwen3_QWQ-LCoT-4B

Hatshepsut-Qwen3_QWQ-LCoT-4B is a fine-tuned variant of the Qwen3-4B architecture, trained on QWQ synthetic datasets with support for Least-to-Complexity-of-Thought (LCoT) prompting. The model is optimized for precise mathematical reasoning, logic-driven multi-step solutions, and structured technical output, while remaining compute-efficient and instruction-aligned.

GGUF: https://huggingface.co/prithivMLmods/Hatshepsut-Qwen3_QWQ-LCoT-4B-Q4_K_M-GGUF

Key Features

  1. LCoT Prompting Mastery: Specifically tuned for Least-to-Complexity-of-Thought prompting, encouraging granular reasoning that builds from simple to complex steps during problem solving.

  2. QWQ-Based Precision Reasoning: Built on the QWQ synthetic datasets, ensuring high-fidelity outputs in symbolic logic, algebraic manipulation, and mathematical word problems.

  3. Code Understanding & Logic Generation: Interprets and writes concise, logically sound code snippets in Python, C++, and JavaScript, with a special focus on algorithmic steps and edge-case handling.

  4. Structured Output Control: Produces responses in JSON, Markdown, LaTeX, and table formats, ideal for educational material, notebooks, and structured reasoning chains (see the sketch after this list).

  5. Multilingual Reasoning: Supports over 20 languages, enabling STEM problem solving and translation tasks across global languages.

  6. Efficient 4B Parameter Footprint: Lightweight yet capable, suitable for researchers, educators, and developers running on mid-tier GPUs (e.g., A10, RTX 3090, or L4).
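
As a concrete illustration of the structured-output control in item 4, the sketch below phrases an LCoT-style request that asks for a final machine-readable answer and parses it afterwards. The system-prompt wording, the example question, and the expected response shape are illustrative assumptions, not a documented output contract.

import json

# Hypothetical messages: ask for least-to-complexity reasoning followed by
# one line of JSON holding the final answer.
messages = [
    {
        "role": "system",
        "content": (
            "You are a step-by-step reasoning assistant. Reason from the "
            "simplest observation to the full solution, then end with one "
            'line of JSON of the form {"answer": <value>}.'
        ),
    },
    {"role": "user", "content": "A train covers 180 km in 2.5 hours. What is its average speed in km/h?"},
]

def parse_final_answer(response: str):
    """Return the JSON object on the last JSON-looking line, if any."""
    for line in reversed(response.strip().splitlines()):
        line = line.strip()
        if line.startswith("{") and line.endswith("}"):
            try:
                return json.loads(line)
            except json.JSONDecodeError:
                return None
    return None

# Pass `messages` through the generation pipeline in the Quickstart below,
# then call parse_final_answer(response) on the decoded text.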

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Hatshepsut-Qwen3_QWQ-LCoT-4B"

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve using LCoT: If 3x - 7 = 2(x + 1), what is the value of x?"

messages = [
    {"role": "system", "content": "You are a step-by-step reasoning assistant trained on QWQ datasets with LCoT support."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
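
If you prefer the quantized GGUF build linked above, the following is a minimal sketch using llama-cpp-python; the local file name and context size are assumptions rather than documented defaults.

from llama_cpp import Llama

# Assumed local path to the downloaded Q4_K_M GGUF file.
llm = Llama(
    model_path="Hatshepsut-Qwen3_QWQ-LCoT-4B-Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a step-by-step reasoning assistant trained on QWQ datasets with LCoT support."},
        {"role": "user", "content": "Solve using LCoT: If 3x - 7 = 2(x + 1), what is the value of x?"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])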

Intended Use

  • LCoT-style multi-step problem solving
  • Algebra, geometry, and logic question answering
  • Code generation with algorithmic transparency
  • Educational tools for math and programming
  • Structured technical output in Markdown/LaTeX
  • Multilingual STEM tutoring and reasoning

Limitations

  • May be sensitive to poorly formatted prompts
  • Less creative for open-domain or fictional tasks
  • Smaller context window (compared to 14B+ variants)
  • Early-stage reasoning errors may propagate if not prompted clearly (see the sampling sketch below)
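
Clear, well-structured prompts and conservative sampling settings help keep early reasoning steps on track. The snippet below continues from the Quickstart (model and model_inputs are defined there); the sampling values are illustrative assumptions, not settings documented for this fine-tune.

# Continuing from the Quickstart: a more conservative generation call.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,        # lower temperature keeps early steps more stable
    top_p=0.95,
    repetition_penalty=1.05,
)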

References

  1. QWQ Synthetic Dataset – specialized reasoning corpus (experimental)
  2. LIMO: Less is More for Reasoning
  3. AIMO-2 Math Benchmark – OpenMathReasoning
  4. YaRN: Efficient Context Window Extension of Large Language Models