Qwentile Λ 2.5 32B Instruct
Qwentile Λ 2.5 32B Instruct is a normalized, denoised Fourier interpolation of the following models:
output_base_model: "maldv/Qwentile2.5-32B-Instruct"
output_dtype: "bfloat16"
finetune_merge:
- { "model": "a-m-team/AM-Thinking-v1", "base": "Qwen/Qwen2.5-32B", "alpha": 0.9 }
- { "model": "nvidia/OpenCodeReasoning-Nemotron-32B", "base": "Qwen/Qwen2.5-32B", "alpha": 0.8, "is_input": true}
- { "model": "maldv/Loqwqtus2.5-32B-Instruct", "base": "Qwen/Qwen2.5-32B", "alpha": 0.9 }
- { "model": "trashpanda-org/QwQ-32B-Snowdrop-v0", "base": "Qwen/Qwen2.5-32B", "alpha": 0.9 }
- { "model": "ArliAI/QwQ-32B-ArliAI-RpR-v3", "base": "Qwen/Qwen2.5-32B", "alpha": 0.8 }
In other words, each of these models gets warped and interpolated in signal space, and the result is then layered back on top of the base model (in this case Qwentile2.5-32B-Instruct), except that the input layer is taken from OpenCodeReasoning-Nemotron (the entry marked is_input: true).
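For intuition only, here is a minimal sketch of what a merge like this might look like for a single weight tensor. This is not the actual merge code: the function name fourier_merge, the keep threshold, the magnitude-based denoising rule, and the normalization by the sum of alphas are all illustrative assumptions about how "normalized denoised Fourier interpolation" could work.

```python
# Illustrative sketch, not the real implementation.
# Assumed recipe: take each finetune's delta from Qwen2.5-32B, move it into
# frequency space with an FFT, zero out the weakest coefficients (denoising),
# blend the spectra with the per-model alphas, and add the result back onto
# the output base (Qwentile2.5-32B-Instruct).
import torch

def fourier_merge(base: torch.Tensor,
                  output_base: torch.Tensor,
                  finetunes: list[tuple[torch.Tensor, float]],
                  keep: float = 0.98) -> torch.Tensor:
    """Blend finetune deltas in frequency space and re-apply them to the output base."""
    spectra = []
    for weights, alpha in finetunes:
        delta = (weights - base).to(torch.float32)
        spec = torch.fft.fft(delta.flatten())              # view the delta as a signal
        mag = spec.abs()
        cutoff = torch.quantile(mag, 1.0 - keep)           # denoise: drop the weakest coefficients
        spec = torch.where(mag >= cutoff, spec, torch.zeros_like(spec))
        spectra.append(alpha * spec)                       # per-model interpolation weight
    total_alpha = sum(alpha for _, alpha in finetunes)
    merged = torch.stack(spectra).sum(dim=0) / total_alpha # normalized weighted blend
    merged_delta = torch.fft.ifft(merged).real.reshape(base.shape)
    return (output_base.to(torch.float32) + merged_delta).to(torch.bfloat16)
```

In the config above, the alpha values would play the role of the per-model weights here, while the is_input flag marks the one model whose input layer is carried over directly instead of being blended.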
What is this?
The latest in my series of Qwen 2.5 merges. Some really good models have been released recently, so I folded them in with Qwentile as the base. It should exhibit superior thinking skills and perhaps even some coding ability. I was satisfied with QReasoner2.5-32B-Instruct for advanced reasoning, but I suspect this will be an improvement.
Citation
If you find our work helpful, feel free to give us a cite.
@misc{qwentile-lambda-2.5-32b-instruct,
    title = {Qwentile Λ 2.5 32B Instruct},
    url = {https://huggingface.co/maldv/QwentileLambda2.5-32B-Instruct},
    author = {Praxis Maldevide},
    month = {May},
    year = {2025}
}