Qwen2.5-7B-Instruct


Latent-Space Verification: Self-Correcting Implementation

This special version of Qwen2.5-7B-Instruct incorporates Latent-Space Verification based on the approach described in "Latent-Space Verification for Self-Correcting LLMs" (Warren, 2025). The verification mechanism embeds lightweight adapters (LoRA-style) into the hidden layers of the transformer to detect and correct factual inaccuracies before they emerge in the output.

Key Highlights

  • Minimal Parameter Overhead: Less than 0.1% additional parameters (about 6.3M for a 7.6B model).
  • Inside-the-Model Verification: The approach intercepts hidden states to detect and correct factual errors (see the adapter sketch after this list).
  • Improved Accuracy: Achieves up to ~10% absolute gains in factual consistency on certain benchmarks.
  • Architecture-Agnostic: Verification adapters can be placed in various model families with minimal changes.
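
To make the "Inside-the-Model Verification" idea concrete, below is a minimal sketch of what a LoRA-style bottleneck adapter acting on hidden states could look like. The class name, default dimensions, and gating scheme are illustrative assumptions, not the exact modules used in the latent_verification package.

import torch
import torch.nn as nn

class VerificationAdapter(nn.Module):
    """Illustrative bottleneck adapter that nudges hidden states toward corrections."""
    def __init__(self, hidden_size: int = 3584, bottleneck_size: int = 64):
        super().__init__()
        # Down-project, non-linearity, up-project: a few hundred thousand parameters per adapter
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()
        # Learnable gate controlling how strongly the correction is applied
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        correction = self.up(self.act(self.down(hidden_states)))
        # Residual update: hidden states pass through unchanged while the gate stays near zero
        return hidden_states + torch.sigmoid(self.gate) * correction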

Example Usage with Verification

import torch
from latent_verification import load_verification_model
from transformers import AutoTokenizer

# Load the verification-enhanced version
verified_model_name = "YourCustomOrg/Qwen2.5-7B-Instruct-Verification"
model = load_verification_model(verified_model_name)
tokenizer = AutoTokenizer.from_pretrained(verified_model_name)

prompt = "The capital of France is Marseilles, correct?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In this snippet:

  1. load_verification_model ensures that the verification adapters are initialized and active.
  2. Any forward pass automatically applies latent-space checks, modifying hidden states to reduce factual inconsistencies (a rough sketch of how this interception could be wired up follows below).
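
Illustratively, one way adapters like the VerificationAdapter sketch above could be attached so that every forward pass runs the latent-space check is via forward hooks on selected decoder layers. This is a sketch only; the attach_adapter helper is hypothetical, and the real latent_verification package may register its adapters differently.

def attach_adapter(decoder_layer, adapter):
    """Wrap a decoder layer so its hidden-state output is passed through the adapter."""
    def hook(module, inputs, output):
        # Qwen2 decoder layers return a tuple whose first element is the hidden states
        hidden_states = output[0]
        return (adapter(hidden_states),) + tuple(output[1:])
    return decoder_layer.register_forward_hook(hook)

# Example: place adapters on a few layers (indices are illustrative); the adapter
# must live on the same device and dtype as the model.
# handles = [attach_adapter(model.model.layers[i], VerificationAdapter()) for i in (2, 5, 8)]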

How to Add Verification to Your Own Model

If you have an instruct model and wish to apply latent-space verification:

from transformers import AutoModelForCausalLM
from latent_verification import create_verification_model

base_model_name = "Qwen/Qwen2.5-7B-Instruct"
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Add latent verification adapters
verified_model = create_verification_model(
    base_model=base_model,
    adapter_locations=[2, 5, 8, 11, 14, 17, 20, 27],  # Example: places adapters in these layers
    bottleneck_size=64,
    enable_cross_layer=True
)

# Fine-tune only the verification parameters (base model remains frozen)
# (See the repository's training scripts for full details)
verified_model.save_pretrained("YourCustomOrg/Qwen2.5-7B-Instruct-Verification")

You can then upload your verification-enhanced model to Hugging Face, or load it locally with load_verification_model.
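
The repository's training scripts cover the fine-tuning step in full; the snippet below is only a rough illustration of what "freezing the base model" means in practice. The parameter-name filter ("verification" / "adapter") is an assumption about how the adapter weights are named.

import torch

# Train only the verification parameters; everything else stays frozen
for name, param in verified_model.named_parameters():
    param.requires_grad = any(key in name for key in ("verification", "adapter"))

trainable = [p for p in verified_model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
# ...then run a standard fine-tuning loop over your verification/calibration data...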

For a complete overview of the method, including ablations, analysis, and advanced usage, see the research paper and implementation repository.


Evaluation & Performance

Detailed evaluation results are in the Latent-Space Verification paper. For GPU memory and throughput benchmarks, see here.

The verification mechanism can improve factual reliability by ~10% in many tasks while preserving or even enhancing the base model’s fluency. In practice, the overall GPU footprint remains almost identical, with a small overhead for verification steps.
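
If you want to sanity-check the parameter overhead yourself, a simple count over the loaded model is enough. As above, the name filter is an assumption about how the adapter weights are named; it reuses the model loaded in the verification example earlier.

# Count how many parameters the verification adapters add on top of the base model
total = sum(p.numel() for p in model.parameters())
extra = sum(
    p.numel() for name, p in model.named_parameters()
    if "verification" in name or "adapter" in name  # naming convention is assumed
)
print(f"adapter parameters: {extra:,} ({100 * extra / total:.3f}% of {total:,})")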


Original Qwen2.5-7B-Instruct Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

  • Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
  • Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.
  • Long-context support up to 128K tokens, with generation of up to 8K tokens.
  • Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

This repo contains the instruction-tuned 7B Qwen2.5 model, which has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
  • Number of Parameters: 7.61B
  • Number of Parameters (Non-Embedding): 6.53B
  • Number of Layers: 28
  • Number of Attention Heads (GQA): 28 for Q and 4 for KV
  • Context Length: Full 131,072 tokens and generation up to 8192 tokens
    • Please refer to the Processing Long Texts section below for details on handling long texts.

For more information, please see our blog, GitHub, and Documentation.


Requirements

The code for Qwen2.5 is included in the latest Hugging Face transformers; we advise using the latest version.

With transformers<4.37.0, you will encounter the following error:

KeyError: 'qwen2'
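
A quick way to confirm your environment is new enough before loading the model (a small sketch; `packaging` ships as a transformers dependency):

import transformers
from packaging import version

# Qwen2 support was added in transformers 4.37.0
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; run `pip install -U transformers`"
)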

Quickstart

Below is a minimal code snippet using the standard Qwen2.5 pipeline:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

Processing Long Texts

The current config.json is set for a context length up to 32,768 tokens. To handle inputs exceeding 32,768 tokens, we use YaRN, a method for length extrapolation that preserves strong performance on long texts.

For supported frameworks, you can add this snippet to config.json to enable YaRN:

{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
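
If you prefer not to edit config.json on disk, the same override can be applied at load time through the transformers config object. This is a minimal sketch mirroring the snippet above:

from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
# Enable YaRN scaling, mirroring the config.json snippet above
config.rope_scaling = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    config=config,
    torch_dtype="auto",
    device_map="auto",
)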

When deploying, we recommend vLLM. Please refer to our docs for usage details. Note that vLLM currently only supports static rope_scaling, which may affect performance on shorter texts if you enable a very large scaling factor.


Citation

If you find our work helpful, feel free to cite Qwen2.5 and Latent-Space Verification together.

Qwen2.5:

@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and et al.},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}

Latent-Space Verification:

@misc{warren2025latent,
    title={Latent-Space Verification for Self-Correcting LLMs},
    author={Warren, Jacob},
    year={2025},
    publisher={GitHub},
    journal={GitHub repository},
    howpublished={\url{https://github.com/jacobwarren/Latent-Space-Verification-for-Self-Correcting-LLMs}}
}
