---
library_name: peft
license: other
base_model: Qwen/Qwen3-14B
datasets:
  - r2e-edits/deepswe-swebv-eval-n16-verifier-v1
tags:
  - llama-factory
  - lora
  - generated_from_trainer
model-index:
  - name: verifier
    results: []
---
# DeepSWE-Verifier

🚀 Democratizing Reinforcement Learning for LLM Agents (RLLM) 🌟

## DeepSWE-Verifier Overview

DeepSWE-Verifier is a "critic model" that aids DeepSWE-Preview, a coding agent, with test-time scaling (TTS). For each SWE-Bench problem, DeepSWE-Preview generates multiple solutions, each yielding a candidate code patch, and DeepSWE-Verifier selects the best patch. Pairing DeepSWE-Preview with DeepSWE-Verifier increases the SWE-Bench-Verified score by +10% (see Figure 1, Execution-Free Verifier).
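Conceptually, the selection step is a best-of-N argmax over verifier scores. The sketch below illustrates this; `score_patch` is a hypothetical scoring function (one way to implement it against a served verifier is shown under Usage), not part of the released code.

```python
from typing import Callable

def choose_best_patch(
    problem: str,
    patches: list[str],
    score_patch: Callable[[str, str], float],
) -> str:
    """Best-of-N selection: return the candidate patch that the
    verifier scores highest for the given SWE-Bench problem."""
    assert patches, "expected at least one candidate patch"
    return max(patches, key=lambda patch: score_patch(problem, patch))
```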

DeepSWE-Verifier is a fine-tuned (SFT) version of Qwen/Qwen3-14B, released as a LoRA adapter.

Discover more about DeepSWE-Preview's development and capabilities in our technical blog post.

Figure 1: SWE-Bench Verified performance under different TTS strategies. With hybrid TTS, DeepSWE-Preview achieves 59%, beating the current SOTA open-weights model (SkyWork + TTS, 47%) by 12%. Using only execution-based or execution-free verifiers is still effective and brings a 10+% performance gain.

## Usage

See our reproduction script for DeepSWE's test-time scaling.

### Serving DeepSWE-Verifier

We suggest serving the verifier with vLLM:

```bash
# Stop any previous server, then start the verifier model
export MAX_CONTEXT_LEN=76800
vllm serve Qwen/Qwen3-14B \
    --max-model-len $MAX_CONTEXT_LEN \
    --hf-overrides '{"max_position_embeddings": '$MAX_CONTEXT_LEN'}' \
    --enable-lora \
    --lora-modules verifier=agentica-org/DeepSWE-Verifier \
    --port 8000 \
    --dtype bfloat16 \
    --max-lora-rank 64 \
    --tensor-parallel-size 8
```
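Once the server is up, the adapter is reachable through vLLM's OpenAI-compatible API under the model name `verifier` (registered via `--lora-modules` above). The sketch below shows one way to implement the `score_patch` function from the earlier snippet against that endpoint; the prompt format here is a placeholder assumption, not the exact verifier prompt (see the reproduction script for that).

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible server; the API key is unused locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def score_patch(problem: str, patch: str) -> float:
    """Score a candidate patch with the served verifier LoRA.

    NOTE: the prompt below is a hypothetical format for illustration only.
    """
    response = client.chat.completions.create(
        model="verifier",  # name registered via --lora-modules
        messages=[{
            "role": "user",
            "content": (
                f"Issue:\n{problem}\n\nCandidate patch:\n{patch}\n\n"
                "Is this patch correct? Answer YES or NO."
            ),
        }],
        max_tokens=1,
        logprobs=True,
        top_logprobs=20,
        # Assumption: disable Qwen3 thinking mode so the first token is the verdict.
        extra_body={"chat_template_kwargs": {"enable_thinking": False}},
    )
    # Use the log-probability assigned to "YES" as the patch score.
    top = response.choices[0].logprobs.content[0].top_logprobs
    return next((t.logprob for t in top if t.token.strip() == "YES"), float("-inf"))
```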

## Training

### Hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 1 (per device)
- eval_batch_size: 8 (per device)
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2.0
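For reference, these map onto Hugging Face `TrainingArguments` roughly as sketched below. This is a minimal sketch; the dataset, LoRA configuration, and multi-GPU launch are omitted (they were handled by LLaMA-Factory), and `bf16=True` is an assumption matching the bfloat16 serving dtype.

```python
from transformers import TrainingArguments

# Batch sizes are per device: effective train batch = 1 x 8 GPUs = 8.
training_args = TrainingArguments(
    output_dir="verifier",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2.0,
    bf16=True,  # assumption, not listed in the card
)
```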

### Framework versions

- PEFT 0.12.0
- Transformers 4.51.3
- PyTorch 2.7.1+cu126
- Datasets 3.1.0
- Tokenizers 0.21.2
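
To use the adapter outside vLLM, it can be loaded directly with PEFT. A minimal sketch, assuming the adapter lives at `agentica-org/DeepSWE-Verifier` (the repository this card belongs to):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Qwen3-14B base model, then attach the verifier LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-14B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "agentica-org/DeepSWE-Verifier")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B")
```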