Ellora: Enhancing LLMs with LoRA - Standardized Recipes for Capability Enhancement
This LoRA adapter adds execution awareness capabilities to Qwen/Qwen3-4B-Thinking-2507. Inspired by Meta's CWM (Code World Model) research, it enables the model to predict and understand program execution step-by-step.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen3-4B-Thinking-2507",
torch_dtype="auto",
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Thinking-2507")
# Load execution world model LoRA
model = PeftModel.from_pretrained(model, "codelion/Qwen3-4B-execution-world-model-lora")
# Analyze code execution
prompt = """Analyze this code and predict its execution trace:
```python
x = 10
y = x * 2
z = x + y
```
Show variable states at each line."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Example output:

<execution_trace>
Line 1: State: {x=10}
Line 2: State: {x=10, y=20}
Line 3: State: {x=10, y=20, z=30}
</execution_trace>
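The <execution_trace> format above is simple to consume programmatically. A minimal sketch of a parser (the `parse_execution_trace` helper is illustrative, not part of this adapter's tooling, and assumes integer-valued variables):

```python
import re

def parse_execution_trace(text):
    """Parse an <execution_trace> block into {line_number: {var: value}}.

    Illustrative helper: assumes the 'Line N: State: {x=1, y=2}' layout
    shown above, with integer variable values.
    """
    match = re.search(r"<execution_trace>(.*?)</execution_trace>", text, re.S)
    if not match:
        return {}
    states = {}
    for line_no, state in re.findall(r"Line (\d+): State: \{([^}]*)\}", match.group(1)):
        states[int(line_no)] = {
            k.strip(): int(v)
            for k, v in (pair.split("=") for pair in state.split(",") if pair.strip())
        }
    return states
```

This makes it straightforward to compare the model's predicted states against a ground-truth trace line by line.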
Training data: codelion/execution-world-model-dataset, built from program execution traces collected with Python's sys.settrace().
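For intuition, here is a minimal sketch of how line-by-line variable states can be collected with sys.settrace(); the `trace_states` helper name is illustrative and not taken from the dataset pipeline:

```python
import sys

def trace_states(code_str):
    """Record the variable state after each line of code_str executes,
    mirroring the execution-trace format targeted by this adapter."""
    snapshots = []

    def tracer(frame, event, arg):
        # Only trace the frame running our compiled snippet
        if frame.f_code.co_filename == "<trace>" and event in ("line", "return"):
            # 'line' fires before a line runs; 'return' captures the final state
            snapshots.append({k: v for k, v in frame.f_locals.items()
                              if not k.startswith("__")})
        return tracer

    sys.settrace(tracer)
    try:
        exec(compile(code_str, "<trace>", "exec"), {})
    finally:
        sys.settrace(None)
    # Drop the pre-line-1 snapshot: entry i is then the state after line i+1
    return snapshots[1:]
```

Running it on the three-line snippet from the usage example yields exactly the states shown in the sample <execution_trace> above.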
Part of the Ellora project - standardized recipes for enhancing LLM capabilities.
Base model: Qwen/Qwen3-4B-Thinking-2507