---
license: apache-2.0
datasets:
- open-r1/codeforces-cots_decontaminated
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
---
# Model Card for NormolLM-Coder-7B
NormolLM-Coder-7B is a medium-sized coding model that achieves strong performance on competitive programming benchmarks such as LiveCodeBench and the International Olympiad in Informatics (IOI).
## Model description
- Model type: A 7B parameter model fine-tuned on a decontaminated version of the Codeforces dataset (open-r1/codeforces-cots_decontaminated).
- Language(s) (NLP): Primarily English
- License: apache-2.0
- Finetuned from model: Qwen/Qwen2.5-Coder-7B-Instruct
## Performance
| Model | LiveCodeBench | IOI |
|---|---|---|
| NormolLM-Coder-7B | 36.4 | 137 |
| GPT-4o | 28.43 | |
| Claude 3.7 Sonnet | 39.18 | |
| QwQ-32B | 60.98 | |
| DeepSeek-R1-Distill-Qwen-32B | 56.58 | |
| DeepSeek-R1-Distill-Qwen-7B | 37.36 | |
| Qwen2.5-Coder-32B | 28.31 | |
| Qwen2.5-Coder-7B | 15.83 | |
## Usage
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers
# pip install accelerate
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="open-r1/NormolLM-coder-7b-v02.12",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Write a python program to calculate the 10th fibonacci number<|im_end|>
# <|im_start|>assistant
# <think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. ...
```
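Note that `generated_text` above contains the prompt followed by the completion. As a minimal sketch (generation settings copied from above, behavior assumes a recent Transformers release), you can also pass the chat messages directly to the pipeline and read back only the assistant's reply:

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="open-r1/NormolLM-coder-7b-v02.12", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]

# When given a list of chat messages, the pipeline applies the chat template itself.
outputs = pipe(messages, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# The returned conversation ends with the newly generated assistant message.
print(outputs[0]["generated_text"][-1]["content"])
```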
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.0e-5
- train_batch_size: 2
- seed: 42
- packing: false
- distributed_type: deepspeed-zero-3
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- min_lr_rate: 0.1
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
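For reference, these settings map roughly onto a TRL `SFTConfig`/`SFTTrainer` run. The sketch below is a hypothetical reconstruction, not the exact training script: the dataset ID comes from the metadata above, while the output directory, dataset split, and DeepSpeed ZeRO-3 launch details are assumptions.

```python
# Hypothetical reconstruction of the SFT run from the hyperparameters above.
# A multi-GPU ZeRO-3 run would typically be started with something like:
#   accelerate launch --config_file <deepspeed_zero3.yaml> sft.py   # config path is a placeholder
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("open-r1/codeforces-cots_decontaminated", split="train")  # split name assumed

config = SFTConfig(
    output_dir="NormolLM-Coder-7B",          # assumed output directory
    learning_rate=4.0e-5,
    per_device_train_batch_size=2,           # "train_batch_size" above
    gradient_accumulation_steps=8,
    num_train_epochs=10.0,
    seed=42,
    packing=False,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr_rate": 0.1},
    warmup_ratio=0.03,
    bf16=True,                               # assumed precision
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Coder-7B-Instruct",  # base model listed above
    args=config,
    train_dataset=dataset,
)
trainer.train()
```

The optimizer is left at the trainer default (AdamW with betas=(0.9, 0.999) and epsilon=1e-8), which matches the values listed above.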