chenwuml committed
Commit 188283a
Parent: ab97606

initial commit

Files changed (1):
README.md +6 -12
README.md CHANGED
@@ -25,18 +25,6 @@ Starting from the [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-

 To assess CodeFu's genuine problem-solving abilities, we used the [USACO benchmark](https://princeton-nlp.github.io/USACOBench/), which consists of 307 high-quality problems from past [USA Computing Olympiad](https://usaco.org/) contests.

- For systematic and robust evaluation:
-
- 1. We used standardized code extraction logic across all model responses. This process identifies solution code by parsing either <code></code> tags or ```cpp code blocks, always selecting the final code block to ensure we capture each model's ultimate solution after any intermediate reasoning steps.
-
- 2. All solutions are executed with **strict time limit enforcement** - any code exceeding the problem's specified time limit is marked as incorrect, ensuring realistic competitive programming conditions.
-
- 3. All open-source models (including CodeFu-7B-v0.1) were tested using [vLLM](https://github.com/vllm-project/vllm) v0.6.3 with identical sampling parameters: a `temperature` of 0.8 and a `top_p` of 0.95. Claude-3.7-Sonnet was evaluated at a `temperature` of 1.0. We set the maximum output length (`max_tokens`) to 28,672 for all models to ensure sufficient length for reasoning and code solutions.
-
- Pass@1 results of GPT-4 and GPT-3.5 are copied from the [USACO 2024 benchmark](https://princeton-nlp.github.io/USACOBench/) as performance baselines.
-
- The table below compares CodeFu's performance to other reasoning/coding models:
-
 | Model | Size | USACO Pass@1 | Notes |
 |-------|------|-------------:|-------|
 | Claude-3.7-Sonnet | UNK | 31.9 | |
@@ -56,6 +44,12 @@ The table below compares CodeFu's performance to other reasoning/coding models:
 - ⚡ **Outperforms 32B base model** (13.7% vs 11.7% Pass@1)
 - 📈 **>10x improvement** over 7B base model (13.7% vs 1%)

+ For systematic and robust evaluation, we used standardized code extraction logic across all model responses. This process identifies solution code by parsing either `<code></code>` tags or ```cpp code blocks, always selecting the final code block to ensure we capture each model's ultimate solution after any intermediate reasoning steps. GPT-3.5 and GPT-4 scores are copied from the [USACO 2024 benchmark](https://princeton-nlp.github.io/USACOBench/) as baselines.
 
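In code, this extraction rule amounts to something like the following sketch; the function name and regexes are illustrative assumptions, not the actual evaluation harness:

```python
import re

def extract_final_solution(response: str) -> str | None:
    """Sketch: take the LAST code block in a model response, whether it
    appears in <code></code> tags or a ```cpp fence, so intermediate
    snippets from the reasoning trace are skipped."""
    blocks = []
    # <code>...</code> style blocks
    for m in re.finditer(r"<code>(.*?)</code>", response, re.DOTALL):
        blocks.append((m.start(), m.group(1)))
    # ```cpp fenced blocks
    for m in re.finditer(r"```cpp\s*(.*?)```", response, re.DOTALL):
        blocks.append((m.start(), m.group(1)))
    if not blocks:
        return None  # no recognizable solution code
    # The block that starts last in the response wins
    return max(blocks, key=lambda b: b[0])[1].strip()
```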
+
+ All extracted code solutions are executed with **strict time limit enforcement**: any code exceeding the problem's specified time limit is marked as incorrect, ensuring realistic competitive programming conditions.
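A minimal sketch of such enforcement, assuming compiled binaries and per-problem limits; the helper name and status labels are our own, not taken from the CodeFu harness:

```python
import subprocess

def judge_run(binary: str, input_path: str, time_limit_s: float):
    """Run one compiled solution on one test case under the problem's
    time limit; exceeding it counts as incorrect."""
    try:
        with open(input_path) as stdin:
            proc = subprocess.run([binary], stdin=stdin, capture_output=True,
                                  text=True, timeout=time_limit_s)
    except subprocess.TimeoutExpired:
        return "TLE", None  # over the limit -> marked incorrect
    if proc.returncode != 0:
        return "RTE", None  # runtime error -> incorrect
    # Caller still compares proc.stdout against the expected output
    return "OK", proc.stdout
```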
+
+ All open-weight models were tested using [vLLM](https://github.com/vllm-project/vllm) v0.6.3 with identical sampling parameters: a `temperature` of 0.8 and a `top_p` of 0.95. Claude-3.7-Sonnet was evaluated at a `temperature` of 1.0. We set the maximum output length (`max_tokens`) to 28,672 for all models to ensure sufficient length for reasoning and code solutions.
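These settings map directly onto vLLM's `SamplingParams`; a runnable sketch, with a placeholder model id and prompt:

```python
from vllm import LLM, SamplingParams

# Sampling parameters as reported for the open-weight models
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=28672)

# Placeholder model id; substitute the checkpoint under evaluation
llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")
outputs = llm.generate(["<USACO problem statement here>"], sampling)
print(outputs[0].outputs[0].text)
```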
+
  ### Result analysis
 
 We provide access to the complete CodeFu-7B-v0.1 evaluation results on the USACO benchmark as a [CSV file](codefu-7b-v0.1_usaco.csv.tgz) containing fields such as `problem_name`, `prompt`, `response`, `response_length`, `solution_code`, `status`, and `score`.
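Loading the archive might look like this (a sketch: we assume the tarball contains a single CSV, and the `score` handling below is an assumption, so inspect the file first):

```python
import csv, io, tarfile

with tarfile.open("codefu-7b-v0.1_usaco.csv.tgz", "r:gz") as tar:
    # Assumption: exactly one CSV member inside the tarball
    member = next(m for m in tar.getmembers() if m.name.endswith(".csv"))
    reader = csv.DictReader(io.TextIOWrapper(tar.extractfile(member), encoding="utf-8"))
    rows = list(reader)

# Assumption: 'score' is numeric and > 0 means at least partial credit
print(len(rows), "problems;", sum(float(r["score"]) > 0 for r in rows), "with score > 0")
```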