---
license: mit
---
We annotate the entire [**Open Reasoner Zero**](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-7B) dataset with a **difficulty score** based on the performance of the [Qwen 2.5-MATH-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) model. This provides an adaptive signal for curriculum construction.

Open Reasoner Zero is a curated dataset of 57,000 reasoning-intensive problems used to train and evaluate reinforcement-learning-based methods for large language models.

## Difficulty Scoring Method

Difficulty scores are estimated using the **Qwen 2.5-MATH-7B** model with the following generation settings:

- `temperature = 0.6`
- `top_p = 0.9`
- `max_tokens = 4096`
- Inference performed using [vLLM](https://github.com/vllm-project/vllm)
- Each problem is attempted **128 times**

The difficulty score $d_i$ for each problem is computed as:

$$
d_i = 100 \times \left(1 - \frac{\#\text{ successes}}{128}\right)
$$

For example, a problem solved in 96 of 128 attempts receives $d_i = 100 \times (1 - 96/128) = 25$.
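
The procedure above can be sketched end to end with vLLM. The snippet below is a minimal illustration under stated assumptions, not the authors' code: the `question`/`answer` field names and the `is_correct` grader are hypothetical, and a real grader would extract and normalize final answers much more carefully.

```python
# Minimal sketch of the difficulty-scoring procedure (assumptions noted above).
from vllm import LLM, SamplingParams

N_ATTEMPTS = 128  # attempts per problem

llm = LLM(model="Qwen/Qwen2.5-Math-7B")
params = SamplingParams(
    temperature=0.6,  # generation settings from the list above
    top_p=0.9,
    max_tokens=4096,
    n=N_ATTEMPTS,     # sample 128 completions per prompt
)

def is_correct(completion: str, reference: str) -> bool:
    """Hypothetical grader: a real one would parse and normalize the final answer."""
    return completion.strip().endswith(reference.strip())

def difficulty_scores(problems: list[dict]) -> list[float]:
    """Compute d_i = 100 * (1 - successes / 128) for each problem."""
    outputs = llm.generate([p["question"] for p in problems], params)
    scores = []
    for problem, request in zip(problems, outputs):
        successes = sum(
            is_correct(attempt.text, problem["answer"])
            for attempt in request.outputs
        )
        scores.append(100 * (1 - successes / N_ATTEMPTS))
    return scores
```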

This approach balances the evaluation signal:

- A **strong model** would trivially solve easy problems, compressing the difficulty scale.
- A **weak model** would fail uniformly, providing poor resolution.
- Qwen 2.5-MATH-7B was selected for its **mid-range capabilities**, offering meaningful gradients across a wide spectrum of problems.

## Difficulty Estimation on Other Datasets

We also apply the same difficulty estimation procedure to the following datasets:

- [Open Reasoner Zero](https://huggingface.co/datasets/lime-nlp/orz_math_difficulty)
- [MATH](https://huggingface.co/datasets/lime-nlp/MATH_difficulty)
- [GSM8K](https://huggingface.co/datasets/lime-nlp/GSM8K_difficulty)
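
Because the annotated files are plain CSV, they load directly with 🤗 Datasets or pandas. Below is a minimal sketch for curriculum use, assuming the split is named `train` and the score lives in a `difficulty` column; neither name is confirmed by this card, so inspect the schema first.

```python
from datasets import load_dataset

# Load the annotated Open Reasoner Zero data (split/column names are assumptions).
ds = load_dataset("lime-nlp/orz_math_difficulty", split="train")
print(ds.column_names)  # verify the actual schema before relying on it

# Easy-to-hard curriculum: sort ascending by score (0 = always solved,
# 100 = never solved by Qwen 2.5-MATH-7B across 128 attempts).
curriculum = ds.sort("difficulty")

# Or keep only mid-range problems, where the difficulty signal is most informative.
mid_range = ds.filter(lambda row: 20 <= row["difficulty"] <= 80)
```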

## 📬 Contact

For questions or feedback, feel free to reach out to [**Taiwei Shi**](https://maksimstw.github.io/) at [[email protected]](mailto:[email protected]).