Update README.md
Browse files
README.md
CHANGED
tags:
- code verification
---

# Model Card for ThinkPRM-1.5B

ThinkPRM-1.5B is a generative Process Reward Model (PRM) based on the R1-Distill-Qwen-1.5B architecture. It is fine-tuned to perform step-by-step verification of reasoning processes (such as mathematical solutions) by generating an explicit verification chain-of-thought (CoT) that labels every step. It is designed to be highly data-efficient, requiring significantly less supervision data than traditional discriminative PRMs while achieving strong performance.

Here's an example of the model output:

### Model Description

ThinkPRM-1.5B provides step-level verification scores by generating natural-language critiques and correctness judgments for each step in a given solution prefix. It leverages the reasoning capabilities of the base Large Reasoning Model (LRM) and enhances them through fine-tuning on a small dataset (1K examples) of synthetically generated verification CoTs. These synthetic CoTs were produced by prompting QwQ-32B-Preview and were filtered against ground-truth step labels from the PRM800K dataset to ensure quality.
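
To make the filtering step concrete, here is a minimal sketch of the acceptance criterion. The helper name and the exact-match rule are assumptions for illustration; only the idea of checking synthetic CoTs against PRM800K's gold step labels comes from the description above.

```python
def keep_synthetic_cot(predicted: list[bool], gold: list[bool]) -> bool:
    """Keep a synthetic verification CoT only if the step labels it
    assigns agree exactly with PRM800K's gold step labels (assumed rule)."""
    return len(predicted) == len(gold) and predicted == gold

# A CoT labeling steps [correct, correct, incorrect] survives only if the
# gold annotations say the same:
assert keep_synthetic_cot([True, True, False], [True, True, False])
assert not keep_synthetic_cot([True, True, True], [True, True, False])
```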

The model uses a standard language modeling objective, which makes it interpretable and lets it scale process-verification compute by generating longer or multiple verification CoTs. It outperformed LLM-as-a-judge and discriminative PRM baselines (based on the same R1-Distill-Qwen-1.5B model but trained on ~100x more labels) on benchmarks including ProcessBench, MATH-500, AIME '24, GPQA-Diamond, and LiveCodeBench.
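
One simple way to spend more verification compute, sketched below: sample several verification CoTs for the same solution and average the step scores they assign. The mean-aggregation rule is an assumption for illustration, not something this card specifies.

```python
from statistics import mean

def aggregate_step_scores(scores_per_cot: list[list[float]]) -> list[float]:
    """Average per-step scores across independently sampled verification CoTs
    (all CoTs are assumed to score the same number of steps)."""
    n_steps = len(scores_per_cot[0])
    return [mean(cot[i] for cot in scores_per_cot) for i in range(n_steps)]

# Three sampled CoTs scoring the same two-step solution:
print(aggregate_step_scores([[0.9, 0.2], [0.8, 0.4], [1.0, 0.3]]))  # [0.9, 0.3]
```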

- **Finetuned from model [optional]:** [R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)

### Model Sources [optional]

### Direct Use

ThinkPRM-1.5B is intended for verifying the correctness of step-by-step reasoning processes. Primary uses include:
- **Scoring Solutions:** Assigning step-level or overall scores to candidate solutions, for ranking in Best-of-N sampling or for guiding tree search in reasoning tasks (see the sketch after this list).
- **Generating Verification Rationales/CoTs:** Producing detailed chain-of-thought verifications that explain *why* a particular step is correct or incorrect, aiding interpretability.
- **Standalone Verification:** Evaluating the correctness of a given problem-solution pair.
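
As a minimal sketch of the Best-of-N use, the snippet below ranks candidate solutions by their weakest verified step. Min-aggregation is one common PRM convention, assumed here for illustration; the sketch also assumes step scores have already been extracted from the verifier's CoTs.

```python
def solution_score(step_scores: list[float]) -> float:
    """Score a whole solution by its weakest step (assumed aggregation)."""
    return min(step_scores)

def best_of_n(candidates: list[str], step_scores: list[list[float]]) -> str:
    """Return the candidate whose weakest verified step is strongest."""
    best = max(range(len(candidates)), key=lambda i: solution_score(step_scores[i]))
    return candidates[best]

# Candidate B has a weak second step, so candidate A wins:
print(best_of_n(["candidate A", "candidate B"], [[0.8, 0.9], [0.95, 0.2]]))
```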

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "launch/ThinkPRM-1.5B"  # Replace with actual model ID on Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, max_model_len=16384)
```
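
Continuing from the snippet above, here is a hypothetical way to run one verification. The prompt wording, step delimiter, and verdict format are assumptions for illustration, not this model's documented interface.

```python
problem = "What is 2 + 3 * 4?"
solution = "Step 1: 3 * 4 = 12.\nStep 2: 2 + 12 = 14. The answer is 14."

# Assumed prompt shape: ask the verifier to critique and label every step.
user_msg = (
    "Verify the following solution step by step, labeling each step as "
    f"correct or incorrect.\n\nProblem: {problem}\n\nSolution:\n{solution}"
)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": user_msg}],
    tokenize=False,
    add_generation_prompt=True,
)

params = SamplingParams(temperature=0.0, max_tokens=4096)
output = llm.generate([prompt], params)[0].outputs[0].text
print(output)  # a verification CoT ending in per-step correctness judgments
```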