m-serious committed (verified) · Commit ad722df · Parent: 4e0715c

Update README.md

Files changed (1): README.md (+78 -3)
README.md CHANGED (previous contents: only the `license: apache-2.0` front matter)
---
license: apache-2.0
datasets:
- ulab-ai/Time-Bench
base_model:
- Qwen/Qwen2.5-3B-Instruct
tags:
- temporal-reasoning
- reinforcement-learning
- large-language-models
paperswithcode:
  arxiv_id: 2505.13508
model_index:
- name: Time-R1-S1P1
---
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65d188a4aa309d842e438ef1/d6YiWBndm7WzANfl3e1qi.png" alt="Output Examples" width="600">
</center>

<div align="center">
<a href="https://huggingface.co/datasets/ulab-ai/Time-Bench"> 📊 <strong>Dataset</strong></a> | <a href="https://github.com/ulab-uiuc/Time-R1">🚀 <strong>Code</strong></a> | <a href="https://arxiv.org/abs/2505.13508">📖 <strong>Paper</strong></a>
</div>

# Time-R1 Model Series

This collection hosts the official checkpoints for the **Time-R1** model, as described in the paper "Time-R1: Towards Comprehensive Temporal Reasoning in LLMs". Time-R1 is a 3B-parameter Large Language Model trained with a novel three-stage reinforcement learning curriculum to endow it with comprehensive temporal abilities: understanding, prediction, and creative generation.

These models are trained using the [Time-Bench dataset](https://huggingface.co/datasets/ulab-ai/Time-Bench).

## Model Checkpoints

We provide several checkpoints representing different stages of the Time-R1 training process:

### Stage 1: Temporal Comprehension Models

These models are trained to develop foundational temporal understanding.

* **[Time-R1-S1P1](https://huggingface.co/ulab-ai/Time-R1-S1P1):** Checkpoint after Phase 1 of Stage 1 training.
  * *Focus: Foundational logic on easy timestamp-inference tasks.*
* **[Time-R1-S1P2](https://huggingface.co/ulab-ai/Time-R1-S1P2):** Checkpoint after Phase 2 of Stage 1 training.
  * *Focus: Full task exploration on all Stage 1 subtasks with mixed difficulty.*
* **[Time-R1-Theta1](https://huggingface.co/ulab-ai/Time-R1-Theta1):** Checkpoint θ₁, after Phase 3 (full Stage 1 training).
  * *Focus: Refined precision on all Stage 1 subtasks under stricter evaluation.*
* **[Time-R1-Theta1_prime](https://huggingface.co/ulab-ai/Time-R1-Theta1_prime):** Ablation model θ₁', trained for Stage 1 without the dynamic reward design.
  * *Focus: Serves as a baseline to evaluate the efficacy of the dynamic reward curriculum.*

### Stage 2: Future Event Time Prediction Model

This model builds upon Stage 1 capabilities to predict future event timings.

* **[Time-R1-Theta2](https://huggingface.co/ulab-ai/Time-R1-Theta2):** Checkpoint θ₂, after Stage 2 training.
  * *Focus: Predicting the timing of future events occurring after the model's initial knowledge cutoff.*

Please refer to the [main paper](https://arxiv.org/abs/2505.13508) for detailed discussions of the architecture, training methodology, and comprehensive evaluations.

## How to Use

For loading and using these models, please refer to the example scripts and documentation provided in our [GitHub repository](https://github.com/ulab-uiuc/Time-R1).

Typically, you can load the models using the Hugging Face `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example for one of the models (replace with the specific model name)
model_name = "ulab-ai/Time-R1-Theta1"  # Or your specific Hugging Face model path
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Further usage instructions would go here or in the repository
```
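
As a rough sketch of what inference might look like (the prompt wording, generation settings, and dtype/device choices below are illustrative assumptions, not taken from the paper or repository), the standard `transformers` chat workflow applies, since the base model is Qwen2.5-3B-Instruct:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ulab-ai/Time-R1-Theta1"  # any of the checkpoints listed above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # assumption: bf16 is adequate for a 3B model
    device_map="auto",
)

# Illustrative temporal-reasoning prompt; see the repository for the exact
# prompt templates used during training and evaluation.
messages = [
    {"role": "user", "content": "Infer the most likely month and year of the event described below: <event description>"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

This snippet covers plain inference only; the Time-R1 reward design and RL training curriculum are implemented in the GitHub repository.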

## Citations
```bibtex
@article{liu2025time,
  title={Time-R1: Towards Comprehensive Temporal Reasoning in LLMs},
  author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
  journal={arXiv preprint arXiv:2505.13508},
  year={2025}
}
```