---
base_model:
- Qwen/Qwen2.5-3B-Instruct
datasets:
- ulab-ai/Time-Bench
license: apache-2.0
tags:
- temporal-reasoning
- reinforcement-learning
- large-language-models
paperswithcode:
  arxiv_id: 2505.13508
pipeline_tag: text-generation
library_name: transformers
---

<center>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/65d188a4aa309d842e438ef1/d6YiWBndm7WzANfl3e1qi.png" alt="Output Examples" width="600">
</center>

<div align="center">
<a href="https://huggingface.co/datasets/ulab-ai/Time-Bench"> 📊 <strong>Dataset</strong></a> | <a href="https://github.com/ulab-uiuc/Time-R1">🚀 <strong>Code</strong></a> | <a href="https://arxiv.org/abs/2505.13508">📖 <strong>Paper</strong></a>
</div>

# Time-R1 Model Series

This collection hosts the official checkpoints for the **Time-R1** model, as described in the paper "Time-R1: Towards Comprehensive Temporal Reasoning in LLMs". Time-R1 is a 3B parameter Large Language Model trained with a novel three-stage reinforcement learning curriculum to endow it with comprehensive temporal abilities: understanding, prediction, and creative generation.

These models are trained using the [Time-Bench dataset](https://huggingface.co/datasets/ulab-ai/Time-Bench).
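
For a quick look at the training data itself, the dataset can be pulled with the `datasets` library. The snippet below is a minimal sketch: the default configuration and the split names are assumptions, so check the Time-Bench dataset card for the exact layout.

```python
from datasets import load_dataset

# Minimal sketch: configuration/split names are assumptions;
# see the Time-Bench dataset card for the actual structure.
time_bench = load_dataset("ulab-ai/Time-Bench")

print(time_bench)                    # lists the available splits
first_split = next(iter(time_bench)) # inspect one example from the first split
print(time_bench[first_split][0])
```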

## Model Checkpoints

We provide several checkpoints representing different stages of the Time-R1 training process:

### Stage 1: Temporal Comprehension Models

These models are trained to develop foundational temporal understanding.

* **[Time-R1-S1P1](https://huggingface.co/ulab-ai/Time-R1-S1P1):** Checkpoint after Phase 1 of Stage 1 training.
   * *Focus: Foundational logic on easy timestamp inference tasks.*
* **[Time-R1-S1P2](https://huggingface.co/ulab-ai/Time-R1-S1P2):** Checkpoint after Phase 2 of Stage 1 training.
   * *Focus: Full task exploration on all Stage 1 subtasks with mixed difficulty.*
* **[Time-R1-Theta1](https://huggingface.co/ulab-ai/Time-R1-Theta1):** Checkpoint θ₁, after Phase 3 (full Stage 1 training).
   * *Focus: Refined precision on all Stage 1 subtasks under stricter evaluation.*
* **[Time-R1-Theta1_prime](https://huggingface.co/ulab-ai/Time-R1-Theta1_prime):** Ablation model θ₁', trained for Stage 1 without the dynamic reward design.
   * *Focus: Serves as a baseline to evaluate the efficacy of the dynamic reward curriculum.*

### Stage 2: Future Event Time Prediction Model

This model builds upon Stage 1 capabilities to predict future event timings.

* **[Time-R1-Theta2](https://huggingface.co/ulab-ai/Time-R1-Theta2):** Checkpoint θ₂, after Stage 2 training.
   * *Focus: Predicting the timing of future events occurring after its initial knowledge cutoff.*

Please refer to the [main paper](https://arxiv.org/abs/2505.13508) for detailed discussions on the architecture, training methodology, and comprehensive evaluations.

## How to Use

For loading and using these models, please refer to the example scripts and documentation provided in our [GitHub repository](https://github.com/ulab-uiuc/Time-R1).

Typically, you can load the models using the Hugging Face `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Example for one of the models (replace with the specific model name)
model_name = "ulab-ai/Time-R1-Theta2" # Or your specific Hugging Face model path
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# See the GitHub repository for full example scripts and documentation
```
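
Once loaded, inference follows the standard chat-template workflow for Qwen2.5-based models. The sketch below is illustrative rather than the officially recommended recipe: the prompt, generation parameters, dtype, and device handling are assumptions you should adapt to your setup, and the repository's inference scripts remain the authoritative reference (e.g., for any reasoning-tag format used during RL training).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ulab-ai/Time-R1-Theta2"  # or another checkpoint listed above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # assumption: adjust to your hardware
    device_map="auto",           # requires `accelerate`
)

# Illustrative temporal question (not taken from the paper's evaluation set).
messages = [
    {"role": "user", "content": "In which month and year did the James Webb Space Telescope launch?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```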

## Citations
```bibtex
@article{liu2025time,
  title={Time-R1: Towards Comprehensive Temporal Reasoning in LLMs},
  author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
  journal={arXiv preprint arXiv:2505.13508},
  year={2025}
}
```