---
license: apache-2.0
tags:
- math
- rl
- qwen3
- dapomath17k
library_name: transformers
pipeline_tag: text-generation
language: en
datasets:
- BytedTsinghua-SIA/DAPO-Math-17k
base_model:
- Qwen/Qwen3-8B-Base
---

# On Predictability of Reinforcement Learning Dynamics for Large Language Models


![Overview](overview.png)



This repository provides one of the models used in our paper **"On Predictability of Reinforcement Learning Dynamics for Large Language Models"** for evaluating and predicting reinforcement learning (RL) dynamics in large language models (LLMs).  

Recent advances in LLM reasoning capabilities are largely driven by RL, yet the parameter dynamics during RL training remain poorly understood. Our work identifies two key properties of RL-induced parameter updates: **Rank-1 Dominance**, where the top singular subspace of the parameter update matrix captures nearly all reasoning improvements, and **Rank-1 Linear Dynamics**, where this subspace evolves linearly across training, allowing accurate prediction from early checkpoints. Based on these insights, we propose **AlphaRL**, a plug-in acceleration framework that extrapolates final parameter updates from a short early training window, achieving up to 2.5× speedup while retaining over 96% of reasoning performance.
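As a rough illustration of the Rank-1 Dominance idea (a minimal sketch, not the paper's AlphaRL implementation; the function name and the `alpha` extrapolation factor are placeholders), the top singular direction of a layer's update matrix can be isolated and scaled like this:

```python
import torch

def rank1_extrapolate(w_base: torch.Tensor, w_early: torch.Tensor, alpha: float) -> torch.Tensor:
    """Keep only the top singular direction of the RL-induced update
    and scale it by an extrapolation factor.

    w_base:  a weight matrix before RL training
    w_early: the same matrix at an early RL checkpoint
    alpha:   hypothetical extrapolation factor (AlphaRL estimates the
             final update from the early, roughly linear trend)
    """
    delta = w_early - w_base                     # RL-induced parameter update
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    rank1 = s[0] * torch.outer(u[:, 0], vh[0])   # top singular subspace of the update
    return w_base + alpha * rank1
```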

This model is one of the training checkpoints used in our paper and is provided to support research on evaluating and predicting parameter dynamics during RL training of LLMs. The full codebase is available at: [AlphaRL GitHub](https://github.com/caiyuchen-ustc/Alpha-RL).





## ๐Ÿ”ง Prompt Format (Chat Template)

During inference, each question is formatted as:

`{question} Please reason step by step, and put your final answer within \boxed{}.`

It is then wrapped using the chat template:

```python
prompt = tokenizer.apply_chat_template(
    [{{"content": question_with_instruction, "role": "user"}}],
    tokenize=False,
    add_generation_prompt=True,
)
```
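For Qwen3-style tokenizers this typically renders the turn in ChatML form (wrapped in `<|im_start|>`/`<|im_end|>` markers), and `add_generation_prompt=True` opens the assistant turn so generation continues from there.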

## ๐Ÿงช Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("caiyuchen/DAPO-step-0")
tokenizer = AutoTokenizer.from_pretrained("caiyuchen/DAPO-step-0")

question = "Convert the point $(0,3)$ in rectangular coordinates to polar coordinates. Enter your answer in the form $(r,\theta),$ where $r > 0$ and $0 \le \theta < 2 \pi.$"
question_with_instruction = question + "Please reason step by step, and put your final answer within \boxed{{}}"

# Apply chat template
prompt = tokenizer.apply_chat_template(
    [{{"content": question_with_instruction, "role": "user"}}],
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
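Since the model is prompted to put its final answer inside `\boxed{}`, a small helper can pull it out of the generated text. This is not part of the released code, just a minimal sketch that assumes the answer contains no nested braces:

```python
import re

def extract_boxed(text: str):
    """Return the contents of the last \\boxed{...} in the response,
    or None if no boxed answer is found."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

# Continuing from the example above:
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(extract_boxed(response))  # e.g. "(3, \pi/2)" for the question above
```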

## ๐Ÿ“Ž Reference

If you find this model useful, please consider citing our paper:

๐Ÿ”— **Paper Link**: https://huggingface.co/papers/2510.00553

```bibtex
@misc{cai2025predictabilityreinforcementlearningdynamics,
      title={On Predictability of Reinforcement Learning Dynamics for Large Language Models},
      author={Yuchen Cai and Ding Cao and Xin Xu and Zijun Yao and Yuqing Huang and Zhenyu Tan and Benyi Zhang and Guiquan Liu and Junfeng Fang},
      year={2025},
      eprint={2510.00553},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2510.00553},
}
```