Post-training, spanning Supervised Fine-Tuning (SFT), Reinforcement Learning (RL), and beyond, is no longer a final adaptation step. It has become a compute-intensive, first-class phase that increasingly determines the capabilities, safety, and efficiency of large language models. Yet despite its growing importance, post-training at scale remains poorly understood.
Recent frontier models allocate a substantial and rapidly growing fraction of total compute to post-training, while academic efforts are only beginning to develop principled, scalable methodologies. Unlike pre-training, where scaling laws and design trade-offs are well studied, post-training lacks a comparable scientific framework.
The Workshop on Scaling Post-Training for LLMs (SPOT) aims to address this gap by focusing on the foundational principles that make post-training scale, across algorithms, data, architectures, systems, and objectives. Rather than targeting specific applications, SPOT centers on understanding the design choices, bottlenecks, and trade-offs that govern efficiency, stability, and asymptotic performance in large-scale post-training.
Topics of interest include:
SPOT brings together academic and industrial researchers to share experiences, identify open challenges, and move toward a principled science of post-training at scale.
To be announced.
We look forward to hosting an exciting set of invited speakers from diverse research backgrounds!
| Time | Arrangement |
|---|---|
| 08:50 - 09:00 | Introduction & Opening Remarks |
| 09:00 - 09:35 | Invited Talk #1 |
| 09:35 - 10:10 | Invited Talk #2 |
| 10:10 - 10:40 | Oral Presentations (3 presentations, 10 mins each) |
| 10:40 - 11:00 | Coffee Break |
| 11:00 - 11:35 | Invited Talk #3 |
| 11:35 - 12:30 | Poster Session 1 |
| 12:30 - 13:30 | Lunch (provided by ICLR conference) |
| 13:30 - 14:05 | Invited Talk #4 |
| 14:05 - 14:40 | Invited Talk #5 |
| 14:40 - 15:10 | Oral Presentations (3 presentations, 10 mins each) |
| 15:10 - 15:45 | Invited Talk #6 |
| 15:45 - 16:45 | Panel Discussion: Prospects & Pitfalls of Scaling Post-Training of LLMs |
| 16:45 - 17:45 | Poster Session 2 |
| 17:45 - 18:00 | Paper Awards and Closing Remarks |