ICLR 2026 Workshop on Scaling Post-training for LLMs

SPOT @ ICLR'26


About the Workshop

Post-training, spanning Supervised Fine-Tuning (SFT), Reinforcement Learning (RL), and beyond, is no longer a final adaptation step. It has become a compute-intensive, first-class phase that increasingly determines the capabilities, safety, and efficiency of large language models. Yet, despite its growing importance, post-training at scale remains poorly understood.

Recent frontier models allocate a substantial and rapidly growing fraction of total compute to post-training, while academic efforts are only beginning to develop principled, scalable methodologies. Unlike pre-training, where scaling laws and design trade-offs are well studied, post-training lacks a comparable scientific framework.

The Workshop on Scaling Post-Training for LLMs (SPOT) aims to address this gap by focusing on the foundational principles that make post-training scale, across algorithms, data, architectures, systems, and objectives. Rather than targeting specific applications, SPOT centers on understanding the design choices, bottlenecks, and trade-offs that govern efficiency, stability, and asymptotic performance in large-scale post-training.

Topics of interest include:

  • Algorithms: SFT and RL formulations, reward modeling, and objectives that enable stable and compute-efficient scaling.
  • Systems: Infrastructure challenges unique to post-training, including rollout latency, training–inference mismatches, distributed execution, and KV-cache management.
  • Architectures: The impact of design choices such as MoE, sparse attention, and model specialization on sample efficiency and generalization.
  • Data: The role of data curation and synthetic data in scaling post-training and shaping generalization.
  • Environments & Safety: Post-training in multimodal, embodied, and real-world settings, with emphasis on evaluation fidelity and safety.
  • Feedback & Verification: Efficient, dense feedback mechanisms as evaluation costs increasingly rival training costs.

SPOT brings together academic and industrial researchers to share experiences, identify open challenges, and move toward a principled science of post-training at scale.

Call for Papers

To be announced.

News
  • December 1, 2025
    SPOT is among the 40 accepted ICLR workshops! Gearing up for Rio!
  • October 10, 2025
    Workshop proposal submitted to ICLR 2026!

Important Dates
  • Paper Submission Deadline: January 30, 2026
  • Author Notification: February 27, 2026
  • Camera-Ready Deadline: April 1, 2026
  • Workshop Date: To be announced

Invited Speakers/Panelists

We are looking forward to hosting an exciting set of invited speakers from diverse research backgrounds!

Nouha Dziri
Research Scientist at AllenAI
Rishabh Agarwal
Founding Member at Periodic Labs
Aakanksha Chowdhery
Adjunct Professor at Stanford University
Researcher at Reflection AI
Sanmi Koyejo
Assistant Professor at Stanford University
Jiantao Jiao
Assistant Professor at UC Berkeley
Director of Research & Distinguished Scientist at NVIDIA
Junyang Lin
Core Maintainer of the Qwen Team
Sami Jaghouar
Founding Research Engineer at Prime Intellect
Yuandong Tian
Research Scientist Director at Meta FAIR
Cho-Jui Hsieh
Associate Professor at UCLA
Research Scientist at Google

Tentative Workshop Schedule

Time Arrangement
08:50 - 09:00 Introduction & Opening Remarks
09:00 - 09:35 Invited Talk #1
09:35 - 10:10 Invited Talk #2
10:10 - 10:40 Oral Presentations (3 presentations, 10 mins each)
10:40 - 11:00 Coffee Break
11:00 - 11:35 Invited Talk #3
11:35 - 12:30 Poster Session 1
12:30 - 13:30 Lunch (provided by the ICLR conference)
13:30 - 14:05 Invited Talk #4
14:05 - 14:40 Invited Talk #5
14:40 - 15:10 Oral Presentations (3 presentations, 10 mins each)
15:10 - 15:45 Invited Talk #6
15:45 - 16:45 Panel Discussion: Prospects & Pitfalls of Scaling Post-Training of LLMs
16:45 - 17:45 Poster Session 2
17:45 - 18:00 Paper Awards and Closing Remarks

Organizers

Devvrit Khatri
UT Austin
Sewon Min
UC Berkeley
Rishabh Tiwari
UC Berkeley
Nan Rosemary Ke
Google DeepMind
Gagan Jain
Microsoft
Lovish Madaan
Meta, UCL
Kurt Keutzer
UC Berkeley
Prateek Jain
Google DeepMind

Advisors

Ion Stoica
UC Berkeley
Inderjit Dhillon
Google, UT Austin