ICLR 2026 Workshop on Scaling Post-training for LLMs

SPOT @ ICLR'26


About the Workshop

Post-training, spanning Supervised Fine-Tuning (SFT), Reinforcement Learning (RL), and beyond, is no longer a final adaptation step. It has become a compute-intensive, first-class phase that increasingly determines the capabilities, safety, and efficiency of large language models. Yet, despite its growing importance, post-training at scale remains poorly understood.

Recent frontier models allocate a substantial and rapidly growing fraction of total compute to post-training, while academic efforts are only beginning to develop principled, scalable methodologies. Unlike pre-training, where scaling laws and design trade-offs are well studied, post-training lacks a comparable scientific framework.

The Workshop on Scaling Post-Training for LLMs (SPOT) aims to address this gap by focusing on the foundational principles that make post-training scale, across algorithms, data, architectures, systems, and objectives. Rather than targeting specific applications, SPOT centers on understanding the design choices, bottlenecks, and trade-offs that govern efficiency, stability, and asymptotic performance in large-scale post-training.

Call for Papers

We are excited to invite submissions to the ICLR 2026 Workshop on Scaling Post-Training for LLMs (SPOT). SPOT aims to bring together academic and industrial researchers to share experiences, identify open challenges, and move toward a principled science of post-training at scale.

Submission Guidelines

Topics of interest include, but are not limited to:

  • Algorithms: SFT and RL formulations, reward modeling, and objectives that enable stable and compute-efficient scaling.
  • Systems: Infrastructure challenges unique to post-training, including rollout latency, training–inference mismatches, distributed execution, and KV-cache management.
  • Architectures: The impact of design choices such as MoE, sparse attention, and model specialization on sample efficiency and generalization.
  • Data: The role of data curation and synthetic data in scaling post-training and shaping generalization.
  • Environments & Safety: Post-training in multimodal, embodied, and real-world settings, with emphasis on evaluation fidelity and safety.
  • Feedback & Verification: Efficient, dense feedback mechanisms as evaluation costs increasingly rival training costs.

Submission Format

  • Submissions must be original, unpublished work, and not under review elsewhere.
  • Long Papers should be at most 8 pages (excluding references and appendices).
  • Short/Tiny Papers should be at most 4 pages (excluding references and appendices).
  • Submissions must be in LaTeX format using the ICLR 2026 conference style.
  • All submissions will be reviewed in a double-blind manner, and authors must anonymize their manuscripts.
  • We encourage at least one author per submission to serve as a reviewer for our workshop.

Accepted Submissions

Accepted papers will be presented as talks or posters during the workshop. The workshop will select the best paper(s) to recognize outstanding contributions in the field.

Non-Proceedings Policy

This workshop does not produce formal proceedings. Accepted submissions will appear on OpenReview, but authors remain free to submit and publish their work elsewhere in the future.

Submission Portal

Submit your paper through the OpenReview Submission Portal.

For inquiries, please contact us at spoticlr@gmail.com.

News
  • January 5, 2026
    Call for Papers is up! The submission deadline is February 5, 2026 (AoE).
  • December 1, 2025
    SPOT is among the 40 accepted ICLR workshops! Gearing up for Rio!
Important Dates
  • Paper Submission Deadline February 05, 2026
  • Author Notification March 01, 2026
  • Camera-Ready Deadline* April 01, 2026
  • Workshop Date April 26/27, 2026
Important Information About Tiny Papers

This year, ICLR is discontinuing the separate “Tiny Papers” track and instead requires each workshop to accept short (4 pages in ICLR format) paper submissions, with an eye towards inclusion. Authors of these papers will be earmarked for potential funding from ICLR. A separate application for Financial Assistance is required to evaluate eligibility. The application for Financial Assistance will open at the beginning of February and close on March 2, 2026. For more details, visit Call For Tiny Papers.

Invited Speakers/Panelists

We are looking forward to hosting an exciting set of invited speakers from diverse research backgrounds!

Nouha Dziri
Research Scientist at AllenAI
Rishabh Agarwal
Founding Member at Periodic Labs
Aakanksha Chowdhery
Adjunct Professor at Stanford University
Researcher at Reflection AI
Sanmi Koyejo
Assistant Professor at Stanford University
Jiantao Jiao
Assistant Professor at UC Berkeley
Director of Research & Distinguished Scientist at NVIDIA
Junyang Lin
Core maintainer of Qwen Team
Sami Jaghouar
Founding Research Engineer at Prime Intellect
Yuandong Tian
Research Scientist Director at Meta FAIR
Cho-Jui Hsieh
Associate Professor at UCLA
Research Scientist at Google

Tentative Workshop Schedule

Time | Arrangement
08:50 - 09:00 Introduction & Opening Remarks
09:00 - 09:35 Invited Talk #1
09:35 - 10:10 Invited Talk #2
10:10 - 10:40 Oral Presentations (3 presentations, 10 mins each)
10:40 - 11:00 Coffee Break
11:00 - 11:35 Invited Talk #3
11:35 - 12:30 Poster Session 1
12:30 - 13:30 Lunch (provided by ICLR conference)
13:30 - 14:05 Invited Talk #4
14:05 - 14:40 Invited Talk #5
14:40 - 15:10 Oral Presentations (3 presentations, 10 mins each)
15:10 - 15:45 Invited Talk #6
15:45 - 16:45 Panel Discussion: Prospects & Pitfalls of Scaling Post-Training of LLMs
16:45 - 17:45 Poster Session 2
17:45 - 18:00 Paper Awards and Closing Remarks

Organizers

Devvrit Khatri
UT Austin
Sewon Min
UC Berkeley
Rishabh Tiwari
UC Berkeley
Nan Rosemary Ke
Google DeepMind
Gagan Jain
Microsoft
Lovish Madaan
Meta, UCL
Kurt Keutzer
UC Berkeley
Prateek Jain
Google DeepMind

Advisors

Ion Stoica
UC Berkeley
Inderjit Dhillon
Google, UT Austin