BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for TILOS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260130T110000
DTEND;TZID=America/Los_Angeles:20260130T120000
DTSTAMP:20260404T033018Z
CREATED:20251014T200143Z
LAST-MODIFIED:20260304T210210Z
UID:7663-1769770800-1769774400@tilos.ai
SUMMARY:[CANCELED] Optimization for ML and AI Seminar: Fantastic Pretraining Optimizers and Where to Find Them
DESCRIPTION:Tengyu Ma\, Stanford\nAbstract: AdamW has long been the dominant optimizer in language model pretraining\, despite numerous claims that alternative optimizers offer a 1.4 to 2x speedup. We posit that two methodological shortcomings have obscured fair comparisons and hindered practical adoption: (i) unequal hyperparameter tuning and (ii) limited or misleading evaluation setups. To address these two issues\, we conduct a systematic study of ten deep learning optimizers across four model scales (0.1B-1.2B parameters) and data-to-model ratios (1-8x the Chinchilla optimum). We find that fair and informative comparisons require rigorous hyperparameter tuning and evaluations across a range of model scales and data-to-model ratios\, performed at the end of training. First\, optimal hyperparameters for one optimizer may be suboptimal for another\, making blind hyperparameter transfer unfair. Second\, the actual speedup of many proposed optimizers over well-tuned baselines is lower than claimed and decreases with model size to only 1.1x for 1.2B parameter models. Third\, comparing intermediate checkpoints before reaching the target training budgets can be misleading\, as rankings between two optimizers can flip during training due to learning rate decay. Through our thorough investigation\, we find that all the fastest optimizers\, such as Muon and Soap\, use matrices as preconditioners: they multiply gradients by matrices rather than by entry-wise scalars. However\, the speedup of matrix-based optimizers is inversely proportional to model scale\, decreasing from 1.4x over AdamW for 0.1B parameter models to merely 1.1x for 1.2B parameter models.\n\nTengyu Ma is an assistant professor of computer science at Stanford University. His research interests broadly include topics in machine learning\, algorithms and their theory\, such as deep learning\, (deep) reinforcement learning\, pre-training / foundation models\, robustness\, non-convex optimization\, distributed optimization\, and high-dimensional statistics.\nZoom: https://bit.ly/Opt-AI-ML
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-tengyu-ma-stanford/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/ma-tengyu-e1760473083457.jpg
END:VEVENT
END:VCALENDAR