BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//TILOS - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for TILOS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260109T110000
DTEND;TZID=America/Los_Angeles:20260109T120000
DTSTAMP:20260404T032906Z
CREATED:20251014T195932Z
LAST-MODIFIED:20260304T210221Z
UID:7661-1767956400-1767960000@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Randomized linear algebra with subspace injections
DESCRIPTION:Joel Tropp\, Caltech \nAbstract: To achieve the greatest possible speed\, practitioners regularly implement randomized algorithms for low-rank approximation and least-squares regression with structured dimension reduction maps. This talk outlines a new perspective on structured dimension reduction\, based on the injectivity properties of the dimension reduction map. This approach provides sharper bounds for sparse dimension reduction maps\, and it leads to exponential improvements for tensor-product dimension reduction. Empirical evidence confirms that these types of structured random matrices offer exemplary performance for a range of synthetic problems and contemporary scientific applications. \nJoint work with Chris Camaño\, Ethan Epperly\, and Raphael Meyer; available at arXiv:2508.21189. \n\nJoel A. Tropp is Steele Family Professor of Applied & Computational Mathematics at the California Institute of Technology. His research centers on applied mathematics\, machine learning\, data science\, numerical algorithms\, and random matrix theory. Some of his best-known contributions include matching pursuit algorithms\, randomized SVD algorithms\, matrix concentration inequalities\, and statistical phase transitions. Prof. Tropp received his Ph.D. in Computational and Applied Mathematics from the University of Texas at Austin in 2004\, and he joined Caltech in 2007. He received a PECASE in 2008\, and he was recognized as a Highly Cited Researcher in Computer Science each year from 2014 to 2018. He is a co-founder of the SIAM Journal on Mathematics of Data Science (SIMODS)\, and he was co-chair of the inaugural 2020 SIAM Conference on the Mathematics of Data Science. Prof. Tropp was elected a SIAM Fellow in 2019\, an IEEE Fellow in 2020\, and an IMS Fellow in 2024. He received the 2025 Richard P. Feynman Prize for Excellence in Teaching at Caltech. He is an invited speaker at the 2026 International Congress of Mathematicians (ICM).
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-joel-tropp-caltech/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/tropp-joel-e1760471957302.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260128T110000
DTEND;TZID=America/Los_Angeles:20260128T120000
DTSTAMP:20260404T032906Z
CREATED:20251031T211533Z
LAST-MODIFIED:20260227T213734Z
UID:7725-1769598000-1769601600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Safety\, Representations\, and Generative Learning in Dynamical Systems
DESCRIPTION:Koushil Sreenath\, UC Berkeley \nAbstract: This talk explores the interplay between model-based guarantees and learning-based flexibility in the control of dynamical systems. I begin with safety-critical control using control barrier functions (CBFs)\, highlighting that while CBFs enforce state constraints\, they may induce unstable internal dynamics. I introduce conditions under which CBF-based safety filters ensure boundedness of the full system state. I then transition to learning representations of hybrid dynamical systems. I present a framework that learns continuous neural representations by exploiting the geometric structure induced by guards and resets\, enabling accurate flow prediction without explicit mode switching. Finally\, I discuss generative learning approaches for control\, emphasizing guided diffusion models that jointly represent states and actions. Through applications to agile humanoid locomotion\, motion synthesis\, and dynamic manipulation\, I demonstrate how generative models can produce versatile\, long-horizon behaviors while respecting physical constraints. Together\, these results highlight how structure\, geometry\, and learning can bridge safety guarantees and expressive control in complex dynamical systems. \n\nKoushil Sreenath is an Associate Professor of Mechanical Engineering at UC Berkeley. He received a Ph.D. degree in Electrical Engineering and Computer Science and an M.S. degree in Applied Mathematics from the University of Michigan at Ann Arbor\, MI\, in 2011. He was a Postdoctoral Scholar at the GRASP Lab at the University of Pennsylvania from 2011 to 2013 and an Assistant Professor at Carnegie Mellon University from 2013 to 2017. His research interest lies at the intersection of highly dynamic robotics and applied nonlinear control. His work on dynamic legged locomotion was featured on The Discovery Channel\, CNN\, ESPN\, FOX\, and CBS. His work on dynamic aerial manipulation was featured in IEEE Spectrum\, New Scientist\, and the Huffington Post. His work on adaptive sampling with mobile sensor networks was published as a book. He received the NSF CAREER Award\, a Hellman Fellowship\, a Google Faculty Research Award in Robotics\, and Best Paper Awards at Learning for Dynamics and Control (L4DC) and Robotics: Science and Systems (RSS).
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-koushil-sreenath-uc-berkeley/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/sreenath-koushil-1-e1769450413875.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260130T110000
DTEND;TZID=America/Los_Angeles:20260130T120000
DTSTAMP:20260404T032906Z
CREATED:20251014T200143Z
LAST-MODIFIED:20260304T210210Z
UID:7663-1769770800-1769774400@tilos.ai
SUMMARY:[CANCELED] Optimization for ML and AI Seminar: Fantastic Pretraining Optimizers and Where to Find Them
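STATUS:CANCELLED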
DESCRIPTION:Tengyu Ma\, Stanford \nAbstract: AdamW has long been the dominant optimizer in language model pretraining\, despite numerous claims that alternative optimizers offer a 1.4 to 2x speedup. We posit that two methodological shortcomings have obscured fair comparisons and hindered practical adoption: (i) unequal hyperparameter tuning and (ii) limited or misleading evaluation setups. To address these two issues\, we conduct a systematic study of ten deep learning optimizers across four model scales (0.1B-1.2B parameters) and data-to-model ratios (1-8x the Chinchilla optimum). We find that fair and informative comparisons require rigorous hyperparameter tuning and evaluations across a range of model scales and data-to-model ratios\, performed at the end of training. First\, optimal hyperparameters for one optimizer may be suboptimal for another\, making blind hyperparameter transfer unfair. Second\, the actual speedup of many proposed optimizers over well-tuned baselines is lower than claimed and decreases with model size\, falling to only 1.1x for 1.2B parameter models. Third\, comparing intermediate checkpoints before reaching the target training budget can be misleading\, as rankings between two optimizers can flip during training due to learning rate decay. Through our thorough investigation\, we find that all of the fastest optimizers\, such as Muon and Soap\, use matrices as preconditioners: they multiply gradients by matrices rather than by entry-wise scalars. However\, the speedup of matrix-based optimizers is inversely proportional to model scale\, decreasing from 1.4x over AdamW for 0.1B parameter models to merely 1.1x for 1.2B parameter models. \n\nTengyu Ma is an assistant professor of computer science at Stanford University. His research interests broadly include topics in machine learning\, algorithms\, and their theory\, such as deep learning\, (deep) reinforcement learning\, pre-training / foundation models\, robustness\, non-convex optimization\, distributed optimization\, and high-dimensional statistics. \nZoom: https://bit.ly/Opt-AI-ML
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-tengyu-ma-stanford/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/ma-tengyu-e1760473083457.jpg
END:VEVENT
END:VCALENDAR