Recorded Talks: Optimization for ML and AI Seminar Series
Stochastic-Gradient and Diagonal-Scaling Algorithms for Constrained Optimization and Learning
Frank E. Curtis, Lehigh University
I will motivate and provide an overview of recent efforts in my research group on the design and analysis of stochastic-gradient-based algorithms for solving constrained optimization problems. I will focus in particular on our motivation for informed supervised learning, where constraints in the training problem can be used to impose prior knowledge about the properties that a trained prediction model should possess. In addition, I will provide a detailed look at our newest extensions of heavy-ball and Adam schemes from the unconstrained to the equality-constrained setting, for which we have shown state-of-the-art convergence guarantees. I will demonstrate the impressive practical performance of our methods on a few informed supervised learning problems.
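For intuition about the kind of step such methods build on, here is a minimal numerical sketch (not the talk's algorithm): a stochastic-SQP-style iteration for an equality-constrained least-squares problem, using an identity Hessian approximation and a single linear constraint. The toy objective, constraint data, step size, and iteration count are all illustrative assumptions.

```python
import numpy as np

# Toy illustration: stochastic-SQP-style steps for
#   min_x E[F(x, xi)]  subject to  c(x) = 0,
# with f(x) = 0.5 * E||x - xi||^2, xi ~ N(mu, I) (unconstrained minimizer mu),
# and a single hypothetical linear constraint c(x) = a^T x - b.

rng = np.random.default_rng(0)
n = 5
mu = rng.normal(size=n)            # mean of the noise; minimizer of f
a, b = rng.normal(size=n), 1.0     # constraint data: a^T x = b

x = np.zeros(n)
alpha = 0.1                        # fixed step size, kept constant for brevity
for k in range(200):
    xi = mu + rng.normal(size=n)   # one stochastic sample
    g = x - xi                     # stochastic gradient estimate of grad f(x)
    c = a @ x - b                  # constraint violation
    J = a[None, :]                 # constraint Jacobian (1 x n)
    # Newton-KKT system with identity Hessian approximation:
    #   [ I   J^T ] [ d ]     [ -g ]
    #   [ J    0  ] [ y ]  =  [ -c ]
    K = np.block([[np.eye(n), J.T], [J, np.zeros((1, 1))]])
    d = np.linalg.solve(K, np.concatenate([-g, [-c]]))[:n]
    x = x + alpha * d              # each step shrinks infeasibility geometrically

x_star = mu + (b - a @ mu) / (a @ a) * a   # projection of mu onto the constraint
print("constraint violation:", abs(a @ x - b))
print("distance to solution:", np.linalg.norm(x - x_star))  # small noise ball
```

With a constant step size the iterate becomes feasible quickly and then hovers in a noise ball around the constrained minimizer; diminishing step sizes would drive that distance to zero.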
Frank E. Curtis is a Professor in the Department of Industrial and Systems Engineering at Lehigh University, where he has been employed since 2009. He received a bachelor’s degree from the College of William and Mary in 2003 with a double major in Computer Science and Mathematics, received a master’s degree in 2004 and a Ph.D. in 2007 from the Department of Industrial Engineering and Management Sciences at Northwestern University, and spent two years as a Postdoctoral Researcher at the Courant Institute of Mathematical Sciences at New York University from 2007 until 2009. His research focuses on the design, analysis, and implementation of numerical methods for solving large-scale nonlinear optimization problems. He received an Early Career Award from the Advanced Scientific Computing Research (ASCR) program of the U.S. Department of Energy (DoE), and has received funding from various programs of the U.S. National Science Foundation (NSF), including through a TRIPODS Phase I grant awarded to him and his collaborators at Lehigh, Northwestern, and Boston University. He has also received funding from the U.S. Office of Naval Research (ONR) and DoE’s Advanced Research Projects Agency-Energy (ARPA-E). He received, along with Leon Bottou (Meta AI) and Jorge Nocedal (Northwestern), the 2021 SIAM/MOS Lagrange Prize in Continuous Optimization. He was awarded, with James V. Burke (U. of Washington), Adrian Lewis (Cornell), and Michael Overton (NYU), the 2018 INFORMS Computing Society Prize. He and team members Daniel Molzahn (Georgia Tech), Andreas Waechter (Northwestern), Ermin Wei (Northwestern), and Elizabeth Wong (UC San Diego) were awarded second place in the ARPA-E Grid Optimization Competition in 2020. He currently serves as Area Editor for Continuous Optimization for Mathematics of Operations Research and serves as an Associate Editor for Mathematical Programming, SIAM Journal on Optimization, Operations Research, IMA Journal of Numerical Analysis, and Mathematical Programming Computation. He previously served as the Vice Chair for Nonlinear Programming for the INFORMS Optimization Society, and is currently very active in professional societies and groups related to mathematical optimization, including INFORMS, the Mathematical Optimization Society, and the SIAM Activity Group on Optimization.
Training Neural Networks at Any Scale
Volkan Cevher, EPFL
At the heart of deep learning’s transformative impact lies the concept of scale, encompassing both data and computational resources as well as their interaction with neural network architectures. Scale, however, presents critical challenges, such as increased instability during training and prohibitively expensive model-specific tuning. Given the substantial resources required to train such models, formulating high-confidence scaling hypotheses backed by rigorous theoretical research has become paramount.
To bridge theory and practice, the talk explores a key mathematical ingredient of scaling in tandem with scaling theory: the numerical solution algorithms commonly employed in deep learning, spanning domains from vision to language models. We unify these algorithms under a common master template, making their foundational principles transparent. In doing so, we reveal the interplay between adaptation to smoothness structures via online learning and the exploitation of optimization geometry through non-Euclidean norms. Our exposition moves beyond simply building larger models: it emphasizes strategic scaling, offering insights that promise to advance the field while economizing on resources.
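As a concrete, deliberately simplified illustration of what such a master template can look like, the sketch below writes one family of methods as "step along a linear minimization oracle (LMO) over a unit ball in a chosen norm": the l2 ball gives normalized gradient descent, while the l-infinity ball gives sign descent, which is closely related to Adam-style updates. The function names, learning-rate schedule, and toy objective are assumptions for illustration, not the talk's exact formulation.

```python
import numpy as np

def lmo(g, norm="l2"):
    """Direction solving  min_{||u|| <= 1} <g, u>  for the chosen norm."""
    if norm == "l2":
        return -g / (np.linalg.norm(g) + 1e-12)   # normalized gradient descent
    if norm == "linf":
        return -np.sign(g)                        # sign descent (Adam-like)
    raise ValueError(f"unknown norm: {norm}")

def train(grad, x0, steps=100, lr=0.1, decay=0.98, norm="l2"):
    """Master-template loop:  x_{t+1} = x_t + lr_t * LMO(g_t)."""
    x = x0.copy()
    for _ in range(steps):
        x = x + lr * lmo(grad(x), norm)
        lr *= decay                # the schedule is another knob of the template
    return x

# Usage on a toy quadratic f(x) = 0.5 * ||x - 1||^2, with gradient x - 1:
x = train(lambda x: x - np.ones(4), x0=np.zeros(4), norm="linf")
print(x)   # approaches the all-ones minimizer
```

Choosing a spectral-norm ball for matrix-valued parameters fits the same template and yields matrix-aware updates; the geometry of the chosen norm is exactly where the talk's non-Euclidean perspective enters.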
Volkan Cevher received his B.Sc. (valedictorian) in electrical engineering from Bilkent University in Ankara, Turkey, in 1999 and his Ph.D. in electrical and computer engineering from the Georgia Institute of Technology in Atlanta, GA, in 2005. He was a Research Scientist with the University of Maryland, College Park from 2006 to 2007 and with Rice University in Houston, TX, from 2008 to 2009. Currently, he is an Associate Professor at the Swiss Federal Institute of Technology Lausanne (EPFL) and a Faculty Fellow in the Electrical and Computer Engineering Department at Rice University. His research interests include machine learning, signal processing theory, optimization theory and methods, and information theory. Dr. Cevher is an ELLIS fellow and was the recipient of a Google Faculty Research Award in 2018, the IEEE Signal Processing Society Best Paper Award in 2016, a Best Paper Award at CAMSAP in 2015, a Best Paper Award at SPARS in 2009, an ERC Consolidator Grant in 2016, and an ERC Starting Grant in 2011.
High-dimensional Optimization with Applications to Compute-Optimal Neural Scaling Laws
Courtney Paquette, McGill University
Given the massive scale of modern ML models, we now only get a single shot to train them effectively. This restricts our ability to test multiple architectures and hyper-parameter configurations. Instead, we need to understand how these models scale, allowing us to experiment with smaller problems and then apply those insights to larger-scale models. In this talk, I will present a framework for analyzing scaling laws in stochastic learning algorithms using a power-law random features model (PLRF), leveraging high-dimensional probability and random matrix theory. I will then use this scaling law to address the compute-optimal question: how should we choose model size and hyper-parameters to achieve the best possible performance in the most compute-efficient manner? Using the PLRF model, I will also devise a new momentum-based algorithm that provably improves the scaling-law exponent. Finally, I will present numerical experiments on LSTMs that show how this new stochastic algorithm can be applied to real data to improve the compute-optimal exponent.
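To make the setup concrete, here is a hedged sketch of a power-law random features model in the spirit of the abstract: data whose feature scales and target coefficients decay as power laws in the coordinate index, a random embedding into a V-dimensional model, and plain SGD on the resulting least-squares problem. The exponents, dimensions, learning rate, and batch size below are illustrative assumptions; the talk's exact PLRF specification and its momentum variant are in the speaker's work.

```python
import numpy as np

rng = np.random.default_rng(0)
D, V = 400, 50           # latent data dimension D; model size (features) V
alpha, beta = 1.0, 1.5   # assumed power-law exponents (illustrative only)

spec = np.arange(1, D + 1.0) ** -alpha     # feature j has scale j^{-alpha}
b = np.arange(1, D + 1.0) ** -beta         # target coefficients decay as j^{-beta}
W = rng.normal(size=(V, D)) / np.sqrt(D)   # random embedding into V features

def sgd(theta, steps, lr=0.5, batch=8):
    """Plain SGD on 0.5 * E[(theta^T W x - b^T x)^2]; returns per-batch losses."""
    losses = []
    for _ in range(steps):
        x = rng.normal(size=(batch, D)) * spec   # data with power-law covariance
        err = x @ (W.T @ theta) - x @ b          # residuals on the batch
        theta -= lr * (W @ (x.T @ err)) / batch  # stochastic gradient step
        losses.append(0.5 * np.mean(err ** 2))
    return losses

losses = sgd(np.zeros(V), steps=2000)
print(losses[0], losses[-1])   # loss decays roughly like a power law in steps
```

Sweeping the model size V against the number of SGD steps at a fixed compute budget, and tracking where the loss is lowest, is the compute-optimal trade-off the abstract refers to.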
Courtney Paquette is an assistant professor at McGill University in the Mathematics and Statistics department, a CIFAR AI Chair (MILA), and an active member of the Montreal Machine Learning Optimization Group (MTL MLOpt) at MILA. Her research broadly focuses on designing and analyzing algorithms for large-scale optimization problems, motivated by applications in data science, and using techniques that draw from a variety of fields, including probability, complexity theory, and convex and nonsmooth analysis. Dr. Paquette has been a lead organizer of the OPT-ML Workshop at NeurIPS since 2020, and is a lead organizer (and original creator) of the High-dimensional Learning Dynamics (HiLD) Workshop at ICML.


