Recorded Talks: Foundations of AI and Optimization
TILOS Seminar: Learning in the Presence of Distribution Shifts: How does the Geometry of Perturbations Play a Role?
Hamed Hassani, Assistant Professor, University of Pennsylvania
In this talk, we will focus on the emerging field of (adversarially) robust machine learning. The talk will be self-contained; no particular background on robust learning is needed. Recent progress in this field has been accelerated by the observation that, despite unprecedented performance on clean data, modern learning models remain fragile to seemingly innocuous changes such as small, norm-bounded additive perturbations. Moreover, recent work in this field has looked beyond norm-bounded perturbations and has revealed that various other types of distribution shifts in the data can significantly degrade performance. However, our understanding of such shifts is still in its infancy, and several key questions remain unaddressed.
The goal of this talk is to explain why robust learning paradigms have to be designed, and sometimes rethought, based on the geometry of the input perturbations. We will cover a wide range of perturbation geometries, from simple norm-bounded perturbations to sparse, natural, and more general distribution shifts. As we will show, the geometry of the perturbations necessitates fundamental modifications to both the learning procedure and the architecture in order to ensure robustness. In the first part of the talk, we will discuss our recent theoretical results on robust learning with respect to various geometries, along with fundamental tradeoffs between robustness and accuracy, phase transitions, and related phenomena. The remainder of the talk will focus on developing practical robust training algorithms and evaluating the resulting (robust) deep networks against state-of-the-art methods on naturally varying, real-world datasets.
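As a rough illustration of the norm-bounded threat model mentioned above, the sketch below perturbs a linear classifier within an l-infinity ball. It is a minimal, self-contained example: the model, data, and perturbation budget are made up for illustration and are not taken from the talk.

```python
# Minimal sketch (illustrative, not from the talk): norm-bounded adversarial
# perturbation of a linear classifier with logistic loss.
import numpy as np

rng = np.random.default_rng(0)
d, eps = 10, 0.1                      # input dimension, l_inf perturbation budget
w = rng.normal(size=d)                # weights of a (hypothetical) trained linear model
x = rng.normal(size=d)                # a clean input
y = 1.0                               # its label in {-1, +1}

def logistic_loss(x, y, w):
    return np.log1p(np.exp(-y * w.dot(x)))

# For a linear model, the worst-case l_inf-bounded perturbation has a closed form:
# move each coordinate by eps in the direction that increases the loss,
# i.e. delta = -eps * y * sign(w).
delta = -eps * y * np.sign(w)

print("clean loss :", logistic_loss(x, y, w))
print("robust loss:", logistic_loss(x + delta, y, w))  # >= clean loss by construction
```

For a linear model the inner maximization has a closed form as above; for deep networks it is typically approximated by iterative methods such as projected gradient ascent, which is where the choice of perturbation geometry starts to shape both the training algorithm and the architecture.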
TILOS Seminar: The Connections Between Discrete Geometric Mechanics, Information Geometry, Accelerated Optimization and Machine Learning
Melvin Leok, Professor of Mathematics, UC San Diego
Geometric mechanics describes Lagrangian and Hamiltonian mechanics geometrically, and information geometry formulates statistical estimation, inference, and machine learning in terms of geometry. A divergence function is an asymmetric distance between two probability densities that induces differential geometric structures and yields efficient machine learning algorithms that minimize the duality gap. The connection between information geometry and geometric mechanics will yield a unified treatment of machine learning and structure-preserving discretizations. In particular, the divergence function of information geometry can be viewed as a discrete Lagrangian, which is a generating function of the symplectic maps that arise in discrete variational mechanics. This identification allows the methods of backward error analysis to be applied, and the symplectic map generated by a divergence function can be associated with the exact time-h flow map of a Hamiltonian system on the space of probability distributions. We will also discuss how time-adaptive Hamiltonian variational integrators can be used to discretize the Bregman Hamiltonian, whose flow generalizes the differential equation that describes the dynamics of Nesterov's accelerated gradient descent method.
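For orientation, one commonly cited form of the Bregman Lagrangian from the accelerated-optimization literature (Wibisono, Wilson, and Jordan), whose Legendre transform gives a Bregman Hamiltonian of the kind referred to above, is

\[
  \mathcal{L}(x, v, t) \;=\; e^{\alpha_t + \gamma_t}\Big( D_h\big(x + e^{-\alpha_t} v,\; x\big) - e^{\beta_t} f(x) \Big),
  \qquad
  D_h(y, x) \;=\; h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle,
\]

where h is a convex distance-generating function, f is the objective, and D_h is the Bregman divergence. With the Euclidean choice h(x) = \tfrac{1}{2}\lVert x \rVert^2 and suitable scaling functions \alpha_t, \beta_t, \gamma_t, the Euler-Lagrange equations reduce to the continuous-time limit of Nesterov's accelerated gradient method,

\[
  \ddot{x}(t) + \frac{3}{t}\,\dot{x}(t) + \nabla f\big(x(t)\big) = 0.
\]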
Melvin Leok is professor of mathematics and co-director of the CSME graduate program at UC San Diego. His research interests are in computational geometric mechanics, computational geometric control theory, discrete geometry, and structure-preserving numerical schemes, and particularly how these subjects relate to systems with symmetry. He received his Ph.D. in Control and Dynamical Systems in 2004 from the California Institute of Technology under the direction of Jerrold Marsden. He is a three-time NAS Kavli Frontiers of Science Fellow, a Simons Fellow in Mathematics, and has received the DoD Newton Award for Transformative Ideas, the NSF Faculty Early Career Development (CAREER) award, the SciCADE New Talent Prize, the SIAM Student Paper Prize, and the Leslie Fox Prize (second prize) in Numerical Analysis. He has given plenary talks at Foundations of Computational Mathematics, NUMDIFF, and the IFAC Workshop on Lagrangian and Hamiltonian Methods for Nonlinear Control. He serves on the editorial boards of the Journal of Nonlinear Science, the Journal of Geometric Mechanics, and the Journal of Computational Dynamics, and has served on the editorial boards of the SIAM Journal on Control and Optimization and the LMS Journal of Computation and Mathematics.
TILOS Seminar: MCMC vs. Variational Inference for Credible Learning and Decision Making at Scale
Yian Ma, Assistant Professor, UC San Diego
Professor Ma will introduce some recent progress towards understanding the scalability of Markov chain Monte Carlo (MCMC) methods and their comparative advantages relative to variational inference. Further, he will discuss an optimization perspective on the infinite-dimensional probability space, where MCMC leverages stochastic sample paths while variational inference projects the probabilities onto a finite-dimensional parameter space. Three ingredients will be the focus of this discussion: non-convexity, acceleration, and stochasticity. This line of work is motivated by epidemic prediction, where we need uncertainty quantification for credible predictions and informed decision making with complex models and evolving data.
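To make the "stochastic sample paths" and "optimization over probability space" viewpoints concrete, here is a minimal sketch, not taken from the talk, of the unadjusted Langevin algorithm on a toy Gaussian target; the target density, step size, and iteration count are illustrative choices.

```python
# Minimal sketch (illustrative, not from the talk): the unadjusted Langevin
# algorithm (ULA), a basic MCMC method built from a stochastic sample path.
# Target: an unnormalized log-density log p(x) = -f(x); here f is a toy 2-D quadratic.
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    # Gradient of f(x) = 0.5 * x^T A x for a fixed positive-definite A (toy target).
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    return A @ x

step, n_iters = 0.05, 5000
x = np.zeros(2)
samples = []
for _ in range(n_iters):
    # Discretized Langevin diffusion: gradient step on f plus Gaussian noise.
    # In continuous time, the marginal law of this diffusion follows the Wasserstein
    # gradient flow of KL(q || p), which is one way to read MCMC as optimization
    # over the (infinite-dimensional) space of probability distributions.
    x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.normal(size=2)
    samples.append(x.copy())

print("empirical mean      :", np.mean(samples, axis=0))        # ~ [0, 0]
print("empirical covariance:\n", np.cov(np.array(samples).T))   # ~ inv(A), up to discretization bias
```

Variational inference, by contrast, restricts the approximating distribution to a finite-dimensional parametric family and optimizes a divergence over those parameters, which is the projection the abstract alludes to.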
Yian Ma is an assistant professor at the Halıcıoğlu Data Science Institute and an affiliated faculty member in the Department of Computer Science and Engineering at the University of California San Diego. Prior to UC San Diego, he spent a year as a visiting faculty member at Google Research. Before that, he was a post-doctoral fellow in EECS at UC Berkeley. Professor Ma completed his Ph.D. at the University of Washington and obtained his bachelor's degree at Shanghai Jiao Tong University.
His current research primarily revolves around scalable inference methods for credible machine learning. This involves designing Bayesian inference methods to quantify uncertainty in the predictions of complex models; understanding computational and statistical guarantees of inference algorithms; and leveraging these scalable algorithms to learn from time series data and perform sequential decision making tasks.
TILOS Seminar: A Mixture of Past, Present, and Future
Arya Mazumdar, Associate Professor, UC San Diego
Heterogeneity in data poses major challenges in extracting meaningful information, as well as in subsequent decision-making and prediction tasks. It also brings forward some very fundamental theoretical questions in machine learning. For unsupervised learning, a standard technique is the use of mixture models for statistical inference. For supervised learning, however, labels can be generated via a mixture of functional relationships. We will provide a survey of results on parameter learning in mixture models, some unexpected connections with other problems, and some interesting future directions.
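As a concrete instance of labels generated by a mixture of functional relationships, the sketch below runs expectation-maximization on a toy mixture of two linear regressions. The data-generating process, noise level, and equal-prior assumption are illustrative and not from the talk.

```python
# Minimal sketch (illustrative, not from the talk): EM for a mixture of two
# linear regressions, i.e. labels generated by one of two hidden linear models.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: each label comes from one of two hidden regressors plus noise.
n, d, sigma = 2000, 3, 0.1
true_w = np.stack([rng.normal(size=d), rng.normal(size=d)])    # two hidden regressors
X = rng.normal(size=(n, d))
z = rng.integers(0, 2, size=n)                                 # hidden component labels
y = np.einsum('ij,ij->i', X, true_w[z]) + sigma * rng.normal(size=n)

# EM: alternate soft assignments (E-step) and weighted least squares (M-step).
w = rng.normal(size=(2, d))                                    # random initialization
for _ in range(50):
    # E-step: responsibility of each component for each point (equal priors assumed).
    resid = y[:, None] - X @ w.T                               # (n, 2) residuals
    logp = -0.5 * (resid / sigma) ** 2
    logp -= logp.max(axis=1, keepdims=True)                    # numerical stability
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weighted least-squares refit of each component.
    for k in range(2):
        Wk = r[:, k]
        w[k] = np.linalg.solve((X * Wk[:, None]).T @ X, (X * Wk[:, None]).T @ y)

print("recovered (up to label swap):\n", w)
print("ground truth:\n", true_w)
```

The M-step is a weighted least-squares solve per component; in practice one would add multiple random restarts, since EM for mixed regression is sensitive to initialization, and alternating minimization or the method of moments are common alternatives.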