Recorded Talks: TILOS Seminar Series
TILOS Seminar: On Policy Optimization Methods for Control
Maryam Fazel, Professor, University of Washington
Policy optimization methods enjoy wide practical use in reinforcement learning (RL) for applications ranging from robotic manipulation to game-playing, partly because they are easy to implement and allow for richly parameterized policies. Yet their theoretical properties, from optimality to statistical complexity, are still not fully understood. To help develop a theoretical basis for these methods, and to bridge the gap between RL and control-theoretic approaches, recent work has studied whether gradient-based policy optimization can succeed in designing feedback control policies.
In this talk, we start by showing the convergence and optimality of these methods for linear dynamical systems with quadratic costs, where, despite nonconvexity, convergence to the optimal policy occurs under mild assumptions. Next, we draw a connection between convex parameterizations in control theory on the one hand and the Polyak-Lojasiewicz property of the nonconvex cost function on the other. This connection between the nonconvex and convex landscapes provides a unified view towards extending the results to more complex control problems.
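As a concrete illustration of the setting, the sketch below (not the speaker's code) runs plain policy gradient on a small discrete-time LQR instance, evaluating the exact cost through Lyapunov equations and using the policy-gradient expression studied in this line of work; the system matrices, step size, and iteration counts are illustrative choices.

```python
# Minimal policy-gradient sketch for discrete-time LQR (illustrative, not the talk's code).
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.9]])   # open-loop dynamics (stable, so K = 0 is stabilizing)
B = np.array([[0.0], [1.0]])             # input matrix
Q, R = np.eye(2), np.eye(1)              # quadratic cost weights
Sigma0 = np.eye(2)                       # covariance of the initial state

def dlyap(M, W, iters=600):
    """Solve X = M X M^T + W by fixed-point iteration (assumes M is stable)."""
    X = W.copy()
    for _ in range(iters):
        X = M @ X @ M.T + W
    return X

def cost_and_grad(K):
    Acl = A - B @ K
    P = dlyap(Acl.T, Q + K.T @ R @ K)         # value matrix P_K
    Sigma = dlyap(Acl, Sigma0)                # state correlation matrix Sigma_K
    E = (R + B.T @ P @ B) @ K - B.T @ P @ A   # policy-gradient direction E_K
    return np.trace(P @ Sigma0), 2 * E @ Sigma

K = np.zeros((1, 2))                          # a stabilizing initial gain
for _ in range(1000):
    J, G = cost_and_grad(K)
    K -= 5e-4 * G                             # small step keeps the iterates stabilizing
print("LQR cost after policy gradient:", J)
```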
Maryam Fazel is the Moorthy Family Professor of Electrical and Computer Engineering at the University of Washington, with adjunct appointments in Computer Science and Engineering, Mathematics, and Statistics. Maryam received her MS and PhD from Stanford University and her BS from Sharif University of Technology in Iran, and was a postdoctoral scholar at Caltech before joining UW. She is a recipient of the NSF CAREER Award, the UWEE Outstanding Teaching Award, and a UAI conference Best Student Paper Award with her student. She directs the Institute for Foundations of Data Science (IFDS), a multi-site NSF TRIPODS Institute. Her current research interests are in the area of optimization in machine learning and control.
TILOS Seminar: Non-convex Optimization for Linear Quadratic Gaussian (LQG) Control
Yang Zheng, Assistant Professor, UC San Diego
Recent studies have started to apply machine learning techniques to the control of unknown dynamical systems and have achieved impressive empirical results. However, the convergence behavior, statistical properties, and robustness of these approaches are often poorly understood due to the non-convex nature of the underlying control problems. In this talk, we revisit Linear Quadratic Gaussian (LQG) control and present recent progress towards its landscape analysis from a non-convex optimization perspective. We view the LQG cost as a function of the controller parameters and study its analytical and geometrical properties. Due to the inherent symmetry induced by similarity transformations, the LQG landscape is very rich yet complicated. We show that 1) the set of stabilizing controllers has at most two path-connected components, and 2) despite the nonconvexity, all minimal stationary points (controllable and observable controllers) are globally optimal. Based on this special non-convex optimization landscape, we further introduce a novel perturbed policy gradient (PGD) method to escape a large class of suboptimal stationary points (including high-order saddles). These results shed some light on the performance analysis of direct policy gradient methods for solving the LQG problem. The talk is based on our recent papers: https://arxiv.org/abs/2102.04393 and https://arxiv.org/abs/2204.00912.
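To make the "escaping suboptimal stationary points" ingredient concrete, here is a deliberately simplified sketch of perturbed gradient descent on a toy function with a strict saddle. It is not the paper's perturbed policy gradient method for LQG, which adds careful conditions on when to perturb and how to certify escape; the test function and all constants are illustrative.

```python
# Generic perturbed gradient descent: inject a random kick when the gradient is tiny.
import numpy as np

def perturbed_gd(grad_f, x0, step=0.05, radius=0.1, g_tol=1e-4, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        if np.linalg.norm(g) < g_tol:                     # possibly stuck at a saddle
            x = x + radius * rng.standard_normal(x.shape) # random escape kick
        else:
            x = x - step * g                              # ordinary gradient step
    return x

# f(x, y) = x^2 - y^2 + y^4 has a strict saddle at the origin and minima at y = +/- 1/sqrt(2).
grad = lambda v: np.array([2.0 * v[0], -2.0 * v[1] + 4.0 * v[1] ** 3])
print(perturbed_gd(grad, [0.0, 0.0]))                     # moves toward (0, +/- 0.707...)
```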
Yang Zheng is an assistant professor in the ECE department at UC San Diego. He received the DPhil (Ph.D.) degree in Engineering Science from the University of Oxford in 2019, and the B.E. and M.S. degrees from Tsinghua University in 2013 and 2015, respectively. From February 2019 to August 2020, he was a postdoctoral researcher at Harvard University. He was a research associate at Imperial College London in 2021.
Dr. Zheng’s research interests include learning, optimization, and control of network systems, and their applications to cyber-physical systems, autonomous vehicles, and traffic systems. His work has been recognized by several awards, including the 2019 European Ph.D. Award on Control for Complex and Heterogeneous Systems, the Best Student Paper Award Finalist at the 2019 European Control Conference, the Best Student Paper Award at the 17th IEEE International Conference on Intelligent Transportation Systems, and the Best Paper Award at the 14th Intelligent Transportation Systems Asia-Pacific Forum. He also received the National Scholarship and Outstanding Graduate honor at Tsinghua University, the Clarendon Scholarship at the University of Oxford, and the Chinese Government Award for Outstanding Self-financed Students Abroad.
TILOS Seminar: Machine Learning for Design Methodology and EDA Optimization
Haoxing Ren, NVIDIA
In this talk, I will first illustrate how ML helps improve design quality as well as design productivity from a design methodology perspective, with examples in digital and analog designs. Then I will discuss the potential of applying ML to solve challenging EDA optimization problems, focusing on three promising ML techniques: reinforcement learning (RL), physics-based modeling, and self-supervised learning (SSL). RL learns to optimize the problem by converting the EDA problem objectives into environment rewards; it can be applied either to solve the EDA problem directly or as part of a conventional EDA algorithm. Physics-based modeling enables more accurate and transferable learning for EDA problems. SSL learns the manifold of optimized EDA solution data; conditioned on the problem input, it can directly produce the solution. I will illustrate the applications of these techniques in standard cell layout, computational lithography, and gate sizing problems. Finally, I will outline three main approaches to integrating ML with conventional EDA algorithms, and the importance of adopting GPU computing for EDA.
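As a toy illustration of casting an EDA objective as an RL reward (not NVIDIA's tooling), the sketch below uses REINFORCE to pick discrete drive strengths for a chain of gates, with a simplified delay-plus-power objective serving as the negative reward; the delay model, size choices, and hyperparameters are all invented for illustration.

```python
# Toy "gate sizing as RL" sketch: one categorical policy per gate, REINFORCE updates.
import numpy as np

rng = np.random.default_rng(0)
n_gates = 8
sizes = np.array([1.0, 2.0, 4.0])                 # discrete drive strengths
lam, wire_cap = 0.05, 1.0                         # power weight, per-stage wire cap
logits = np.zeros((n_gates, len(sizes)))          # policy parameters

def objective(s):
    load = np.append(s[1:], 1.0) + wire_cap       # each gate drives the next gate plus a wire
    delay = np.sum(load / s)                      # toy linear delay model
    power = np.sum(s)
    return delay + lam * power                    # EDA objective; reward = -objective

baseline = None
for _ in range(3000):
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    acts = np.array([rng.choice(len(sizes), p=pi) for pi in p])
    reward = -objective(sizes[acts])
    baseline = reward if baseline is None else 0.95 * baseline + 0.05 * reward
    grad = -p
    grad[np.arange(n_gates), acts] += 1.0         # d log(policy) / d logits
    logits += 0.05 * (reward - baseline) * grad   # REINFORCE update with a running baseline

best = sizes[np.argmax(logits, axis=1)]
print("chosen sizes:", best, "objective:", round(objective(best), 3))
```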
Haoxing Ren (Mark) leads the Design Automation research group at NVIDIA Research. His research interests are machine learning applications in design automation and GPU-accelerated EDA. Before joining NVIDIA in 2016, he spent 15 years at IBM Microelectronics and IBM Research working on physical design and logic synthesis tools and methodologies for IBM microprocessor and ASIC designs. He received several IBM technical achievement awards, including the IBM Corporate Award for his work on improving microprocessor design productivity. He has published many papers in the field of design automation, including several book chapters on logic synthesis and physical design. He also received best paper awards at ISPD 2013, DAC 2019, and TCAD 2021. He earned his PhD in Computer Engineering from the University of Texas at Austin in 2006.
TILOS Seminar: How to use Machine Learning for Combinatorial Optimization
Sherief Reda, Professor, Brown University and Principal Research Scientist at Amazon
Combinatorial optimization methods are routinely used in many scientific fields to identify optimal solutions among a large but finite set of possible solutions for problems of interest. Given the recent success of machine learning techniques in the classification of natural signals (e.g., voice, image, text), it is natural to ask how machine learning methods can be used to improve the solution quality or the runtime of combinatorial optimization algorithms. In this talk I will provide a general taxonomy and research directions for the use of machine learning techniques in combinatorial optimization. I will illustrate these directions using a number of case studies from my group's research, which include (1) improving the quality of results of an integer linear programming (ILP) solver using deep metric learning, and (2) using reinforcement learning techniques to optimize the size of graphs arising in digital circuit design.
Sherief Reda is a Full Professor in the School of Engineering and the Computer Science Department at Brown University, and a Principal Research Scientist at Amazon. He joined Brown University in 2006 after receiving his Ph.D. in computer science and engineering from the University of California, San Diego. He has published over 135 research articles in the areas of energy-efficient computing, electronic design automation, and combinatorial optimization, and holds several patents. Professor Reda has received a number of research recognitions and awards, including eight best paper nominations, three best paper awards, and a National Science Foundation CAREER award. He has been a PI or co-PI on more than $21.1M of funded projects from federal agencies and industry corporations. He is a senior member of IEEE.
The FPGA Physical Design Flow Through the Eyes of Machine Learning
Dr. Ismail Bustany, Fellow, AMD
The FPGA physical design (PD) flow has innate features that differentiate it from its sibling, the ASIC PD flow. FPGA device families service a wide range of applications, have much longer lifespans in production use, and bring templatized logic layout and routing interconnect fabrics whose characteristics are captured by detailed device models and simpler timing and routing models (e.g., buffered interconnect and abstracted routing resources). Furthermore, the FPGA PD flow is a “one-stop shop” from synthesis to bitstream generation, which affords complete access to annotate, instrument, and harvest netlist and design features. These key differences provide rich opportunities to exploit both device data and design application-specific contexts in optimizing various components of the PD flow. In this talk, I will present examples of the application of ML in device modeling and parameter optimization, draw attention to exciting research opportunities for applying the “learning to optimize” paradigm to the placement and routing problems, and share some practical lessons.
Dr. Ismail Bustany is a Fellow at AMD, where he works on physical design implementation and MLCAD. He has served on the technical program committees for ISPD, ISQED, and DAC, and was the 2019 ISPD general chair. He currently serves on the organizing committees for ICCAD and SLIP. He organized the 2014 and 2015 ISPD detailed routing-driven placement contests and co-organized the 2017 ICCAD detailed placement contest. His research interests include physical design, computationally efficient optimization algorithms, MLCAD, sparse matrix computations/acceleration, and partitioning algorithms. He earned his B.S. in CSE from UC San Diego and M.S./Ph.D. in EECS from UC Berkeley.
TILOS Seminar: Reasoning Numerically
Sicun Gao, Assistant Professor, UC San Diego
Highly nonlinear continuous functions have become a pervasive model of computation. Despite newsworthy progress, the practical success of “intelligent” computing still hinges on our ability to answer questions about its quality and dependability: How do we rigorously know that a system will do exactly what we want it to do and nothing else? For traditional software and hardware systems that primarily use digital and rule-based designs, automated reasoning has provided the fundamental principles and widely used tools for ensuring their quality in all stages of design and engineering. However, the rigid symbolic formulations of typical automated reasoning methods often make them unsuitable for dealing with computation units that are driven by numerical and data-driven approaches. I will overview some of our attempts at bridging this gap. I will highlight how the core challenge of NP-hardness is shared across discrete and continuous domains, and how it motivates us to seek a unification of symbolic, numerical, and statistical perspectives towards better understanding and handling of the curse of dimensionality.
Sicun Gao is an Assistant Professor in Computer Science and Engineering at the University of California, San Diego. He works on search and optimization algorithms for improving the quality of automation and autonomous systems. He is a recipient of the Air Force Young Investigator Award, the Amazon Research Award, the NSF CAREER Award, and the Silver Medal of the Kurt Gödel Research Prize. He received his PhD from Carnegie Mellon University and was a postdoctoral researcher at CMU and MIT.
TILOS Seminar: Deep Generative Models and Inverse Problems
Alexandros G. Dimakis, Professor, The University of Texas at Austin
Sparsity has given us MP3, JPEG, MPEG, faster MRI, and many fun mathematical problems. Deep generative models like GANs, VAEs, invertible flows, and score-based models are modern data-driven generalizations of sparse structure. We will start by presenting the CSGM framework of Bora et al. for solving inverse problems like denoising, filling in missing data, and recovery from linear projections, using an unsupervised method that relies on a pre-trained generator. We generalize compressed sensing theory beyond sparsity, extending restricted isometries to sets created by deep generative models. Our recent results include theoretical guarantees for Langevin sampling from full-dimensional generative models, generative models for MRI reconstruction, and fairness guarantees for inverse problems.
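A minimal sketch of the CSGM idea: recover a signal from a few linear measurements y = A x* by searching the latent space of a generator G and minimizing ||A G(z) - y||^2. Here G is a small random MLP standing in for a pre-trained generative model, and all dimensions, step sizes, and iteration counts are illustrative.

```python
# CSGM-style recovery sketch: gradient descent on the latent code of a fixed generator.
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 5, 100, 30                        # latent dim, signal dim, number of measurements
W1 = rng.standard_normal((40, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, 40)) / np.sqrt(40)

def G(z):                                   # stand-in for a pre-trained generator
    return W2 @ np.tanh(W1 @ z)

z_true = rng.standard_normal(k)
x_true = G(z_true)                          # ground-truth signal in the range of G
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                              # compressed linear measurements

z = np.zeros(k)                             # optimize the latent code
for _ in range(5000):
    h = np.tanh(W1 @ z)
    r = A @ (W2 @ h) - y                    # residual A G(z) - y
    Jz = W2 @ ((1 - h ** 2)[:, None] * W1)  # Jacobian dG/dz
    z -= 1e-3 * (Jz.T @ (A.T @ r))          # step on 0.5 * ||A G(z) - y||^2

rel_err = np.linalg.norm(G(z) - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", rel_err)
```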
Alexandros G. Dimakis is a Professor in the ECE department at UT Austin and the co-director of the National AI Institute on the Foundations of Machine Learning (IFML). He received his Ph.D. from UC Berkeley and the Diploma degree from the National Technical University of Athens. He has received several awards, including the James Massey Award, an NSF CAREER Award, a Google Research Award, the UC Berkeley Eli Jury dissertation award, and several best paper awards. He has served as an Associate Editor for the IEEE Transactions on Information Theory, as an Area Chair for major machine learning conferences (NeurIPS, ICML, AAAI), and as the chair of the Technical Committee for MLSys 2021. He was selected as an IEEE Fellow for contributions to distributed coding and learning. His research interests include information theory, coding theory, and machine learning.
TILOS Seminar: Learning in the Presence of Distribution Shifts: How does the Geometry of Perturbations Play a Role?
Hamed Hassani, Assistant Professor, University of Pennsylvania
In this talk, we will focus on the emerging field of (adversarially) robust machine learning. The talk will be self-contained, and no particular background on robust learning will be needed. Recent progress in this field has been accelerated by the observation that, despite unprecedented performance on clean data, modern learning models remain fragile to seemingly innocuous changes such as small, norm-bounded additive perturbations. Moreover, recent work in this field has looked beyond norm-bounded perturbations and has revealed that various other types of distributional shifts in the data can significantly degrade performance. However, our understanding of such shifts is, in general, still in its infancy, and several key questions remain unaddressed.
The goal of this talk is to explain why robust learning paradigms have to be designed—and sometimes rethought—based on the geometry of the input perturbations. We will cover a wide range of perturbation geometries from simple norm-bounded perturbations, to sparse, natural, and more general distribution shifts. As we will show, the geometry of the perturbations necessitates fundamental modifications to the learning procedure as well as the architecture in order to ensure robustness. In the first part of the talk, we will discuss our recent theoretical results on robust learning with respect to various geometries, along with fundamental tradeoffs between robustness and accuracy, phase transitions, etc. The remaining portion of the talk will be about developing practical robust training algorithms and evaluating the resulting (robust) deep networks against state-of-the-art methods on naturally-varying, real-world datasets.
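As a small, hedged illustration of how the perturbation geometry enters the training procedure, the sketch below robustly trains the simplest possible model, a linear logistic-regression classifier, against ell-infinity-bounded perturbations, for which the worst-case perturbation has a closed form (delta = -eps * y * sign(w)); the data, eps, and step size are illustrative, and deep models would instead need an iterative (PGD-style) inner maximization.

```python
# Robust training of a linear classifier under ell_infinity perturbations of radius eps.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 500, 20, 0.1
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true)                                        # labels in {-1, +1}

w = np.zeros(d)
for _ in range(500):
    robust_margin = y * (X @ w) - eps * np.abs(w).sum()        # worst-case margins
    p = 1.0 / (1.0 + np.exp(np.clip(robust_margin, -50, 50)))  # sigmoid(-robust margin)
    # gradient of the average robust logistic loss log(1 + exp(-robust_margin))
    grad = -(X * (p * y)[:, None]).mean(axis=0) + eps * p.mean() * np.sign(w)
    w -= 0.5 * grad

robust_acc = np.mean(y * (X @ w) - eps * np.abs(w).sum() > 0)
print("training accuracy under worst-case ell_inf attack:", robust_acc)
```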
TILOS Seminar: The Connections Between Discrete Geometric Mechanics, Information Geometry, Accelerated Optimization and Machine Learning
Melvin Leok, Professor of Mathematics, UC San Diego
Geometric mechanics describes Lagrangian and Hamiltonian mechanics geometrically, and information geometry formulates statistical estimation, inference, and machine learning in terms of geometry. A divergence function is an asymmetric distance between two probability densities that induces differential geometric structures and yields efficient machine learning algorithms that minimize the duality gap. The connection between information geometry and geometric mechanics will yield a unified treatment of machine learning and structure-preserving discretizations. In particular, the divergence function of information geometry can be viewed as a discrete Lagrangian, the generating function of the symplectic maps that arise in discrete variational mechanics. This identification allows the methods of backward error analysis to be applied, and the symplectic map generated by a divergence function can be associated with the exact time-h flow map of a Hamiltonian system on the space of probability distributions. We will also discuss how time-adaptive Hamiltonian variational integrators can be used to discretize the Bregman Hamiltonian, whose flow generalizes the differential equation that describes the dynamics of the Nesterov accelerated gradient descent method.
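For readers unfamiliar with the objects involved, one standard formulation from this literature (Wibisono, Wilson, and Jordan, 2016), stated here in its usual notation as background rather than as the talk's own development, is the Bregman Lagrangian

\mathcal{L}(X, V, t) = e^{\alpha_t + \gamma_t} \big( D_h(X + e^{-\alpha_t} V, X) - e^{\beta_t} f(X) \big),

where D_h(y, x) = h(y) - h(x) - \langle \nabla h(x), y - x \rangle is the Bregman divergence of a convex function h and f is the objective being minimized. Its Legendre transform gives the Bregman Hamiltonian, and for suitable choices of the scaling functions \alpha_t, \beta_t, \gamma_t (with h(x) = \tfrac{1}{2}\|x\|^2) the associated flow reduces to

\ddot{X}_t + \tfrac{3}{t} \dot{X}_t + \nabla f(X_t) = 0,

the continuous-time limit of Nesterov's accelerated gradient method.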
Melvin Leok is professor of mathematics and co-director of the CSME graduate program at UC San Diego. His research interests are in computational geometric mechanics, computational geometric control theory, discrete geometry, and structure-preserving numerical schemes, and particularly how these subjects relate to systems with symmetry. He received his Ph.D. in 2004 from the California Institute of Technology in Control and Dynamical Systems under the direction of Jerrold Marsden. He is a three-time NAS Kavli Frontiers of Science Fellow, a Simons Fellow in Mathematics, and has received the DoD Newton Award for Transformative Ideas, the NSF Faculty Early Career Development (CAREER) award, the SciCADE New Talent Prize, the SIAM Student Paper Prize, and the Leslie Fox Prize (second prize) in Numerical Analysis. He has given plenary talks at Foundations of Computational Mathematics, NUMDIFF, and the IFAC Workshop on Lagrangian and Hamiltonian Methods for Nonlinear Control. He serves on the editorial boards of the Journal of Nonlinear Science, the Journal of Geometric Mechanics, and the Journal of Computational Dynamics, and has served on the editorial boards of the SIAM Journal on Control and Optimization and the LMS Journal of Computation and Mathematics.
TILOS Seminar: MCMC vs. Variational Inference for Credible Learning and Decision Making at Scale
Yian Ma, Assistant Professor, UC San Diego
Professor Ma will introduce some recent progress towards understanding the scalability of Markov chain Monte Carlo (MCMC) methods and their comparative advantage with respect to variational inference. Further, he will discuss an optimization perspective on the infinite-dimensional probability space, where MCMC leverages stochastic sample paths while variational inference projects the probabilities onto a finite-dimensional parameter space. Three ingredients will be the focus of this discussion: non-convexity, acceleration, and stochasticity. This line of work is motivated by epidemic prediction, where uncertainty quantification is needed for credible predictions and informed decision making with complex models and evolving data.
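As a concrete example of the "stochastic sample path" side of this comparison, here is a minimal sketch of the unadjusted Langevin algorithm, one of the gradient-based MCMC methods analyzed in this line of work; the target distribution, step size, and chain length are illustrative.

```python
# Unadjusted Langevin algorithm: noisy gradient steps on -log(pi) trace a sample path
# whose law approaches the target pi (here pi = N(2, 1)).
import numpy as np

rng = np.random.default_rng(0)
grad_log_pi = lambda x: -(x - 2.0)          # gradient of log density of N(2, 1)

x, step, samples = 0.0, 0.1, []
for t in range(20000):
    x = x + step * grad_log_pi(x) + np.sqrt(2 * step) * rng.standard_normal()
    if t > 1000:                            # discard burn-in
        samples.append(x)

print("estimated mean and variance:", np.mean(samples), np.var(samples))
```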
Yian Ma is an assistant professor at the Halıcıoğlu Data Science Institute and an affiliated faculty member of the Computer Science and Engineering Department at the University of California San Diego. Prior to UC San Diego, he spent a year as a visiting faculty member at Google Research. Before that, he was a postdoctoral fellow at EECS, UC Berkeley. Professor Ma completed his Ph.D. at the University of Washington and obtained his bachelor's degree at Shanghai Jiao Tong University.
His current research primarily revolves around scalable inference methods for credible machine learning. This involves designing Bayesian inference methods to quantify uncertainty in the predictions of complex models; understanding computational and statistical guarantees of inference algorithms; and leveraging these scalable algorithms to learn from time series data and perform sequential decision making tasks.
Real-time Sampling and Estimation
Shirin Saeedi Bidokhti, Assistant Professor, University of Pennsylvania
The Internet of Things (IoT) and social networks have provided unprecedented information platforms. The information is often governed by processes that evolve over time and/or space (e.g., on an underlying graph) and may not be stationary or stable. We seek to devise efficient strategies to collect real-time information for timely estimation and inference, which is critical for learning and control. In the first part of the talk, we focus on the problem of real-time sampling and estimation of autoregressive Markov processes over random access channels. For the class of policies in which decision making has to be independent of the source realizations, we build a bridge to the recent notion of Age of Information (AoI) to devise novel distributed policies that utilize local AoI for decision making, and we provide strong guarantees for the performance of the proposed policies. More generally, allowing decision making to depend on the source realizations, we propose distributed policies that improve upon the state of the art by a factor of approximately six. Furthermore, we numerically show the surprising result that, despite being decentralized, our proposed policy performs very close to centralized scheduling.
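As a hedged illustration of AoI-driven decision making (not the paper's exact policy), the sketch below simulates a threshold-type random access scheme in which each node attempts transmission only when its local AoI exceeds a threshold, and a slot succeeds only when exactly one node transmits; the network size, threshold, and attempt probability are illustrative.

```python
# Toy simulation of AoI-threshold random access over a collision channel.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, threshold, p_tx, horizon = 10, 5, 0.3, 100000
aoi = np.ones(n_nodes)                       # local Age of Information per node
total_age = 0.0

for _ in range(horizon):
    attempts = (aoi > threshold) & (rng.random(n_nodes) < p_tx)
    if attempts.sum() == 1:                  # success only when there is no collision
        aoi[np.argmax(attempts)] = 0         # the successful node's information is fresh
    aoi += 1                                 # everyone's information ages by one slot
    total_age += aoi.mean()

print("time-average AoI per node:", total_age / horizon)
```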
In the second part of the talk, we go beyond time-evolving processes by looking at spread processes that are defined over time as well as an underlying network. We consider the spread of an infectious disease such as COVID-19 in a network of people and design sequential testing (and isolation) strategies to contain the spread. To this end, we develop a probabilistic framework to sequentially learn nodes’ probabilities of infection (using test observations) via an efficient backward-forward update algorithm that first infers the state of the relevant nodes in the past before propagating that information forward into the future. We further argue that if nodes’ probabilities of infection were accurately known at each time, exploitation-based policies that test the most likely nodes would be myopically optimal within a relevant class of policies. However, when our belief about the probabilities is wrong, exploitation can be arbitrarily bad, as we provably show, while a policy that combines exploitation with random testing can contain the spread faster. Accordingly, we propose exploration policies in which nodes are tested probabilistically based on our estimated probabilities of infection. Using simulations, we show in several interesting settings how exploration helps contain the spread by detecting more infected nodes in a timely manner and by providing a more accurate estimate of the nodes’ probabilities of infection.
Shirin Saeedi Bidokhti is an assistant professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania (UPenn). She received her M.Sc. and Ph.D. degrees in Computer and Communication Sciences from the Swiss Federal Institute of Technology (EPFL). Prior to joining UPenn, she was a postdoctoral scholar at Stanford University and the Technical University of Munich. She has also held short-term visiting positions at ETH Zurich, the University of California at Los Angeles, and the Pennsylvania State University. Her research interests broadly include the design and analysis of network strategies that are scalable, practical, and efficient for use in Internet of Things (IoT) applications, information transfer on networks, as well as data compression techniques for big data. She is a recipient of the 2022 IEEE Information Theory Society Goldsmith Lecturer Award, the 2021 NSF CAREER award, the 2019 NSF CRII Research Initiative award, and the prospective researcher and advanced postdoctoral fellowships from the Swiss National Science Foundation.
TILOS Seminar: Closing the Virtuous Cycle of AI for IC and IC for AI
David Pan, Professor, University of Texas at Austin
The recent artificial intelligence (AI) boom has been primarily driven by the confluence of three forces: algorithms, big data, and the computing power enabled by modern integrated circuits (ICs), including specialized AI accelerators. This talk will present a closed-loop perspective on synergistic AI and agile IC design, with two main themes: AI for IC and IC for AI. As semiconductor technology enters the era of extreme scaling and heterogeneous integration, IC design and manufacturing complexities become extremely high. Intelligent and agile IC design technologies are needed more than ever to optimize performance, power, manufacturability, and design cost, and to deliver scaling equivalent to Moore’s Law. This talk will present some recent results that leverage modern AI and machine learning advances, with domain-specific customizations, for agile IC design and manufacturing, including the open-source DREAMPlace (DAC’19 and TCAD’21 Best Paper Awards), the DARPA-funded MAGICAL project for analog IC design automation, and LithoGAN for design-technology co-optimization. Meanwhile, on the IC for AI frontier, customized ICs, including those with beyond-CMOS technologies, can drastically improve AI performance and energy efficiency by orders of magnitude. I will present our recent results on hardware and software co-design for optical neural networks and photonic ICs (which won first place at the 2021 ACM Student Research Competition Grand Finals). Closing the virtuous cycle between AI and IC holds great potential to significantly advance the state of the art of both fields.
TILOS Seminar: A Mixture of Past, Present, and Future
Arya Mazumdar, Associate Professor, UC San Diego
Problems of heterogeneity pose major challenges in extracting meaningful information from data, as well as in the subsequent decision making or prediction tasks. Heterogeneity also raises some very fundamental theoretical questions in machine learning. For unsupervised learning, a standard technique is the use of mixture models for statistical inference. For supervised learning, however, labels can be generated via a mixture of functional relationships. We will provide a survey of results on parameter learning in mixture models, some unexpected connections with other problems, and some interesting future directions.
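As a concrete instance of the supervised setting mentioned above, the sketch below generates labels from a mixture of two linear regressions and fits the parameters with EM, alternating soft assignments and weighted least squares; the dimensions, noise level, and initialization are illustrative, and EM is only one of the estimators whose guarantees are studied in this line of work.

```python
# EM for a mixture of two linear regressions (mixed linear regression).
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 1000, 5, 0.2
w1, w2 = rng.standard_normal(d), rng.standard_normal(d)   # two hidden regressors
X = rng.standard_normal((n, d))
comp = rng.random(n) < 0.5                                 # hidden component of each sample
y = np.where(comp, X @ w1, X @ w2) + sigma * rng.standard_normal(n)

a, b = rng.standard_normal(d), rng.standard_normal(d)      # parameter estimates
for _ in range(100):
    # E-step: responsibility of component "a" for each sample (clipped for stability)
    diff = 0.5 * ((y - X @ a) ** 2 - (y - X @ b) ** 2) / sigma ** 2
    g = 1.0 / (1.0 + np.exp(np.clip(diff, -50, 50)))
    # M-step: weighted least squares for each component
    a = np.linalg.lstsq(X * np.sqrt(g)[:, None], y * np.sqrt(g), rcond=None)[0]
    b = np.linalg.lstsq(X * np.sqrt(1 - g)[:, None], y * np.sqrt(1 - g), rcond=None)[0]

# recovery error up to label swapping; EM can also get stuck for poor initializations,
# which is part of what the theory of mixture models has to grapple with
err = min(np.linalg.norm(a - w1), np.linalg.norm(a - w2))
print("parameter recovery error:", err)
```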