Extended Convex Lifting for Policy Optimization in Control

Yang Zheng, UC San Diego

Direct policy search has achieved great empirical success in reinforcement learning. Many recent studies have revisited its theoretical foundations for continuous control, revealing elegant nonconvex geometry in various benchmark problems. In this talk, we introduce an Extended Convex Lifting (ECL) framework, which uncovers hidden convexity in classical optimal and robust control problems from a modern optimization perspective. Our ECL offers a bridge between nonconvex policy optimization and convex reformulations. Despite nonconvexity and nonsmoothness, the existence of an ECL not only shows that minimizing the original cost function is equivalent to a convex problem, but also certifies a class of first-order non-degenerate stationary points to be globally optimal. This ECL framework encompasses many benchmark control problems, including LQR, LQG, and both state-feedback and output-feedback H-infinity robust control. We believe that the ECL framework may be of independent interest for analyzing nonconvex problems beyond control.
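To make the "hidden convexity" concrete, here is the simplest classical instance of a convex lifting in control (the standard textbook LMI change of variables for state-feedback stabilization, not the ECL construction presented in the talk): A + BK is Hurwitz if and only if there exists X ≻ 0 with (A + BK)X + X(A + BK)^T ≺ 0. The set of stabilizing gains K is nonconvex in general, but the substitution L = KX turns this bilinear condition into

    AX + XA^T + BL + L^T B^T ≺ 0,    X ≻ 0,

a linear matrix inequality that is jointly convex in (X, L), with the gain recovered as K = LX^{-1}.

The global-optimality phenomenon can also be checked numerically. The sketch below is a minimal illustration, not material from the talk: it runs gradient descent on the discrete-time LQR cost C(K) = tr(P_K Σ_0), using the exact policy gradient ∇C(K) = 2[(R + B^T P_K B)K − B^T P_K A]Σ_K known from the policy-gradient LQR literature. The system matrices and step-size rule are illustrative choices.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

    # Toy discrete-time LQR instance (matrices are illustrative choices).
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])   # open-loop stable, so K = 0 is a stabilizing start
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)
    R = np.eye(1)
    Sigma0 = np.eye(2)           # covariance of the random initial state

    def cost_and_grad(K):
        """Exact LQR cost C(K) = tr(P_K Sigma0) and its policy gradient."""
        Ak = A - B @ K
        # P_K solves the discrete Lyapunov equation P = Ak^T P Ak + Q + K^T R K.
        P = solve_discrete_lyapunov(Ak.T, Q + K.T @ R @ K)
        # Sigma_K (sum of closed-loop state covariances) solves S = Ak S Ak^T + Sigma0.
        S = solve_discrete_lyapunov(Ak, Sigma0)
        grad = 2.0 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ S
        return np.trace(P @ Sigma0), grad

    K = np.zeros((1, 2))         # stabilizing initial gain
    for _ in range(200):
        c, g = cost_and_grad(K)
        step = 0.1
        # Backtrack until the step keeps the closed loop stable and decreases the cost.
        while True:
            K_new = K - step * g
            if np.max(np.abs(np.linalg.eigvals(A - B @ K_new))) < 1.0:
                c_new, _ = cost_and_grad(K_new)
                if c_new <= c:
                    break
            step /= 2.0
        K = K_new

    # Riccati benchmark: K* = (R + B^T P* B)^{-1} B^T P* A.
    P_star = solve_discrete_are(A, B, Q, R)
    K_star = np.linalg.solve(R + B.T @ P_star @ B, B.T @ P_star @ A)
    print("gap to global optimum:", np.linalg.norm(K - K_star))

Despite the nonconvexity of C(K), the iterates approach the Riccati-optimal gain, which is the kind of behavior the ECL framework certifies for non-degenerate stationary points.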


Yang Zheng is an Assistant Professor in the ECE Department at UC San Diego. His research focuses on control theory, convex and nonconvex optimization, and their applications to autonomous vehicles and traffic systems. He received his DPhil (Ph.D.) in Engineering Science from the University of Oxford in 2019, and his B.E. and M.S. degrees from Tsinghua University in 2013 and 2015, respectively. His work has been recognized with several awards, including the 2019 European Ph.D. Award on Control for Complex and Heterogeneous Systems, the 2022 Best Paper Award from IEEE Transactions on Control of Network Systems, the 2023 Best Graduate Teacher Award from UC San Diego’s ECE Department, the 2024 NSF CAREER Award, and the 2025 Donald P. Eckman Award from the American Automatic Control Council.

