BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260206T110000
DTEND;TZID=America/Los_Angeles:20260206T120000
DTSTAMP:20260404T010838Z
CREATED:20251014T201307Z
LAST-MODIFIED:20260304T210204Z
UID:7668-1770375600-1770379200@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Extended Convex Lifting for Policy Optimization in Control
DESCRIPTION:Yang Zheng\, UC San Diego \nAbstract: Direct policy search has achieved great empirical success in reinforcement learning. Many recent studies have revisited its theoretical foundation for continuous control\, which reveals elegant nonconvex geometry in various benchmark problems. In this talk\, we introduce an Extended Convex Lifting (ECL) framework\, which reveals hidden convexity in classical optimal and robust control problems from a modern optimization perspective. Our ECL offers a bridge between nonconvex policy optimization and convex reformulations. Despite non-convexity and non-smoothness\, the existence of an ECL not only reveals that minimizing the original function is equivalent to a convex problem\, but also certifies a class of first-order non-degenerate stationary points to be globally optimal. This ECL framework encompasses many benchmark control problems\, including LQR\, LQG\, state-feedback\, and output-feedback H-infinity robust control. We believe that the ECL framework may be of independent interest for analyzing nonconvex problems beyond control. \n\nYang Zheng is an Assistant Professor in the ECE Department at UC San Diego. His research focuses on control theory\, convex and nonconvex optimization\, and their applications to autonomous vehicles and traffic systems. He received his DPhil (Ph.D.) in Engineering Science from the University of Oxford in 2019\, and his B.E. and M.S. degrees from Tsinghua University in 2013 and 2015\, respectively. His work has been recognized with several awards\, including the 2019 European Ph.D. Award on Control for Complex and Heterogeneous Systems\, the 2022 Best Paper Award from IEEE Transactions on Control of Network Systems\, the 2023 Best Graduate Teacher Award from UC San Diego’s ECE Department\, the 2024 NSF CAREER Award\, and the 2025 Donald P. Eckman Award from the American Automatic Control Council.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-yang-zheng-uc-san-diego/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/zheng-yang-scaled-e1769464299795.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260209T110000
DTEND;TZID=America/Los_Angeles:20260209T120000
DTSTAMP:20260404T010838Z
CREATED:20260202T183947Z
LAST-MODIFIED:20260304T205925Z
UID:8053-1770634800-1770638400@tilos.ai
SUMMARY:TILOS-MICS Seminar: AI-Driven Design Automation for Multi-Chip Integration in AI Chips
DESCRIPTION:Sung-Kyu Lim\, University of Southern California \nAbstract: Multi-chip integration has become a standard approach in AI training and is rapidly gaining traction in edge learning applications. Leveraging 2.5D and 3D IC architectures enables substantial improvements in energy efficiency and latency by optimizing inter-chip data transfer. At the core of this transformation lies the automation of design and simulation for heterogeneous AI chips\, shifting from manual engineering to algorithm-driven methodologies. This evolution is being accelerated by advanced electronic design automation (EDA) tools powered by AI. My research group develops novel AI-driven algorithms that enhance or replace traditional design automation techniques\, with a focus on enabling next-generation heterogeneous AI systems. In this talk\, I will present our recent innovations and explore the critical challenges that lie ahead in applying AI algorithms to EDA for high-performance AI chip design. \n\nDr. Sung-Kyu Lim is Dean’s Professor of Electrical and Computer Engineering at the University of Southern California\, joining in Fall 2025 after over two decades at Georgia Tech. He received his B.S.\, M.S.\, and Ph.D. in Computer Science from UCLA. His research focuses on the architecture\, design\, and electronic design automation (EDA) of 2.5D and 3D integrated circuits\, with over 450 publications. Dr. Lim is an IEEE Fellow and a recipient of major awards including multiple Best Paper Awards (DAC 2023\, TCAD 2022) and several Georgia Tech teaching honors. From 2022 to 2024\, he served as a Program Manager at DARPA’s Microsystems Technology Office.
URL:https://tilos.ai/event/tilos-seminar-with-sung-kyu-lim-usc/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/02/lim-sungkyu-scaled-e1770057488135.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260211T110000
DTEND;TZID=America/Los_Angeles:20260211T120000
DTSTAMP:20260404T010838Z
CREATED:20250828T192042Z
LAST-MODIFIED:20260227T212830Z
UID:7265-1770807600-1770811200@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Kinetic Theory Perspective of Foundation Models for Physics
DESCRIPTION:Maarten de Hoop\, Rice University \nAbstract: We present a kinetic theory perspective of foundation models for physics. We begin by providing a mathematical framework for analyzing transformers. To uniformly address their expressivity\, we consider the case that the mappings are conditioned on a context represented by a probability distribution of tokens. That is\, transformers become mappings between probability measures. The relevant notion of smoothness then corresponds to continuity in terms of the Wasserstein distance between such contexts. We demonstrate that deep transformers are universal and can approximate continuous in-context mappings to arbitrary precision\, uniformly over compact token domains. We then characterize the conditions on mappings between measures that enable these to be represented in terms of in-context mappings as transformers. The solution map of the Vlasov equation\, which is of nonlocal transport type\, for interacting particle systems in the mean-field regime for the Cauchy problem satisfies the conditions\; conversely\, we prove that the measure-theoretic self-attention has the properties that ensure that the infinite-depth\, mean-field transformer can be identified with a Vlasov flow. Extending this framework from interactions to collisions leads to a further development of structured architectures inspired by Lattice Boltzmann Models\, while flow motivates a design based on self-warping. \n\nProfessor Maarten V. de Hoop\, Simons Chair in Computational and Applied Mathematics and Earth Science at Rice University\, is internationally recognized for his contributions to the mathematical foundations of seismology\, wave propagation\, and inverse problems. His research bridges microlocal and harmonic analysis\, scattering theory\, and structured numerical methods with applications to seismic imaging\, geophysical inversion\, and large-scale computational modeling of acoustic\, elastic\, and electromagnetic phenomena. De Hoop has been a pioneer in developing techniques to extract subtle information from massive\, complex seismic datasets\, advancing our ability to probe the Earth’s interior with unprecedented resolution\, and more recently has integrated deep learning and data-driven discovery with rigorous mathematical frameworks to open new frontiers in the analysis of multiscale wave phenomena and inverse spectral problems. He is the recipient of the J. Clarence Karcher Award from the Society of Exploration Geophysicists and the Young Scientists Award from the International Society for Analysis\, its Applications and Computation\, has been elected a Fellow of the Institute of Physics and an External Member of the Finnish Academy of Science and Letters\, and has served as associate editor for Inverse Problems\, Inverse Problems and Imaging\, and the International Journal on Geomathematics.
URL:https://tilos.ai/event/tilos-seminar-with-maarten-de-hoop/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/webp:https://tilos.ai/wp-content/uploads/2025/08/dehoop-maarten-e1756406140690.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260220T100000
DTEND;TZID=America/Los_Angeles:20260220T110000
DTSTAMP:20260404T010838Z
CREATED:20251124T183900Z
LAST-MODIFIED:20260224T215057Z
UID:7904-1771581600-1771585200@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Neuromorphic LLMs
DESCRIPTION:Jason Eshraghian\, UC Santa Cruz \nAbstract: This talk will show you what neuromorphic computing can do when an academic lab accidentally pulls $2 million of GPU-hours. We will showcase a series of frontier reasoning LLMs developed out of an academic lab\, from data curation and pre-training to post-training and alignment. These models surpass leading LLMs from Meta\, Google\, and other heavily-resourced labs in the ~10-billion parameter regime\, despite being 5x smaller. \nWe have deployed several models on neuromorphic hardware at just 2 watts\, bringing state-of-the-art reasoning from the datacenter to the edge. Along the way\, we dispel a series of widely-held assumptions about large-scale neuromorphic computation\, revealing how it fundamentally differs from conventional deep learning\, and why that difference matters. \n\nJason Eshraghian is an Assistant Professor and Fulbright Scholar in the Department of Electrical and Computer Engineering at the University of California\, Santa Cruz. He is the developer of snnTorch\, a Python library with over 500\,000 downloads for training spiking neural networks. He is a dual-appointed IEEE CAS and EMBS Distinguished Lecturer\, an Associate Editor of APL Machine Learning\, the Chair of the IEEE Neural Systems and Applications Technical Committee\, the recipient of seven IEEE Best Paper Awards\, and a Scientific Advisory Board Member of BrainChip. He leads the Neuromorphic Agents Team at Conscium.
URL:https://tilos.ai/event/tilos-hdsi-seminar-neuromorphic-llms/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/11/eshraghian-jason-e1764009503674.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260227T110000
DTEND;TZID=America/Los_Angeles:20260227T120000
DTSTAMP:20260404T010838Z
CREATED:20251003T192706Z
LAST-MODIFIED:20260304T205819Z
UID:7637-1772190000-1772193600@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: (De)regularized Wasserstein Gradient Flows via Reproducing Kernels
DESCRIPTION:Bharath Sriperumbudur\, Pennsylvania State University \nAbstract: Wasserstein gradient flows have become a popular tool in machine learning with applications in sampling\, variational inference\, generative modeling\, and reinforcement learning\, among others. The Wasserstein gradient flow (WGF) involves minimizing a probability functional over the Wasserstein space (by taking into account the intrinsic geometry of the Wasserstein space). In this work\, we introduce approximate/regularized Wasserstein gradient flows in two different settings: (a) approximate the probability functional and (b) approximate the Wasserstein geometry. In (a)\, we consider the probability functional to be chi^2-divergence\, whose WGF is difficult to implement. To this end\, we propose a (de)-regularization of the Maximum Mean Discrepancy (DrMMD) as an approximation of chi^2-divergence and develop an approximate WGF\, which is easy to implement and has applications in generative modeling. On the other hand\, in the setting of (b)\, we use Kullback-Leibler divergence as the probability functional and develop an approximation to the Wasserstein geometry\, which allows for a more efficient implementation than that of the exact WGF\, with applications in sampling. In both settings\, we present a variety of theoretical results that relate the approximate flow to the exact flow and demonstrate the superiority of the approximate flows via numerical simulations. \n\nBharath Sriperumbudur is a professor in the Department of Statistics (with a courtesy appointment in the Department of Mathematics) at the Pennsylvania State University. His research interests include non-parametric statistics\, machine learning\, statistical learning theory\, optimal transport and gradient flows\, regularization and inverse problems\, reproducing kernel spaces in probability and statistics\, and functional and topological data analysis.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-bharath-sriperumbudur-penn-state/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/sriperumbudur-bharath-e1759519613665.jpg
END:VEVENT
END:VCALENDAR