BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20200308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20201101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20211107T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220518T100000
DTEND;TZID=America/Los_Angeles:20220518T110000
DTSTAMP:20260403T165126Z
CREATED:20250904T173915Z
LAST-MODIFIED:20250904T173915Z
UID:7344-1652868000-1652871600@tilos.ai
SUMMARY:TILOS Seminar: Deep Generative Models and Inverse Problems
DESCRIPTION:Alexandros G. Dimakis\, Professor\, The University of Texas at Austin \nAbstract: Sparsity has given us MP3\, JPEG\, MPEG\, faster MRI\, and many fun mathematical problems. Deep generative models like GANs\, VAEs\, invertible flows\, and score-based models are modern data-driven generalizations of sparse structure. We will start by presenting the CSGM framework by Bora et al. to solve inverse problems like denoising\, filling missing data\, and recovery from linear projections using an unsupervised method that relies on a pre-trained generator. We generalize compressed sensing theory beyond sparsity\, extending Restricted Isometries to sets created by deep generative models. Our recent results include theoretical guarantees for Langevin sampling from full-dimensional generative models\, generative models for MRI reconstruction\, and fairness guarantees for inverse problems.
URL:https://tilos.ai/event/tilos-seminar-deep-generative-models-and-inverse-problems/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/dimakis-alexandros-e1711660493749-oAsHBv.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220511T143000
DTEND;TZID=America/Los_Angeles:20220511T153000
DTSTAMP:20260403T165126Z
CREATED:20250904T173748Z
LAST-MODIFIED:20250904T173748Z
UID:7345-1652279400-1652283000@tilos.ai
SUMMARY:TILOS-OPTML++ Seminar: Constant Regret in Online Decision-Making
DESCRIPTION:Siddhartha Banerjee\, Cornell University \nAbstract: I will present a class of finite-horizon control problems\, where we see a random stream of arrivals\, need to select actions in each step\, and where the final objective depends only on the aggregate type-action counts; this includes many widely studied control problems\, including online resource allocation\, dynamic pricing\, generalized assignment\, online bin packing\, and bandits with knapsacks. For such settings\, I will introduce a unified algorithmic paradigm and provide a simple yet general condition under which these algorithms achieve constant regret\, i.e.\, additive loss compared to the hindsight optimal solution\, which is independent of the horizon and the state space. These results stem from an elementary coupling argument\, which may prove useful for many other questions in online decision-making. Time permitting\, I will illustrate this by showing how we can use this technique to incorporate side information and historical data in these settings\, and achieve constant regret with as little as a single data trace.
URL:https://tilos.ai/event/tilos-optml-seminar-constant-regret-in-online-decision-making/
LOCATION:Virtual
CATEGORIES:TILOS - OPTML++ Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/09/banerjee-siddhartha-e1757007458539.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220427T143000
DTEND;TZID=America/Los_Angeles:20220427T153000
DTSTAMP:20260403T165126Z
CREATED:20250904T173650Z
LAST-MODIFIED:20250904T173650Z
UID:7347-1651069800-1651073400@tilos.ai
SUMMARY:TILOS-OPTML++ Seminar: Equilibrium Computation\, Deep Multi-Agent Learning\, and Multi-Agent Reinforcement Learning
DESCRIPTION:Constantinos Daskalakis\, MIT
URL:https://tilos.ai/event/tilos-optml-seminar-equilibrium-computation-deep-multi-agent-learning-and-multi-agent-reinforcement-learning/
LOCATION:Virtual
CATEGORIES:TILOS - OPTML++ Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/10/daskalakis-constantinos.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220420T100000
DTEND;TZID=America/Los_Angeles:20220420T110000
DTSTAMP:20260403T165126Z
CREATED:20250904T173311Z
LAST-MODIFIED:20250904T173311Z
UID:7348-1650448800-1650452400@tilos.ai
SUMMARY:TILOS Seminar: Learning in the Presence of Distribution Shifts: How does the Geometry of Perturbations Play a Role?
DESCRIPTION:Hamed Hassani\, Assistant Professor\, University of Pennsylvania \nAbstract: In this talk\, we will focus on the emerging field of (adversarially) robust machine learning. The talk will be self-contained\, and no particular background on robust learning will be needed. Recent progress in this field has been accelerated by the observation that despite unprecedented performance on clean data\, modern learning models remain fragile to seemingly innocuous changes such as small\, norm-bounded additive perturbations. Moreover\, recent work in this field has looked beyond norm-bounded perturbations and has revealed that various other types of distributional shifts in the data can significantly degrade performance. However\, in general\, our understanding of such shifts is in its infancy\, and several key questions remain unaddressed. \nThe goal of this talk is to explain why robust learning paradigms have to be designed—and sometimes rethought—based on the geometry of the input perturbations. We will cover a wide range of perturbation geometries\, from simple norm-bounded perturbations to sparse\, natural\, and more general distribution shifts. As we will show\, the geometry of the perturbations necessitates fundamental modifications to the learning procedure as well as the architecture in order to ensure robustness. In the first part of the talk\, we will discuss our recent theoretical results on robust learning with respect to various geometries\, along with fundamental tradeoffs between robustness and accuracy\, phase transitions\, etc. The remaining portion of the talk will be about developing practical robust training algorithms and evaluating the resulting (robust) deep networks against state-of-the-art methods on naturally varying\, real-world datasets.
URL:https://tilos.ai/event/tilos-seminar-learning-in-the-presence-of-distribution-shifts-how-does-the-geometry-of-perturbations-play-a-role/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/02/hassani-hamed-e1757007159953.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220316T100000
DTEND;TZID=America/Los_Angeles:20220316T110000
DTSTAMP:20260403T165126Z
CREATED:20250903T185121Z
LAST-MODIFIED:20250903T185121Z
UID:7368-1647424800-1647428400@tilos.ai
SUMMARY:TILOS Seminar: The Connections Between Discrete Geometric Mechanics\, Information Geometry\, Accelerated Optimization and Machine Learning
DESCRIPTION:Melvin Leok\, Department of Mathematics\, UC San Diego \nAbstract: Geometric mechanics describes Lagrangian and Hamiltonian mechanics geometrically\, and information geometry formulates statistical estimation\, inference\, and machine learning in terms of geometry. A divergence function is an asymmetric distance between two probability densities that induces differential geometric structures and yields efficient machine learning algorithms that minimize the duality gap. The connection between information geometry and geometric mechanics will yield a unified treatment of machine learning and structure-preserving discretizations. In particular\, the divergence function of information geometry can be viewed as a discrete Lagrangian\, which is a generating function of a symplectic map that arises in discrete variational mechanics. This identification allows the methods of backward error analysis to be applied\, and the symplectic map generated by a divergence function can be associated with the exact time-h flow map of a Hamiltonian system on the space of probability distributions. We will also discuss how time-adaptive Hamiltonian variational integrators can be used to discretize the Bregman Hamiltonian\, whose flow generalizes the differential equation that describes the dynamics of the Nesterov accelerated gradient descent method. \n\nMelvin Leok is professor of mathematics and co-director of the CSME graduate program at the University of California\, San Diego. His research interests are in computational geometric mechanics\, computational geometric control theory\, discrete geometry\, and structure-preserving numerical schemes\, and particularly how these subjects relate to systems with symmetry. He received his Ph.D. in 2004 from the California Institute of Technology in Control and Dynamical Systems under the direction of Jerrold Marsden. He is a three-time NAS Kavli Frontiers of Science Fellow and a Simons Fellow in Mathematics\, and has received the DoD Newton Award for Transformative Ideas\, the NSF Faculty Early Career Development (CAREER) award\, the SciCADE New Talent Prize\, the SIAM Student Paper Prize\, and the Leslie Fox Prize (second prize) in Numerical Analysis. He has given plenary talks at Foundations of Computational Mathematics\, NUMDIFF\, and the IFAC Workshop on Lagrangian and Hamiltonian Methods for Nonlinear Control. He serves on the editorial boards of the Journal of Nonlinear Science\, the Journal of Geometric Mechanics\, and the Journal of Computational Dynamics\, and has served on the editorial boards of the SIAM Journal on Control and Optimization and the LMS Journal of Computation and Mathematics.
URL:https://tilos.ai/event/tilos-seminar-the-connections-between-discrete-geometric-mechanics-information-geometry-accelerated-optimization-and-machine-learning/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2021/09/mleok_300x240.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220216T100000
DTEND;TZID=America/Los_Angeles:20220216T110000
DTSTAMP:20260403T165126Z
CREATED:20250904T173427Z
LAST-MODIFIED:20250904T173427Z
UID:7349-1645005600-1645009200@tilos.ai
SUMMARY:TILOS Seminar: MCMC vs. Variational Inference for Credible Learning and Decision Making at Scale
DESCRIPTION:Yian Ma\, Assistant Professor\, UC San Diego \nAbstract: Professor Ma will present recent progress toward understanding the scalability of Markov chain Monte Carlo (MCMC) methods and their comparative advantage with respect to variational inference. Further\, he will discuss an optimization perspective on the infinite-dimensional probability space\, where MCMC leverages stochastic sample paths while variational inference projects the probabilities onto a finite-dimensional parameter space. Three ingredients will be the focus of this discussion: non-convexity\, acceleration\, and stochasticity. This line of work is motivated by epidemic prediction\, where we need uncertainty quantification for credible predictions and informed decision making with complex models and evolving data.
URL:https://tilos.ai/event/tilos-seminar-mcmc-vs-variational-inference-for-credible-learning-and-decision-making-at-scale/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/04/ma-yian-square-e1757007256728.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220119T100000
DTEND;TZID=America/Los_Angeles:20220119T110000
DTSTAMP:20260403T165126Z
CREATED:20250904T173520Z
LAST-MODIFIED:20250904T173520Z
UID:7350-1642586400-1642590000@tilos.ai
SUMMARY:TILOS Seminar: Real-time Sampling and Estimation: From IoT Markov Processes to Disease Spread Processes
DESCRIPTION:Shirin Saeedi Bidokhti\, Assistant Professor\, University of Pennsylvania \nAbstract: The Internet of Things (IoT) and social networks have provided unprecedented information platforms. The information is often governed by processes that evolve over time and/or space (e.g.\, on an underlying graph)\, and they may not be stationary or stable. We seek to devise efficient strategies to collect real-time information for timely estimation and inference. This is critical for learning and control.\nIn the first part of the talk\, we focus on the problem of real-time sampling and estimation of autoregressive Markov processes over random access channels. For the class of policies in which decision making has to be independent of the source realizations\, we build a bridge to the recent notion of Age of Information (AoI) to devise novel distributed policies that utilize local AoI for decision making. We also provide strong guarantees for the performance of the proposed policies. More generally\, allowing decision making to be dependent on the source realizations\, we propose distributed policies that improve upon the state of the art by a factor of approximately six. Furthermore\, we numerically show the surprising result that despite being decentralized\, our proposed policy has a performance very close to that of centralized scheduling. \nIn the second part of the talk\, we go beyond time-evolving processes by looking at spread processes that are defined over time as well as an underlying network. We consider the spread of an infectious disease such as COVID-19 in a network of people and design sequential testing (and isolation) strategies to contain the spread. To this end\, we develop a probabilistic framework to sequentially learn nodes’ probabilities of infection (using test observations) by an efficient backward-forward update algorithm that first infers the state of the relevant nodes in the past before propagating that information forward into the future. We further argue that if nodes’ probabilities of infection were accurately known at each time\, exploitation-based policies that test the most likely nodes are myopically optimal in a relevant class of policies. However\, when our belief about the probabilities is wrong\, exploitation can be arbitrarily bad\, as we provably show\, while a policy that combines exploitation with random testing can contain the spread faster. Accordingly\, we propose exploration policies in which nodes are tested probabilistically based on our estimated probabilities of infection. Using simulations\, we show in several interesting settings how exploration helps contain the spread by detecting more infected nodes in a timely manner and by providing a more accurate estimate of the nodes’ probabilities of infection.
URL:https://tilos.ai/event/tilos-seminar-real-time-sampling-and-estimation-from-iot-markov-processes-to-disease-spread-processes/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2021/09/ShirinSaeediBidokhti300x240.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20211215T100000
DTEND;TZID=America/Los_Angeles:20211215T110000
DTSTAMP:20260403T165126Z
CREATED:20250903T191352Z
LAST-MODIFIED:20250903T191352Z
UID:7363-1639562400-1639566000@tilos.ai
SUMMARY:TILOS Seminar: Closing the Virtuous Cycle of AI for IC and IC for AI
DESCRIPTION:David Pan\, Professor\, The University of Texas at Austin \nAbstract: The recent artificial intelligence (AI) boom has been primarily driven by the confluence of three forces: algorithms\, big data\, and computing power enabled by modern integrated circuits (ICs)\, including specialized AI accelerators. This talk will present a closed-loop perspective for synergistic AI and agile IC design with two main themes: AI for IC and IC for AI. As semiconductor technology enters the era of extreme scaling and heterogeneous integration\, IC design and manufacturing complexities become extremely high. Ever more intelligent and agile IC design technologies are needed to optimize performance\, power\, manufacturability\, design cost\, etc.\, and deliver equivalent scaling to Moore’s Law. This talk will present some recent results leveraging modern AI and machine learning advances with domain-specific customizations for agile IC design and manufacturing\, including the open-sourced DREAMPlace (DAC’19 and TCAD’21 Best Paper Awards)\, the DARPA-funded MAGICAL project for analog IC design automation\, and LithoGAN for design-technology co-optimization. Meanwhile\, on the IC for AI frontier\, customized ICs\, including those with beyond-CMOS technologies\, can drastically improve AI performance and energy efficiency by orders of magnitude. I will present our recent results on hardware and software co-design for optical neural networks and photonic ICs (which won the 2021 ACM Student Research Competition Grand Finals 1st Place). Closing the virtuous cycle between AI and IC holds great potential to significantly advance the state of the art of both.
URL:https://tilos.ai/event/tilos-seminar-closing-the-virtuous-cycle-of-ai-for-ic-and-ic-for-ai/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2021/09/Pan-David300x240.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20211117T100000
DTEND;TZID=America/Los_Angeles:20211117T110000
DTSTAMP:20260403T165126Z
CREATED:20250903T191205Z
LAST-MODIFIED:20250903T191205Z
UID:7364-1637143200-1637146800@tilos.ai
SUMMARY:TILOS Seminar: A Mixture of Past\, Present\, and Future
DESCRIPTION:Arya Mazumdar\, Associate Professor\, UC San Diego \nAbstract: The problems of heterogeneity pose major challenges in extracting meaningful information from data\, as well as in the subsequent decision-making or prediction tasks. Heterogeneity brings forward some fundamental theoretical questions in machine learning. For unsupervised learning\, a standard technique is the use of mixture models for statistical inference. However\, for supervised learning\, labels can be generated via a mixture of functional relationships. We will provide a survey of results on parameter learning in mixture models\, some unexpected connections with other problems\, and some interesting future directions.
URL:https://tilos.ai/event/tilos-seminar-a-mixture-of-past-present-and-future/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/03/aryamazumdar_headshot-e1756926709113.jpg
END:VEVENT
END:VCALENDAR