BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260206T110000
DTEND;TZID=America/Los_Angeles:20260206T120000
DTSTAMP:20260403T104812Z
CREATED:20251014T201307Z
LAST-MODIFIED:20260304T210204Z
UID:7668-1770375600-1770379200@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Extended Convex Lifting for Policy Optimization in Control
DESCRIPTION:Yang Zheng\, UC San Diego \nAbstract: Direct policy search has achieved great empirical success in reinforcement learning. Many recent studies have revisited its theoretical foundations for continuous control\, revealing elegant nonconvex geometry in various benchmark problems. In this talk\, we introduce an Extended Convex Lifting (ECL) framework\, which reveals hidden convexity in classical optimal and robust control problems from a modern optimization perspective. Our ECL offers a bridge between nonconvex policy optimization and convex reformulations. Despite nonconvexity and nonsmoothness\, the existence of an ECL not only reveals that minimizing the original function is equivalent to solving a convex problem\, but also certifies a class of first-order non-degenerate stationary points to be globally optimal. This ECL framework encompasses many benchmark control problems\, including LQR\, LQG\, state-feedback\, and output-feedback H-infinity robust control. We believe that the ECL framework may be of independent interest for analyzing nonconvex problems beyond control. \n\nYang Zheng is an Assistant Professor in the ECE Department at UC San Diego. His research focuses on control theory\, convex and nonconvex optimization\, and their applications to autonomous vehicles and traffic systems. He received his DPhil (Ph.D.) in Engineering Science from the University of Oxford in 2019\, and his B.E. and M.S. degrees from Tsinghua University in 2013 and 2015\, respectively. His work has been recognized with several awards\, including the 2019 European Ph.D. Award on Control for Complex and Heterogeneous Systems\, the 2022 Best Paper Award from IEEE Transactions on Control of Network Systems\, the 2023 Best Graduate Teacher Award from UC San Diego’s ECE Department\, the 2024 NSF CAREER Award\, and the 2025 Donald P. Eckman Award from the American Automatic Control Council.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-yang-zheng-uc-san-diego/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/zheng-yang-scaled-e1769464299795.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260209T110000
DTEND;TZID=America/Los_Angeles:20260209T120000
DTSTAMP:20260403T104812Z
CREATED:20260202T183947Z
LAST-MODIFIED:20260304T205925Z
UID:8053-1770634800-1770638400@tilos.ai
SUMMARY:TILOS-MICS Seminar: AI-Driven Design Automation for Multi-Chip Integration in AI Chips
DESCRIPTION:Sung-Kyu Lim\, University of Southern California \nAbstract: Multi-chip integration has become a standard approach in AI training and is rapidly gaining traction in edge learning applications. Leveraging 2.5D and 3D IC architectures enables substantial improvements in energy efficiency and latency by optimizing inter-chip data transfer. At the core of this transformation lies the automation of design and simulation for heterogeneous AI chips\, shifting from manual engineering to algorithm-driven methodologies. This evolution is being accelerated by advanced electronic design automation (EDA) tools powered by AI. My research group develops novel AI-driven algorithms that enhance or replace traditional design automation techniques\, with a focus on enabling next-generation heterogeneous AI systems. In this talk\, I will present our recent innovations and explore the critical challenges that lie ahead in applying AI algorithms to EDA for high-performance AI chip design. \n\nDr. Sung-Kyu Lim is Dean’s Professor of Electrical and Computer Engineering at the University of Southern California\, joining in Fall 2025 after more than two decades at Georgia Tech. He received his B.S.\, M.S.\, and Ph.D. in Computer Science from UCLA. His research focuses on the architecture\, design\, and electronic design automation (EDA) of 2.5D and 3D integrated circuits\, with over 450 publications. Dr. Lim is an IEEE Fellow and a recipient of major awards including multiple Best Paper Awards (DAC 2023\, TCAD 2022) and several Georgia Tech teaching honors. From 2022 to 2024\, he served as a Program Manager at DARPA’s Microsystems Technology Office.
URL:https://tilos.ai/event/tilos-seminar-with-sung-kyu-lim-usc/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/02/lim-sungkyu-scaled-e1770057488135.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260211T110000
DTEND;TZID=America/Los_Angeles:20260211T120000
DTSTAMP:20260403T104812Z
CREATED:20250828T192042Z
LAST-MODIFIED:20260227T212830Z
UID:7265-1770807600-1770811200@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Kinetic Theory Perspective of Foundation Models for Physics
DESCRIPTION:Maarten de Hoop\, Rice University \nAbstract: We present a kinetic theory perspective of foundation models for physics. We begin by providing a mathematical framework for analyzing transformers. To uniformly address their expressivity\, we consider the case in which the mappings are conditioned on a context represented by a probability distribution of tokens. That is\, transformers become mappings between probability measures. The relevant notion of smoothness then corresponds to continuity in terms of the Wasserstein distance between such contexts. We demonstrate that deep transformers are universal and can approximate continuous in-context mappings to arbitrary precision\, uniformly over compact token domains. We then characterize the conditions on mappings between measures that enable these to be represented in terms of in-context mappings as transformers. The solution map of the Vlasov equation\, which is of nonlocal transport type\, for interacting particle systems in the mean-field regime for the Cauchy problem satisfies these conditions; conversely\, we prove that measure-theoretic self-attention has the properties that ensure that the infinite-depth\, mean-field transformer can be identified with a Vlasov flow. Extending this framework from interactions to collisions leads to a further development of structured architectures inspired by Lattice Boltzmann Models\, while flow motivates a design based on self-warping. \n\nProfessor Maarten V. de Hoop\, Simons Chair in Computational and Applied Mathematics and Earth Science at Rice University\, is internationally recognized for his contributions to the mathematical foundations of seismology\, wave propagation\, and inverse problems. His research bridges microlocal and harmonic analysis\, scattering theory\, and structured numerical methods with applications to seismic imaging\, geophysical inversion\, and large-scale computational modeling of acoustic\, elastic\, and electromagnetic phenomena. \nDe Hoop has been a pioneer in developing techniques to extract subtle information from massive\, complex seismic datasets\, advancing our ability to probe the Earth’s interior with unprecedented resolution. More recently\, he has integrated deep learning and data-driven discovery with rigorous mathematical frameworks to open new frontiers in the analysis of multiscale wave phenomena and inverse spectral problems. He is the recipient of the J. Clarence Karcher Award from the Society of Exploration Geophysicists and the Young Scientists Award from the International Society for Analysis\, its Applications and Computation; has been elected a Fellow of the Institute of Physics and an External Member of the Finnish Academy of Science and Letters; and has served as associate editor for Inverse Problems\, Inverse Problems and Imaging\, and the International Journal on Geomathematics.
URL:https://tilos.ai/event/tilos-seminar-with-maarten-de-hoop/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/webp:https://tilos.ai/wp-content/uploads/2025/08/dehoop-maarten-e1756406140690.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260220T100000
DTEND;TZID=America/Los_Angeles:20260220T110000
DTSTAMP:20260403T104812Z
CREATED:20251124T183900Z
LAST-MODIFIED:20260224T215057Z
UID:7904-1771581600-1771585200@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Neuromorphic LLMs
DESCRIPTION:Jason Eshraghian\, UC Santa Cruz \nAbstract: This talk will show you what neuromorphic computing can do when an academic lab accidentally pulls $2 million of GPU-hours. We will showcase a series of frontier reasoning LLMs developed in an academic lab\, from data curation and pre-training to post-training and alignment. These models surpass leading LLMs from Meta\, Google\, and other heavily resourced labs in the ~10-billion-parameter regime\, despite being 5x smaller. \nWe have deployed several models on neuromorphic hardware at just 2 watts\, bringing state-of-the-art reasoning from the datacenter to the edge. Along the way\, we dispel a series of widely held assumptions about large-scale neuromorphic computation\, revealing how it fundamentally differs from conventional deep learning\, and why that difference matters. \n\nJason Eshraghian is an Assistant Professor and Fulbright Scholar in the Department of Electrical and Computer Engineering at the University of California\, Santa Cruz. He is the developer of snnTorch\, a Python library with over 500\,000 downloads for training spiking neural networks. He is a dual-appointed IEEE CAS and EMBS Distinguished Lecturer\, an Associate Editor of APL Machine Learning\, the Chair of the IEEE Neural Systems and Applications Technical Committee\, and a Scientific Advisory Board Member of BrainChip; he has received seven IEEE Best Paper Awards and leads the Neuromorphic Agents Team at Conscium.
URL:https://tilos.ai/event/tilos-hdsi-seminar-neuromorphic-llms/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/11/eshraghian-jason-e1764009503674.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260227T110000
DTEND;TZID=America/Los_Angeles:20260227T120000
DTSTAMP:20260403T104812Z
CREATED:20251003T192706Z
LAST-MODIFIED:20260304T205819Z
UID:7637-1772190000-1772193600@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: (De)regularized Wasserstein Gradient Flows via Reproducing Kernels
DESCRIPTION:Bharath Sriperumbudur\, Pennsylvania State University \nAbstract: Wasserstein gradient flows have become a popular tool in machine learning with applications in sampling\, variational inference\, generative modeling\, and reinforcement learning\, among others. The Wasserstein gradient flow (WGF) involves minimizing a probability functional over the Wasserstein space\, taking into account its intrinsic geometry. In this work\, we introduce approximate/regularized Wasserstein gradient flows in two different settings: (a) approximate the probability functional and (b) approximate the Wasserstein geometry. In (a)\, we consider the probability functional to be the chi^2-divergence\, whose WGF is difficult to implement. To this end\, we propose a (de)-regularization of the Maximum Mean Discrepancy (DrMMD) as an approximation of the chi^2-divergence and develop an approximate WGF\, which is easy to implement and has applications in generative modeling. In the setting of (b)\, we use the Kullback-Leibler divergence as the probability functional and develop an approximation to the Wasserstein geometry\, which allows for a more efficient implementation than the exact WGF\, with applications in sampling. In both settings\, we present a variety of theoretical results that relate the approximate flow to the exact flow and demonstrate the superiority of the approximate flows via numerical simulations. \n\nBharath Sriperumbudur is a professor in the Department of Statistics (with a courtesy appointment in the Department of Mathematics) at the Pennsylvania State University. His research interests include non-parametric statistics\, machine learning\, statistical learning theory\, optimal transport and gradient flows\, regularization and inverse problems\, reproducing kernel spaces in probability and statistics\, and functional and topological data analysis.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-bharath-sriperumbudur-penn-state/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/sriperumbudur-bharath-e1759519613665.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260313T100000
DTEND;TZID=America/Los_Angeles:20260313T110000
DTSTAMP:20260403T104812Z
CREATED:20251014T200527Z
LAST-MODIFIED:20260313T183553Z
UID:7665-1773396000-1773399600@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Transformers Learn Generalizable Chain-of-Thought Reasoning via Gradient Descent
DESCRIPTION:Yuejie Chi\, Yale \nAbstract: Transformers have demonstrated remarkable chain-of-thought reasoning capabilities\, yet our understanding of the mechanisms by which they acquire and extrapolate these capabilities remains limited. This talk presents a theoretical analysis of transformers trained via gradient descent for symbolic reasoning and state tracking tasks with increasing problem complexity. Our analysis reveals the coordination of multi-head attention to solve multiple subtasks in a single autoregressive path\, and the bootstrapping of inherently sequential reasoning through a recursive self-training curriculum. Our optimization-based guarantees demonstrate that even shallow multi-head transformers\, with chain-of-thought\, can be trained to effectively solve problems that would otherwise require deeper architectures. \n\nYuejie Chi is the Charles C. and Dorothea S. Dilley Professor of Statistics and Data Science at Yale University\, with a secondary appointment in Computer Science\, and a member of the Yale Institute for Foundations of Data Science. Before joining Yale\, Dr. Chi was the Sense of Wonder Group Endowed Professor of Electrical and Computer Engineering in AI Systems at Carnegie Mellon University\, with affiliations in MLD and CyLab. She also spent time as a visiting researcher at Meta’s Fundamental AI Research (FAIR). Dr. Chi’s research interests lie in the theoretical and algorithmic foundations of data science\, generative AI\, reinforcement learning\, and signal processing\, motivated by applications in scientific and engineering domains. Her current focus is on improving the performance\, efficiency\, and reliability of generative AI and decision making\, driven by data-intensive but resource-constrained scenarios.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-transformers-learn-generalizable-chain-of-thought-reasoning-via-gradient-descent/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/chi-yuejie-e1760472307997.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260320T130000
DTEND;TZID=America/Los_Angeles:20260320T160000
DTSTAMP:20260403T104812Z
CREATED:20260223T191220Z
LAST-MODIFIED:20260324T225859Z
UID:8096-1774011600-1774022400@tilos.ai
SUMMARY:TILOS-SDSU ExpandAI Workshop
DESCRIPTION:Agenda\n1:00 – 1:10 pm: Welcome and opening remarks \n1:10 – 1:30 pm: Invited talk by Dr. Lily Weng\, Assistant Professor\, Halıcıoğlu Data Science Institute and Department of Computer Science and Engineering\, UC San Diego \n1:30 – 1:50 pm: Invited talk by Dr. Reza Akhavian\, Associate Professor and Jim Ryan Endowed Chair in Construction Engineering and Management\, San Diego State University \n1:50 – 2:10 pm: Invited talk by Dr. Rose Yu\, Associate Professor of Computer Science and Engineering\, UC San Diego \n2:10 – 2:20 pm: Coffee Break \n2:20 – 2:40 pm: Invited talk by Dr. Baris Aksanli\, Associate Professor of Computer Science and Engineering\, San Diego State University \n2:40 – 3:00 pm: Invited talk by Dr. Yusu Wang\, Professor\, Halıcıoğlu Data Science Institute and Department of Computer Science and Engineering\, UC San Diego \n3:00 – 3:20 pm: Invited talk by Dr. Salimeh Sekeh\, Associate Professor of Computer Science\, San Diego State University \n3:20 – 3:30 pm: Lightning talks by poster presenters \n3:30 – 4:00 pm: Poster Session and Networking with Refreshments
URL:https://tilos.ai/event/tilos-sdsu-expandai-workshop/
LOCATION:Qualcomm Conference Center (Jacobs Hall first floor)\, 9736 Engineers Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2026/02/Untitled-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260325T110000
DTEND;TZID=America/Los_Angeles:20260325T120000
DTSTAMP:20260403T104812Z
CREATED:20260310T175540Z
LAST-MODIFIED:20260326T215133Z
UID:8191-1774436400-1774440000@tilos.ai
SUMMARY:TILOS-SDSU Seminar: Autopilots Need Parachutes: Reliability Lessons from LLM-Automated Embedded AI Systems
DESCRIPTION:Roberto Morabito\, EURECOM \nAbstract: Embedded AI systems are becoming increasingly complex to develop and maintain\, requiring specialized workflows that span data processing\, model conversion\, optimization\, and deployment across heterogeneous hardware platforms. Recently\, large language models have emerged as a promising tool to automate parts of this lifecycle. In this talk\, I present recent work investigating the use of generative AI models as orchestration agents for embedded machine learning pipelines. Using an automated system that leverages LLMs to generate and iteratively refine software artifacts for embedded platforms\, we evaluate the feasibility of automating key stages of the AI lifecycle. Our empirical results reveal both the promise and the limitations of this approach. Generative models can significantly accelerate development workflows. However\, they also introduce instability\, iterative failure modes\, and unpredictable operational costs. I will discuss the main failure patterns observed in practice and outline research directions aimed at improving reliability through hybrid reasoning frameworks and system-level feedback mechanisms. \n\nRoberto Morabito is an Assistant Professor in the Networked Systems group of the Communication Systems Department at EURECOM\, France\, and a Docent at the University of Helsinki. Before joining EURECOM\, he was a Senior Researcher in the Department of Computer Science at the University of Helsinki. Earlier in his career\, he spent eight years at Ericsson Research Finland\, where he worked on cloud platforms\, IoT systems\, and cyber-physical systems. He received his PhD in Networking Technology from Aalto University in 2019 and was a postdoctoral researcher at the EDGE Lab\, School of Electrical and Computer Engineering\, Princeton University. \nHis research lies at the intersection of networked systems\, edge computing\, and distributed AI\, focusing on the design and lifecycle management of AI systems operating under computing and networking resource constraints.
URL:https://tilos.ai/event/tilos-sdsu-seminar-autopilots-need-parachutes-reliability-lessons-from-llm-automated-embedded-ai-systems/
LOCATION:Lamden Hall 341 (SDSU) and Virtual\, San Diego\, CA\, 92182\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/03/morabito-roberto-e1773165764846.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260327T100000
DTEND;TZID=America/Los_Angeles:20260327T110000
DTSTAMP:20260403T104812Z
CREATED:20260317T231250Z
LAST-MODIFIED:20260331T142721Z
UID:8222-1774605600-1774609200@tilos.ai
SUMMARY:TILOS-Optimization for ML and AI Seminar: Implicit bias results for Muon\, Adam\, and Friends
DESCRIPTION:Matus Telgarsky\, New York University \nAbstract: This talk will give both an empirical overview and a few simple bounds controlling the optimization path\, or implicit bias\, of modern optimization methods such as Adam and Muon (and Friends). The talk will begin with empirical results demonstrating the implicit bias phenomenon with shallow networks and also transformers combined with chain-of-thought. The talk will then briefly survey a few mathematical implicit bias analyses of nonlinear networks\, which unfortunately do not carry through to transformers. As such\, the talk concludes with a technical portion presenting another approach to analyzing these optimization methods in the linear case\, providing generic implicit bias results for them\, and empirically demonstrating hope that this particular methodology can carry over to the nonlinear case. \n\nMatus Telgarsky is an Associate Professor of Computer Science at the Courant Institute of Mathematical Sciences at NYU\, specializing in deep learning theory. The highlight of his academic career was completing a PhD under Sanjoy Dasgupta at UC San Diego. Adventures since then include co-chairing the Midwest ML Symposium in 2017 with Po-Ling Loh\, and chairing two semester-long Simons Institute programs at UC Berkeley. Accolades include a 2018 NSF CAREER Award and delivering a COLT 2025 keynote.
URL:https://tilos.ai/event/tilos-optimization-for-ml-and-ai-seminar-implicit-bias-results-for-muon-adam-and-friends/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/03/telgarsky-matus-e1773789078482.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260408T110000
DTEND;TZID=America/Los_Angeles:20260408T120000
DTSTAMP:20260403T104812Z
CREATED:20251008T180712Z
LAST-MODIFIED:20260330T151101Z
UID:7641-1775646000-1775649600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Engineering Interpretable and Faithful AI Systems
DESCRIPTION:René Vidal\, University of Pennsylvania \nAbstract: Large Language Models (LLMs) and Vision Language Models (VLMs) have achieved remarkable performance across a wide range of tasks. However\, their growing deployment has exposed fundamental limitations in faithfulness\, safety\, and transparency. In this talk\, I will present a unified perspective on addressing these challenges through principled model interventions and interpretable decision-making frameworks. I first introduce Information Pursuit (IP)\, an interpretable-by-design prediction framework that replaces opaque reasoning with a sequence of informative\, user-interpretable queries\, yielding concise explanations alongside accurate predictions. I then present Parsimonious Concept Engineering (PaCE)\, an approach that improves faithfulness and alignment by selectively removing undesirable internal activations\, mitigating hallucinations and biased language while preserving linguistic competence. Results across text\, vision\, and medical tasks illustrate how these ideas advance transparency without sacrificing performance. Together\, these contributions point toward a broader direction for building AI systems that are powerful\, faithful\, and aligned with human values. \n\nRené Vidal is the Penn Integrates Knowledge and Rachleff University Professor of Electrical and Systems Engineering and Radiology at the University of Pennsylvania\, where he directs the Center for Innovation in Data Engineering and Science (IDEAS) and serves as Co-Chair of Penn AI. He is also an Amazon Scholar\, Affiliated Chief Scientist at NORCE\, and former Associate Editor-in-Chief of IEEE Transactions on Pattern Analysis and Machine Intelligence. Professor Vidal’s research advances the mathematical foundations of deep learning and trustworthy AI\, with broad impact across computer vision and biomedical data science. His contributions have been recognized with major honors\, including the IEEE Edward J. McCluskey Technical Achievement Award\, the D’Alembert Faculty Award\, the J.K. Aggarwal Prize\, the ONR Young Investigator Award\, the NSF CAREER Award\, and best paper awards in machine learning\, computer vision\, signal processing\, control\, and medical robotics. He is a Fellow of ACM\, AIMBE\, IEEE\, and IAPR\, and a Sloan Fellow. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-engineering-interpretable-and-faithful-ai-systems/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/rene-vidal-e1759946821354.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260410
DTEND;VALUE=DATE:20260411
DTSTAMP:20260403T104812Z
CREATED:20260217T204006Z
LAST-MODIFIED:20260304T223925Z
UID:8073-1775779200-1775865599@tilos.ai
SUMMARY:2026 Robotics Summit: The Next 25 Years of Robotics
DESCRIPTION:
URL:https://tilos.ai/event/2026-robotics-summit-the-next-25-years-of-robotics/
LOCATION:University of Pennsylvania School of Engineering and Applied Science\, Philadelphia\, PA\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2026/02/robotics-summit.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260410T100000
DTEND;TZID=America/Los_Angeles:20260410T110000
DTSTAMP:20260403T104812Z
CREATED:20250923T164943Z
LAST-MODIFIED:20260326T182945Z
UID:7602-1775815200-1775818800@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: A survey of the mixing times of the Proximal Sampler algorithm
DESCRIPTION:Andre Wibisono\, Yale University \nAbstract: Sampling is a fundamental algorithmic task with many connections to optimization. In this talk\, we survey a recent algorithm for sampling known as the Proximal Sampler\, which can be seen as a proximal discretization of the continuous-time Langevin dynamics\, and achieves the current state-of-the-art iteration complexity for sampling in discrete time. We survey the mixing time guarantees of the Proximal Sampler algorithm and show they match the guarantees for the Langevin dynamics. When the target distribution satisfies log-concavity or isoperimetry\, the Proximal Sampler has rapid convergence guarantees. We illustrate the proof technique via the strong data processing inequality along the Gaussian channel and its time reversal under isoperimetry. \n\nAndre Wibisono is an assistant professor in the Department of Computer Science at Yale University\, with a secondary appointment in the Department of Statistics & Data Science. His research interests are in the design and analysis of algorithms for machine learning\, in particular for problems in optimization\, sampling\, and game theory. He received his BS degrees in Mathematics and in Computer Science from MIT\, his MEng in Computer Science from MIT\, his MA in Statistics from UC Berkeley\, and his PhD in Computer Science from UC Berkeley. He has done postdoctoral research at the University of Wisconsin-Madison and at the Georgia Institute of Technology. \nZoom: https://bit.ly/Opt-AI-ML
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-a-survey-of-the-mixing-times-of-the-proximal-sampler-algorithm/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/wibisono-andre-e1758646059816.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260427
DTEND;VALUE=DATE:20260428
DTSTAMP:20260403T104812Z
CREATED:20260121T215144Z
LAST-MODIFIED:20260402T165134Z
UID:8026-1777248000-1777334399@tilos.ai
SUMMARY:ICLR 2026 Workshop: Principled Design for Trustworthy AI - Interpretability\, Robustness\, and Safety across Modalities
DESCRIPTION:Modern AI systems\, particularly large language models\, vision-language models\, and deep vision networks\, are increasingly deployed in high-stakes settings such as healthcare\, autonomous driving\, and legal decisions. Yet\, their lack of transparency\, fragility to distributional shifts between train/test environments\, and representation misalignment in emerging tasks and data/feature modalities raise serious concerns about their trustworthiness. \nThis workshop focuses on developing trustworthy AI systems by principled design: models that are interpretable\, robust\, and aligned across the full lifecycle – from training and evaluation to inference-time behavior and deployment. We aim to unify efforts across modalities (language\, vision\, audio\, and time series) and across technical areas of trustworthiness spanning interpretability\, robustness\, uncertainty\, and safety.
URL:https://tilos.ai/event/iclr-2026-workshop-principled-design-for-trustworthy-ai-interpretability-robustness-and-safety-across-modalities/
LOCATION:ICLR 2026\, Riocentro Convention and Event Center\, Rio de Janeiro\, Brazil
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/01/rio.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260506T110000
DTEND;TZID=America/Los_Angeles:20260506T120000
DTSTAMP:20260403T104812Z
CREATED:20251013T161935Z
LAST-MODIFIED:20251014T195232Z
UID:7644-1778065200-1778068800@tilos.ai
SUMMARY:TILOS-HDSI Seminar with Ellen Vitercik (Stanford)
DESCRIPTION:Title and abstract TBA… \n\nEllen Vitercik is an Assistant Professor at Stanford with a joint appointment between the Management Science and Engineering department and the Computer Science department. Her research interests include machine learning\, algorithm design\, discrete and combinatorial optimization\, and the interface between economics and computation. Before joining Stanford\, Dr. Vitercik was a Miller Fellow at UC Berkeley\, hosted by Michael Jordan and Jennifer Chayes. She received a PhD in Computer Science from Carnegie Mellon University\, advised by Nina Balcan and Tuomas Sandholm. Dr. Vitercik has been recognized with a Schmidt Sciences AI2050 Early Career Fellowship and an NSF CAREER award. Her thesis won the SIGecom Doctoral Dissertation Award\, the CMU School of Computer Science Distinguished Dissertation Award\, and the Honorable Mention Victor Lesser Distinguished Dissertation Award. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-ellen-vitercik-stanford/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/vitericik-ellen-e1760372346890.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260513T110000
DTEND;TZID=America/Los_Angeles:20260513T120000
DTSTAMP:20260403T104812
CREATED:20260223T175317Z
LAST-MODIFIED:20260310T183326Z
UID:8092-1778670000-1778673600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: ComPO: Preference Alignment via Comparison Oracles
DESCRIPTION:Tianyi Lin\, Columbia University \nDirect alignment methods are increasingly used for aligning large language models (LLMs) with human preferences. However\, these methods suffer from likelihood displacement\, which can be driven by noisy preference pairs that induce similar likelihoods for preferred and dis-preferred responses. To address this issue\, we consider derivative-free optimization based on comparison oracles. First\, we propose a new preference alignment method via comparison oracles and provide convergence guarantees for its basic mechanism. Second\, we improve our method using heuristics and conduct experiments to demonstrate the flexibility and compatibility of practical mechanisms in improving the performance of LLMs trained with noisy preference pairs. Evaluations are conducted across multiple base and instruction-tuned models on different benchmarks. Experimental results show the effectiveness of our method as an alternative that addresses the limitations of existing methods. A highlight of our work is that we demonstrate the importance of designing specialized methods for preference pairs with distinct likelihood margins. \n\nTianyi Lin is an assistant professor in the Department of Industrial Engineering and Operations Research (IEOR) at Columbia University. His research interests lie in generative artificial intelligence\, optimization for machine learning\, game theory\, social and economic networks\, and optimal transport. He obtained his Ph.D. in Electrical Engineering and Computer Science at UC Berkeley\, where he was advised by Professor Michael Jordan and was associated with the Berkeley Artificial Intelligence Research (BAIR) group. From 2023 to 2024\, he was a postdoctoral researcher at the Laboratory for Information & Decision Systems (LIDS) at the Massachusetts Institute of Technology\, working with Professor Asuman Ozdaglar. Prior to that\, he received a B.S. in Mathematics from Nanjing University\, an M.S. in Pure Mathematics and Statistics from the University of Cambridge\, and an M.S. in Operations Research from UC Berkeley. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-compo-preference-alignment-via-comparison-oracles/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/02/lin-tianyi-e1771869179855.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260520T110000
DTEND;TZID=America/Los_Angeles:20260520T120000
DTSTAMP:20260403T104812
CREATED:20260227T004426Z
LAST-MODIFIED:20260227T004426Z
UID:8112-1779274800-1779278400@tilos.ai
SUMMARY:TILOS-HDSI Seminar with Andrej Risteski (Carnegie Mellon)
DESCRIPTION:Title and abstract TBA… \n\nAndrej Risteski is an Associate Professor in the Machine Learning Department at Carnegie Mellon University. Prior to that\, he was a Norbert Wiener Research Fellow jointly in the Applied Math department and IDSS at MIT. Dr. Risteski received his PhD from the Computer Science Department at Princeton University under the advisement of Sanjeev Arora. \nDr. Risteski’s research interests lie at the intersection of machine learning\, statistics\, and theoretical computer science\, spanning topics like (probabilistic) generative models\, algorithmic tools for learning and inference\, representation and self-supervised learning\, out-of-distribution generalization\, and applications of neural approaches to natural language processing and scientific domains. The broad goal of his research is a principled and mathematical understanding of statistical and algorithmic problems arising in modern machine learning paradigms. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-andrej-risteski-carnegie-mellon/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2026/02/risteski-andrej-e1772152946152.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260603
DTEND;VALUE=DATE:20260604
DTSTAMP:20260403T104812
CREATED:20260224T210719Z
LAST-MODIFIED:20260324T200137Z
UID:8102-1780444800-1780531199@tilos.ai
SUMMARY:CVPR 2026 Workshop: Trustworthy\, Robust\, Uncertainty-Aware\, and Explainable Visual Intelligence and Beyond (TRUE-V)
DESCRIPTION:Contemporary vision models and vision–language models are increasingly deployed in high-stakes domains\, yet remain opaque\, fragile\, and difficult to align across tasks and modalities. This workshop aims to foster dialogue on the urgent need for transparent\, reliable\, and safe computer vision systems\, especially in critical domains such as healthcare\, transportation\, and legal decision-making. It brings together research on interpretability\, robustness\, uncertainty\, and alignment under a unified design paradigm\, encouraging cross-disciplinary exchange on shared technical and societal challenges. By promoting responsible design and deployment\, the workshop seeks to advance forward-looking solutions for visual intelligence that enhance accountability and public trust.
URL:https://tilos.ai/event/cvpr-2026-workshop/
LOCATION:IEEE/CVF Conference on Computer Vision and Pattern Recognition\, Denver\, CO\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/02/CVPR_Denver_2026.jpg
END:VEVENT
END:VCALENDAR