BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260327T100000
DTEND;TZID=America/Los_Angeles:20260327T110000
DTSTAMP:20260404T035523Z
CREATED:20260317T231250Z
LAST-MODIFIED:20260331T142721Z
UID:8222-1774605600-1774609200@tilos.ai
SUMMARY:TILOS-Optimization for ML and AI Seminar: Implicit bias results for Muon\, Adam\, and Friends
DESCRIPTION:Matus Telgarsky\, New York University \nAbstract: This talk will give both an empirical overview and a few simple bounds controlling the optimization path\, or implicit bias\, of modern optimization methods such as Adam and Muon (and Friends). The talk will begin with empirical results demonstrating the implicit bias phenomenon with shallow networks and also transformers combined with chain-of-thought. The talk will then briefly survey a few mathematical implicit bias analyses of nonlinear networks\, which unfortunately do not carry through to transformers. As such\, the talk concludes with a technical portion presenting another approach to analyzing these optimization methods in the linear case\, providing generic implicit bias results for them\, and empirically demonstrating hope that this particular methodology can carry over to the nonlinear case. \n\nMatus Telgarsky is an Associate Professor of Computer Science at the Courant Institute of Mathematical Sciences at NYU\, specializing in deep learning theory. The highlight of his academic career was completing a PhD under Sanjoy Dasgupta at UC San Diego. Adventures since then include co-chairing the Midwest ML Symposium in 2017 with Po-Ling Loh\, and chairing two semester-long Simons Institute Programs at UC Berkeley. Accolades include a 2018 NSF CAREER Award and delivering a COLT 2025 keynote.
URL:https://tilos.ai/event/tilos-optimization-for-ml-and-ai-seminar-implicit-bias-results-for-muon-adam-and-friends/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/03/telgarsky-matus-e1773789078482.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260408T110000
DTEND;TZID=America/Los_Angeles:20260408T120000
DTSTAMP:20260404T035523Z
CREATED:20251008T180712Z
LAST-MODIFIED:20260330T151101Z
UID:7641-1775646000-1775649600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Engineering Interpretable and Faithful AI Systems
DESCRIPTION:René Vidal\, University of Pennsylvania \nAbstract: Large Language Models (LLMs) and Vision Language Models (VLMs) have achieved remarkable performance across a wide range of tasks. However\, their growing deployment has exposed fundamental limitations in faithfulness\, safety\, and transparency. In this talk\, I will present a unified perspective on addressing these challenges through principled model interventions and interpretable decision-making frameworks. I first introduce Information Pursuit (IP)\, an interpretable-by-design prediction framework that replaces opaque reasoning with a sequence of informative\, user-interpretable queries\, yielding concise explanations alongside accurate predictions. I then present Parsimonious Concept Engineering (PaCE)\, an approach that improves faithfulness and alignment by selectively removing undesirable internal activations\, mitigating hallucinations and biased language while preserving linguistic competence. Results across text\, vision\, and medical tasks illustrate how these ideas advance transparency without sacrificing performance. Together\, these contributions point toward a broader direction for building AI systems that are powerful\, faithful\, and aligned with human values. \n\nRené Vidal is the Penn Integrates Knowledge and Rachleff University Professor of Electrical and Systems Engineering and Radiology at the University of Pennsylvania\, where he directs the Center for Innovation in Data Engineering and Science (IDEAS) and serves as Co-Chair of Penn AI. He is also an Amazon Scholar\, Affiliated Chief Scientist at NORCE\, and former Associate Editor-in-Chief of IEEE Transactions on Pattern Analysis and Machine Intelligence. Professor Vidal’s research advances the mathematical foundations of deep learning and trustworthy AI\, with broad impact across computer vision and biomedical data science. His contributions have been recognized with major honors\, including the IEEE Edward J. McCluskey Technical Achievement Award\, the D’Alembert Faculty Award\, the J.K. Aggarwal Prize\, the ONR Young Investigator Award\, the NSF CAREER Award\, and best paper awards in machine learning\, computer vision\, signal processing\, control\, and medical robotics. He is a Fellow of ACM\, AIMBE\, IEEE\, and IAPR\, and a Sloan Fellow. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-engineering-interpretable-and-faithful-ai-systems/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/rene-vidal-e1759946821354.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260410T100000
DTEND;TZID=America/Los_Angeles:20260410T110000
DTSTAMP:20260404T035523Z
CREATED:20250923T164943Z
LAST-MODIFIED:20260326T182945Z
UID:7602-1775815200-1775818800@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: A survey of the mixing times of the Proximal Sampler algorithm
DESCRIPTION:Andre Wibisono\, Yale University \nAbstract: Sampling is a fundamental algorithmic task with many connections to optimization. In this talk\, we survey a recent algorithm for sampling known as the Proximal Sampler\, which can be seen as a proximal discretization of the continuous-time Langevin dynamics\, and achieves the current state-of-the-art iteration complexity for sampling in discrete time. We survey the mixing time guarantees of the Proximal Sampler algorithm and show they match the guarantees for the Langevin dynamics. When the target distribution satisfies log-concavity or isoperimetry\, the Proximal Sampler has rapid convergence guarantees. We illustrate the proof technique via the strong data processing inequality along the Gaussian channel and its time reversal under isoperimetry. \n\nAndre Wibisono is an assistant professor in the Department of Computer Science at Yale University\, with a secondary appointment in the Department of Statistics & Data Science. His research interests are in the design and analysis of algorithms for machine learning\, in particular for problems in optimization\, sampling\, and game theory. He received his BS degrees in Mathematics and in Computer Science from MIT\, his MEng in Computer Science from MIT\, his MA in Statistics from UC Berkeley\, and his PhD in Computer Science from UC Berkeley. He has done postdoctoral research at the University of Wisconsin-Madison and at the Georgia Institute of Technology. \nZoom: https://bit.ly/Opt-AI-ML
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-a-survey-of-the-mixing-times-of-the-proximal-sampler-algorithm/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/wibisono-andre-e1758646059816.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260506T110000
DTEND;TZID=America/Los_Angeles:20260506T120000
DTSTAMP:20260404T035523Z
CREATED:20251013T161935Z
LAST-MODIFIED:20251014T195232Z
UID:7644-1778065200-1778068800@tilos.ai
SUMMARY:TILOS-HDSI Seminar with Ellen Vitercik (Stanford)
DESCRIPTION:Title and abstract TBA… \n\nEllen Vitercik is an Assistant Professor at Stanford with a joint appointment between the Management Science and Engineering department and the Computer Science department. Her research interests include machine learning\, algorithm design\, discrete and combinatorial optimization\, and the interface between economics and computation. Before joining Stanford\, Dr. Vitercik was a Miller fellow at UC Berkeley\, hosted by Michael Jordan and Jennifer Chayes. She received a PhD in Computer Science from Carnegie Mellon University\, advised by Nina Balcan and Tuomas Sandholm. Dr. Vitercik has been recognized by a Schmidt Sciences AI2050 Early Career Fellowship and an NSF CAREER award. Her thesis won the SIGecom Doctoral Dissertation Award\, the CMU School of Computer Science Distinguished Dissertation Award\, and the Honorable Mention Victor Lesser Distinguished Dissertation Award. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-ellen-vitercik-stanford/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/vitericik-ellen-e1760372346890.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260513T110000
DTEND;TZID=America/Los_Angeles:20260513T120000
DTSTAMP:20260404T035523Z
CREATED:20260223T175317Z
LAST-MODIFIED:20260310T183326Z
UID:8092-1778670000-1778673600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: ComPO: Preference Alignment via Comparison Oracles
DESCRIPTION:Tianyi Lin\, Columbia University \nAbstract: Direct alignment methods are increasingly used for aligning large language models (LLMs) with human preferences. However\, these methods suffer from likelihood displacement\, which can be driven by noisy preference pairs that induce similar likelihoods for preferred and dis-preferred responses. To address this issue\, we consider derivative-free optimization based on comparison oracles. First\, we propose a new preference alignment method via comparison oracles and provide convergence guarantees for its basic mechanism. Second\, we improve our method using heuristics and conduct experiments to demonstrate the flexibility and compatibility of practical mechanisms in improving the performance of LLMs trained on noisy preference pairs. Evaluations are conducted across multiple base and instruction-tuned models on different benchmarks. Experimental results show the effectiveness of our method as an alternative that addresses the limitations of existing methods. A highlight of our work is that we demonstrate the importance of designing specialized methods for preference pairs with distinct likelihood margins. \n\nTianyi Lin is an assistant professor in the Department of Industrial Engineering and Operations Research (IEOR) at Columbia University. His research interests lie in generative artificial intelligence\, optimization for machine learning\, game theory\, social and economic networks\, and optimal transport. He obtained his Ph.D. in Electrical Engineering and Computer Science at UC Berkeley\, where he was advised by Professor Michael Jordan and was associated with the Berkeley Artificial Intelligence Research (BAIR) group. From 2023 to 2024\, he was a postdoctoral researcher at the Laboratory for Information & Decision Systems (LIDS) at the Massachusetts Institute of Technology\, working with Professor Asuman Ozdaglar. Prior to that\, he received a B.S. in Mathematics from Nanjing University\, an M.S. in Pure Mathematics and Statistics from the University of Cambridge\, and an M.S. in Operations Research from UC Berkeley. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-compo-preference-alignment-via-comparison-oracles/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/02/lin-tianyi-e1771869179855.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260520T110000
DTEND;TZID=America/Los_Angeles:20260520T120000
DTSTAMP:20260404T035523Z
CREATED:20260227T004426Z
LAST-MODIFIED:20260227T004426Z
UID:8112-1779274800-1779278400@tilos.ai
SUMMARY:TILOS-HDSI Seminar with Andrej Risteski (Carnegie Mellon)
DESCRIPTION:Title and abstract TBA… \n\nAndrej Risteski is an Associate Professor at the Machine Learning Department in Carnegie Mellon University. Prior to that\, he was a Norbert Wiener Research Fellow jointly in the Applied Math department and IDSS at MIT. Dr. Risteski received his PhD in the Computer Science Department at Princeton University under the advisement of Sanjeev Arora. \nDr. Risteski’s research interests lie in the intersection of machine learning\, statistics\, and theoretical computer science\, spanning topics like (probabilistic) generative models\, algorithmic tools for learning and inference\, representation and self-supervised learning\, out-of-distribution generalization and applications of neural approaches to natural language processing and scientific domains. The broad goal of his research is principled and mathematical understanding of statistical and algorithmic problems arising in modern machine learning paradigms. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-andrej-risteski-carnegie-mellon/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2026/02/risteski-andrej-e1772152946152.png
END:VEVENT
END:VCALENDAR