BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T090000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260410T100000
DTEND;TZID=America/Los_Angeles:20260410T110000
DTSTAMP:20260423T131346Z
CREATED:20250923T164943Z
LAST-MODIFIED:20260413T174404Z
UID:7602-1775815200-1775818800@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: A survey of the mixing times of the Proximal Sampler algorithm
DESCRIPTION:Andre Wibisono\, Yale University \nAbstract: Sampling is a fundamental algorithmic task with many connections to optimization. In this talk\, we survey a recent algorithm for sampling known as the Proximal Sampler\, which can be seen as a proximal discretization of the continuous-time Langevin dynamics\, and achieves the current state-of-the-art iteration complexity for sampling in discrete time. We survey the mixing time guarantees of the Proximal Sampler algorithm and show they match the guarantees for the Langevin dynamics. When the target distribution satisfies log-concavity or isoperimetry\, the Proximal Sampler has rapid convergence guarantees. We illustrate the proof technique via the strong data processing inequality along the Gaussian channel and its time reversal under isoperimetry. \n\nAndre Wibisono is an assistant professor in the Department of Computer Science at Yale University\, with a secondary appointment in the Department of Statistics & Data Science. His research interests are in the design and analysis of algorithms for machine learning\, in particular for problems in optimization\, sampling\, and game theory. He received his BS degrees in Mathematics and in Computer Science from MIT\, his MEng in Computer Science from MIT\, his MA in Statistics from UC Berkeley\, and his PhD in Computer Science from UC Berkeley. He has done postdoctoral research at the University of Wisconsin-Madison and at the Georgia Institute of Technology.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-a-survey-of-the-mixing-times-of-the-proximal-sampler-algorithm/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/wibisono-andre-e1758646059816.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260410
DTEND;VALUE=DATE:20260411
DTSTAMP:20260423T131346Z
CREATED:20260217T204006Z
LAST-MODIFIED:20260304T223925Z
UID:8073-1775779200-1775865599@tilos.ai
SUMMARY:2026 Robotics Summit: The Next 25 Years of Robotics
DESCRIPTION:
URL:https://tilos.ai/event/2026-robotics-summit-the-next-25-years-of-robotics/
LOCATION:University of Pennsylvania School of Engineering and Applied Science\, Philadelphia\, PA\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2026/02/robotics-summit.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260327T100000
DTEND;TZID=America/Los_Angeles:20260327T110000
DTSTAMP:20260423T131346Z
CREATED:20260317T231250Z
LAST-MODIFIED:20260331T142721Z
UID:8222-1774605600-1774609200@tilos.ai
SUMMARY:TILOS-Optimization for ML and AI Seminar: Implicit bias results for Muon\, Adam\, and Friends
DESCRIPTION:Matus Telgarsky\, New York University \nAbstract: This talk will give both an empirical overview and a few simple bounds controlling the optimization path\, or implicit bias\, of modern optimization methods such as Adam and Muon (and Friends). The talk will begin with empirical results demonstrating the implicit bias phenomenon with shallow networks and also transformers combined with chain-of-thought. The talk will then briefly survey a few mathematical implicit bias analyses of nonlinear networks\, which unfortunately do not carry through to transformers. As such\, the talk concludes with a technical portion presenting another approach to analyzing these optimization methods in the linear case\, providing generic implicit bias results for them\, and empirically demonstrating hope that this particular methodology can carry over to the nonlinear case. \n\nMatus Telgarsky is an Associate Professor of Computer Science at the Courant Institute of Mathematical Sciences at NYU\, specializing in deep learning theory. The highlight of his academic career was completing a PhD under Sanjoy Dasgupta at UC San Diego. Adventures since then include co-chairing the Midwest ML Symposium in 2017 with Po-Ling Loh\, and chairing two semester-long Simons Institute Programs at UC Berkeley. Accolades include a 2018 NSF Career Award and delivering a COLT 2025 keynote.
URL:https://tilos.ai/event/tilos-optimization-for-ml-and-ai-seminar-implicit-bias-results-for-muon-adam-and-friends/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/03/telgarsky-matus-e1773789078482.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260325T110000
DTEND;TZID=America/Los_Angeles:20260325T120000
DTSTAMP:20260423T131346Z
CREATED:20260310T175540Z
LAST-MODIFIED:20260326T215133Z
UID:8191-1774436400-1774440000@tilos.ai
SUMMARY:TILOS-SDSU Seminar: Autopilots Need Parachutes: Reliability Lessons from LLM-Automated Embedded AI Systems
DESCRIPTION:Roberto Morabito\, EURECOM \nAbstract: Embedded AI systems are becoming increasingly complex to develop and maintain\, requiring specialized workflows that span data processing\, model conversion\, optimization\, and deployment across heterogeneous hardware platforms. Recently\, large language models have emerged as a promising tool to automate parts of this lifecycle. In this talk\, I present recent work investigating the use of generative AI models as orchestration agents for embedded machine learning pipelines. Using an automated system that leverages LLMs to generate and iteratively refine software artifacts for embedded platforms\, we evaluate the feasibility of automating key stages of the AI lifecycle. Our empirical results reveal both the promise and the limitations of this approach. Generative models can significantly accelerate development workflows. However\, they also introduce instability\, iterative failure modes\, and unpredictable operational costs. I will discuss the main failure patterns observed in practice and outline research directions aimed at improving reliability through hybrid reasoning frameworks and system-level feedback mechanisms. \n\nRoberto Morabito is an Assistant Professor in the Networked Systems group of the Communication Systems Department at EURECOM\, France\, and a Docent at the University of Helsinki. Before joining EURECOM\, he was a Senior Researcher in the Department of Computer Science at the University of Helsinki. Earlier in his career\, he spent eight years at Ericsson Research Finland\, where he worked on cloud platforms\, IoT systems\, and cyber-physical systems. He received his PhD in Networking Technology from Aalto University in 2019 and was a postdoctoral researcher at the EDGE Lab\, School of Electrical and Computer Engineering\, Princeton University. His research lies at the intersection of networked systems\, edge computing\, and distributed AI\, focusing on the design and lifecycle management of AI systems operating under computing and networking resource constraints.
URL:https://tilos.ai/event/tilos-sdsu-seminar-autopilots-need-parachutes-reliability-lessons-from-llm-automated-embedded-ai-systems/
LOCATION:Lamden Hall 341 (SDSU) and Virtual\, San Diego\, CA\, 92182\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/03/morabito-roberto-e1773165764846.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260313T100000
DTEND;TZID=America/Los_Angeles:20260313T110000
DTSTAMP:20260423T131346Z
CREATED:20251014T200527Z
LAST-MODIFIED:20260313T183553Z
UID:7665-1773396000-1773399600@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Transformers Learn Generalizable Chain-of-Thought Reasoning via Gradient Descent
DESCRIPTION:Yuejie Chi\, Yale \nAbstract: Transformers have demonstrated remarkable chain-of-thought reasoning capabilities\, yet the underlying mechanisms by which they acquire and extrapolate these capabilities remain poorly understood. This talk presents a theoretical analysis of transformers trained via gradient descent for symbolic reasoning and state tracking tasks with increasing problem complexity. Our analysis reveals the coordination of multi-head attention to solve multiple subtasks in a single autoregressive path\, and the bootstrapping of inherently sequential reasoning through recursive self-training curriculum. Our optimization-based guarantees demonstrate that even shallow multi-head transformers\, with chain-of-thought\, can be trained to effectively solve problems that would otherwise require deeper architectures. \n\nYuejie Chi is the Charles C. and Dorothea S. Dilley Professor of Statistics and Data Science at Yale University\, with a secondary appointment in Computer Science\, and a member of the Yale Institute for Foundations of Data Science. Before joining Yale\, Dr. Chi was the Sense of Wonder Group Endowed Professor of Electrical and Computer Engineering in AI Systems at Carnegie Mellon University\, with affiliation in MLD and CyLab. She also spent some time as a visiting researcher at Meta’s Fundamental AI Research (FAIR). Dr. Chi’s research interests lie in the theoretical and algorithmic foundations of data science\, generative AI\, reinforcement learning\, and signal processing\, motivated by applications in scientific and engineering domains. Her current focus is on improving the performance\, efficiency and reliability of generative AI and decision making\, driven by data-intensive but resource-constrained scenarios.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-transformers-learn-generalizable-chain-of-thought-reasoning-via-gradient-descent/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/chi-yuejie-e1760472307997.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260227T110000
DTEND;TZID=America/Los_Angeles:20260227T120000
DTSTAMP:20260423T131346Z
CREATED:20251003T192706Z
LAST-MODIFIED:20260304T205819Z
UID:7637-1772190000-1772193600@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: (De)regularized Wasserstein Gradient Flows via Reproducing Kernels
DESCRIPTION:Bharath Sriperumbudur\, Pennsylvania State University \nAbstract: Wasserstein gradient flows have become a popular tool in machine learning with applications in sampling\, variational inference\, generative modeling\, and reinforcement learning\, among others. The Wasserstein gradient flow (WGF) involves minimizing a probability functional over the Wasserstein space (by taking into account the intrinsic geometry of the Wasserstein space). In this work\, we introduce approximate/regularized Wasserstein gradient flows in two different settings: (a) approximate the probability functional and (b) approximate the Wasserstein geometry. In (a)\, we consider the probability functional to be chi^2-divergence\, whose WGF is difficult to implement. To this end\, we propose a (de)-regularization of the Maximum Mean Discrepancy (DrMMD) as an approximation of chi^2-divergence and develop an approximate WGF\, which is easy to implement and has applications in generative modeling. On the other hand\, in the setting of (b)\, we use Kullback-Leibler divergence as the probability functional and develop an approximation to the Wasserstein geometry\, which allows for a more efficient implementation than the exact WGF\, with applications in sampling. In both settings\, we present a variety of theoretical results that relate the approximate flow to the exact flow and demonstrate the superiority of the approximate flows via numerical simulations. \n\nBharath Sriperumbudur is a professor in the Department of Statistics (with a courtesy appointment in the Department of Mathematics) at the Pennsylvania State University. His research interests include non-parametric statistics\, machine learning\, statistical learning theory\, optimal transport and gradient flows\, regularization and inverse problems\, reproducing kernel spaces in probability and statistics\, functional and topological data analysis.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-bharath-sriperumbudur-penn-state/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/sriperumbudur-bharath-e1759519613665.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260206T110000
DTEND;TZID=America/Los_Angeles:20260206T120000
DTSTAMP:20260423T131346Z
CREATED:20251014T201307Z
LAST-MODIFIED:20260304T210204Z
UID:7668-1770375600-1770379200@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Extended Convex Lifting for Policy Optimization in Control
DESCRIPTION:Yang Zheng\, UC San Diego \nAbstract: Direct policy search has achieved great empirical success in reinforcement learning. Many recent studies have revisited its theoretical foundation for continuous control\, which reveals elegant nonconvex geometry in various benchmark problems. In this talk\, we introduce an Extended Convex Lifting (ECL) framework\, which reveals hidden convexity in classical optimal and robust control problems from a modern optimization perspective. Our ECL offers a bridge between nonconvex policy optimization and convex reformulations. Despite non-convexity and non-smoothness\, the existence of an ECL not only reveals that minimizing the original function is equivalent to a convex problem\, but also certifies a class of first-order non-degenerate stationary points to be globally optimal. This ECL framework encompasses many benchmark control problems\, including LQR\, LQG\, state-feedback\, and output-feedback H-infinity robust control. We believe that the ECL framework may be of independent interest for analyzing nonconvex problems beyond control. \n\nYang Zheng is an Assistant Professor in the ECE Department at UC San Diego. His research focuses on control theory\, convex and nonconvex optimization\, and their applications to autonomous vehicles and traffic systems. He received his DPhil (Ph.D.) in Engineering Science from the University of Oxford in 2019\, and his B.E. and M.S. degrees from Tsinghua University in 2013 and 2015\, respectively. His work has been recognized with several awards\, including the 2019 European Ph.D. Award on Control for Complex and Heterogeneous Systems\, the 2022 Best Paper Award from IEEE Transactions on Control of Network Systems\, the 2023 Best Graduate Teacher Award from UC San Diego’s ECE Department\, the 2024 NSF CAREER Award\, and the 2025 Donald P. Eckman Award from the American Automatic Control Council.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-yang-zheng-uc-san-diego/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/zheng-yang-scaled-e1769464299795.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260130T110000
DTEND;TZID=America/Los_Angeles:20260130T120000
DTSTAMP:20260423T131346Z
CREATED:20251014T200143Z
LAST-MODIFIED:20260304T210210Z
UID:7663-1769770800-1769774400@tilos.ai
SUMMARY:[CANCELED] Optimization for ML and AI Seminar: Fantastic Pretraining Optimizers and Where to Find Them
DESCRIPTION:Tengyu Ma\, Stanford \nAbstract: AdamW has long been the dominant optimizer in language model pretraining\, despite numerous claims that alternative optimizers offer 1.4 to 2x speedup. We posit that two methodological shortcomings have obscured fair comparisons and hindered practical adoption: (i) unequal hyperparameter tuning and (ii) limited or misleading evaluation setups. To address these two issues\, we conduct a systematic study of ten deep learning optimizers across four model scales (0.1B-1.2B parameters) and data-to-model ratios (1-8x the Chinchilla optimum). We find that fair and informative comparisons require rigorous hyperparameter tuning and evaluations across a range of model scales and data-to-model ratios\, performed at the end of training. First\, optimal hyperparameters for one optimizer may be suboptimal for another\, making blind hyperparameter transfer unfair. Second\, the actual speedup of many proposed optimizers over well-tuned baselines is lower than claimed and decreases with model size to only 1.1x for 1.2B parameter models. Third\, comparing intermediate checkpoints before reaching the target training budgets can be misleading\, as rankings between two optimizers can flip during training due to learning rate decay. Through our thorough investigation\, we find that all the fastest optimizers\, such as Muon and Soap\, use matrices as preconditioners\, multiplying gradients with matrices rather than entry-wise scalars. However\, the speedup of matrix-based optimizers is inversely proportional to model scale\, decreasing from 1.4x over AdamW for 0.1B parameter models to merely 1.1x for 1.2B parameter models. \n\nTengyu Ma is an assistant professor of computer science at Stanford University. His research interests broadly include topics in machine learning\, algorithms and their theory\, such as deep learning\, (deep) reinforcement learning\, pre-training / foundation models\, robustness\, non-convex optimization\, distributed optimization\, and high-dimensional statistics. \nZoom: https://bit.ly/Opt-AI-ML
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-tengyu-ma-stanford/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/ma-tengyu-e1760473083457.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260111
DTEND;VALUE=DATE:20260117
DTSTAMP:20260423T131346Z
CREATED:20251209T233013Z
LAST-MODIFIED:20251209T233013Z
UID:7981-1768089600-1768607999@tilos.ai
SUMMARY:Gordon Research Conference on Embodied Intelligence
DESCRIPTION:The Robotics GRC is a premier\, international scientific conference focused on advancing the frontiers of science through the presentation of cutting-edge and unpublished research\, prioritizing time for discussion after each talk and fostering informal interactions among scientists of all career stages. The conference program includes an array of speakers and discussion leaders from institutions and organizations worldwide\, concentrating on the latest developments in the field. The conference is five days long and held in a remote location to increase the sense of camaraderie and create scientific communities\, with lasting collaborations and friendships. In addition to premier talks\, the conference has designated time for poster sessions from individuals of all career stages\, and afternoon free time and communal meals allow for informal networking opportunities with leaders in the field. \nThis year’s conference will focus on adaptive behavior and learning in animals and robots. We will explore how biological inspiration drives advancements in robotics\, from simple reactive behaviors to complex planning and learning systems. Insights from biomechanics\, neuroscience\, and animal studies are increasingly shaping the design and control of robots\, making them more robust and adaptable. A key focus will be on embodied intelligence\, enabling robots to excel in locomotion\, manipulation\, and interactions with other agents. \nConversely\, robotics research is also contributing to biology. Studies of perception and action in robotic systems are leading to new mathematical models that help integrative biologists understand locomotion\, manipulation\, and collective behavior in animals. \nBy bringing together experts from robotics\, biomechanics\, and neuroscience\, this conference aims to foster cross-disciplinary insights that will push the boundaries of both fields. \nJoin us for a dynamic exchange of ideas at the intersection of robotics and biology\, where engineering meets evolution.
URL:https://tilos.ai/event/gordon-research-conference-on-embodied-intelligence/
LOCATION:Four Points Sheraton / Holiday Inn Express\, 1050 Schooner Drive\, Ventura\, CA\, 93001\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/12/fourpoints.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260109T110000
DTEND;TZID=America/Los_Angeles:20260109T120000
DTSTAMP:20260423T131346Z
CREATED:20251014T195932Z
LAST-MODIFIED:20260304T210221Z
UID:7661-1767956400-1767960000@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Randomized linear algebra with subspace injections
DESCRIPTION:Joel Tropp\, Caltech \nAbstract: To achieve the greatest possible speed\, practitioners regularly implement randomized algorithms for low-rank approximation and least-squares regression with structured dimension reduction maps. This talk outlines a new perspective on structured dimension reduction\, based on the injectivity properties of the dimension reduction map. This approach provides sharper bounds for sparse dimension reduction maps\, and it leads to exponential improvements for tensor-product dimension reduction. Empirical evidence confirms that these types of structured random matrices offer exemplary performance for a range of synthetic problems and contemporary scientific applications. \nJoint work with Chris Camaño\, Ethan Epperly\, and Raphael Meyer; available at arXiv:2508.21189. \n\nJoel A. Tropp is Steele Family Professor of Applied & Computational Mathematics at the California Institute of Technology. His research centers on applied mathematics\, machine learning\, data science\, numerical algorithms\, and random matrix theory. Some of his best-known contributions include matching pursuit algorithms\, randomized SVD algorithms\, matrix concentration inequalities\, and statistical phase transitions. Prof. Tropp attained the Ph.D. degree in Computational Applied Mathematics at the University of Texas at Austin in 2004\, and he joined Caltech in 2007. He won the PECASE in 2008\, and he was recognized as a Highly Cited Researcher in Computer Science each year from 2014–2018. He is co-founder of the SIAM Journal on Mathematics of Data Science (SIMODS)\, and he was co-chair of the inaugural 2020 SIAM Conference on the Mathematics of Data Science. Prof. Tropp was elected SIAM Fellow in 2019\, IEEE Fellow in 2020\, and IMS Fellow in 2024. He received the 2025 Richard P. Feynman Prize for Excellence in Teaching at Caltech. He is an invited speaker at the 2026 International Congress of Mathematicians (ICM).
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-joel-tropp-caltech/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/tropp-joel-e1760471957302.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251205T110000
DTEND;TZID=America/Los_Angeles:20251205T120000
DTSTAMP:20260423T131346Z
CREATED:20251014T194842Z
LAST-MODIFIED:20260304T210702Z
UID:7652-1764932400-1764936000@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Stochastic-Gradient and Diagonal-Scaling Algorithms for Constrained Optimization and Learning
DESCRIPTION:Frank E. Curtis\, Lehigh University \nAbstract: I will motivate and provide an overview of recent efforts in my research group on the design and analysis of stochastic-gradient-based algorithms for solving constrained optimization problems. I will focus in particular on our motivation for informed supervised learning\, where constraints in the training problem can be used to impose prior knowledge on the properties that should be possessed by a trained prediction model. In addition\, I will provide a detailed look at our newest extensions of heavy-ball and Adam schemes from the unconstrained to the equality-constrained setting\, for which we have shown state-of-the-art convergence guarantees. I will demonstrate the impressive practical performance of our methods using a few informed supervised learning problems. \n\nFrank E. Curtis is a Professor in the Department of Industrial and Systems Engineering at Lehigh University\, where he has been employed since 2009. He received a bachelor’s degree from the College of William and Mary in 2003 with a double major in Computer Science and Mathematics\, received a master’s degree in 2004 and Ph.D. degree in 2007 from the Department of Industrial Engineering and Management Science at Northwestern University\, and spent two years as a Postdoctoral Researcher in the Courant Institute of Mathematical Sciences at New York University from 2007 until 2009. His research focuses on the design\, analysis\, and implementation of numerical methods for solving large-scale nonlinear optimization problems. He received an Early Career Award from the Advanced Scientific Computing Research (ASCR) program of the U.S. Department of Energy (DoE)\, and has received funding from various programs of the U.S. National Science Foundation (NSF)\, including through a TRIPODS Phase I grant awarded to him and his collaborators at Lehigh\, Northwestern\, and Boston University. He has also received funding from the U.S. Office of Naval Research (ONR) and DoE’s Advanced Research Projects Agency-Energy (ARPA-E). He received\, along with Leon Bottou (Meta AI) and Jorge Nocedal (Northwestern)\, the 2021 SIAM/MOS Lagrange Prize in Continuous Optimization. He was awarded\, with James V. Burke (U. of Washington)\, Adrian Lewis (Cornell)\, and Michael Overton (NYU)\, the 2018 INFORMS Computing Society Prize. He and team members Daniel Molzahn (Georgia Tech)\, Andreas Waechter (Northwestern)\, Ermin Wei (Northwestern)\, and Elizabeth Wong (UC San Diego) were awarded second place in the ARPA-E Grid Optimization Competition in 2020. He currently serves as Area Editor for Continuous Optimization for Mathematics of Operations Research and serves as an Associate Editor for Mathematical Programming\, SIAM Journal on Optimization\, Operations Research\, IMA Journal of Numerical Analysis\, and Mathematical Programming Computation. He previously served as the Vice Chair for Nonlinear Programming for the INFORMS Optimization Society\, and is currently very active in professional societies and groups related to mathematical optimization\, including INFORMS\, the Mathematics Optimization Society\, and the SIAM Activity Group on Optimization.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-frank-e-curtis-lehigh-university/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/curtis-frank-e1760471303881.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251204T120000
DTEND;TZID=America/Los_Angeles:20251204T140000
DTSTAMP:20260423T131346Z
CREATED:20251028T204347Z
LAST-MODIFIED:20251121T020250Z
UID:7692-1764849600-1764856800@tilos.ai
SUMMARY:Networking Lunch Reception at NeurIPS 2025
DESCRIPTION:TILOS will host a networking lunch reception during NeurIPS 2025 at Mezé Greek Fusion from 12:00-2:00pm on Thursday\, December 4\, 2025. This event is open to all NeurIPS attendees affiliated with any of the NSF AI Research Institutes\, as well as invited industry partners. Join us to connect with colleagues across the network of NSF AI Institutes\, share research interests\, and explore opportunities for collaboration. \nRegistration has closed. Please contact tilos@ucsd.edu with any questions. \nDate: Thursday\, December 4\, 2025 \nTime: 12:00 – 2:00pm PST \nLocation: Mezé Greek Fusion (3 blocks from the conference venue)
URL:https://tilos.ai/event/networking-lunch-reception-at-neurips-2025/
LOCATION:Mezé Greek Fusion\, San Diego\, CA\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251203T130000
DTEND;TZID=America/Los_Angeles:20251203T140000
DTSTAMP:20260423T131346Z
CREATED:20250930T163903Z
LAST-MODIFIED:20260304T210653Z
UID:7627-1764766800-1764770400@tilos.ai
SUMMARY:Optimization for AI and ML Seminar: Training Neural Networks at Any Scale
DESCRIPTION:Volkan Cevher\, École Polytechnique Fédérale de Lausanne \nAbstract: At the heart of deep learning’s transformative impact lies the concept of scale–encompassing both data and computational resources\, as well as their interaction with neural network architectures. Scale\, however\, presents critical challenges\, such as increased instability during training and prohibitively expensive model-specific tuning. Given the substantial resources required to train such models\, formulating high-confidence scaling hypotheses backed by rigorous theoretical research has become paramount. \nTo bridge theory and practice\, the talk explores a key mathematical ingredient of scaling in tandem with scaling theory: the numerical solution algorithms commonly employed in deep learning\, spanning domains from vision to language models. We unify these algorithms under a common master template\, making their foundational principles transparent. In doing so\, we reveal the interplay between adaptation to smoothness structures via online learning and the exploitation of optimization geometry through non-Euclidean norms. Our exposition moves beyond simply building larger models–it emphasizes strategic scaling\, offering insights that promise to advance the field while economizing on resources. \n\nVolkan Cevher received the B.Sc. (valedictorian) in electrical engineering from Bilkent University in Ankara\, Turkey\, in 1999 and the Ph.D. in electrical and computer engineering from the Georgia Institute of Technology in Atlanta\, GA in 2005. He was a Research Scientist with the University of Maryland\, College Park from 2006-2007 and also with Rice University in Houston\, TX\, from 2008-2009. Currently\, he is an Associate Professor at the Swiss Federal Institute of Technology Lausanne and a Faculty Fellow in the Electrical and Computer Engineering Department at Rice University. His research interests include machine learning\, signal processing theory\, optimization theory and methods\, and information theory. Dr. Cevher is an ELLIS fellow and was the recipient of the Google Faculty Research award in 2018\, the IEEE Signal Processing Society Best Paper Award in 2016\, a Best Paper Award at CAMSAP in 2015\, a Best Paper Award at SPARS in 2009\, and an ERC CG in 2016 as well as an ERC StG in 2011.
URL:https://tilos.ai/event/optimization-for-ai-and-ml-seminar-with-volkan-cevher-epfl/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/cevher-volkan-e1759250260485.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251203T110000
DTEND;TZID=America/Los_Angeles:20251203T120000
DTSTAMP:20260423T131346
CREATED:20250924T154049Z
LAST-MODIFIED:20260227T215023Z
UID:7606-1764759600-1764763200@tilos.ai
SUMMARY:TILOS-SDSU Seminar: 95 Percent: Bridging the Gap Between Prototype and Product
DESCRIPTION:Jeremy Schwartz\, Zoox \nAbstract: When transitioning from the academic world to the professional world of engineering\, one of the most common pitfalls is failing to understand the difference between a compelling prototype and a successful product. This talk will focus on that distinction. We will discuss the differences between them\, and the work required to evolve a good prototype into a real product. We will also discuss some common pitfalls encountered in product development\, and some of the practical software design considerations to keep in mind for development of robust\, mature code. The talk will include examples from my background developing robotic systems for air\, space\, and ground. \n\nJeremy Schwartz is a robotics engineer at Zoox with expertise in a wide variety of areas of mechanical and electrical engineering and computer science. His primary professional expertise is in autonomy and behavioral algorithms\, and he has worked in the aerospace industry as well as ground robotics\, specializing in autonomous systems of all kinds.
URL:https://tilos.ai/event/tilos-sdsu-seminar-with-jeremy-schwartz-of-zoox/
LOCATION:Lamden Hall 341 (SDSU) and Virtual\, San Diego\, CA\, 92182\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/schwartz-jeremy-e1758728403382.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251201
DTEND;VALUE=DATE:20251203
DTSTAMP:20260423T131346
CREATED:20250903T222016Z
LAST-MODIFIED:20250908T162145Z
UID:7473-1764547200-1764719999@tilos.ai
SUMMARY:Workshop on Topology\, Algebra\, and Geometry in Data Science (co-located with NeurIPS 2025)
DESCRIPTION:We are thrilled to announce the first official TAG-DS Stand-Alone Event–TAG… We’re it! This will be a two day event\, December 1 & 2\, 2025\, featuring keynotes\, poster sessions\, spotlight talks\, collaboration activities\, and community development. The dates and location were selected to align with NeurIPS 2025–twice the fun! The event will be hosted on the University of California San Diego campus both days and is readily accessible by public transit from downtown for those already planning to attend NeurIPS. There will be an associated Proceedings of Machine Learning Research volume for papers submitted to the archival track.
URL:https://tilos.ai/event/topology-algebra-and-geometry-in-data-science-2025/
LOCATION:UC San Diego\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/09/TAG-DS_logo-1-e1756938002600.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251119T110000
DTEND;TZID=America/Los_Angeles:20251119T120000
DTSTAMP:20260423T131346
CREATED:20251105T193505Z
LAST-MODIFIED:20260227T215217Z
UID:7735-1763550000-1763553600@tilos.ai
SUMMARY:TILOS-SDSU Seminar: Certifiably Correct Machine Perception
DESCRIPTION:David Rosen\, Northeastern University \nAbstract: Many fundamental machine perception and state estimation tasks require the solution of a high-dimensional nonconvex estimation problem; this class includes (for example) the fundamental problems of simultaneous localization and mapping (in robotics)\, 3D reconstruction (in computer vision)\, and sensor network localization (in distributed sensing). Such problems are known to be computationally hard in general\, with many local minima that can entrap the smooth local optimization methods commonly applied to solve them. The result is that standard machine perception algorithms (based upon local optimization) can be surprisingly brittle\, often returning egregiously wrong answers even when the problem to which they are applied is well-posed. \nIn this talk\, we present a novel class of certifiably correct estimation algorithms that are capable of efficiently recovering provably good (often globally optimal) solutions of generally-intractable machine perception problems in many practical settings. Our approach directly tackles the problem of nonconvexity by employing convex relaxations whose minimizers provide provably good approximate solutions to the original estimation problem under moderate measurement noise. We illustrate the design of this class of methods using the fundamental problem of pose-graph optimization (a mathematical abstraction of robotic mapping) as a running example. We conclude with a brief discussion of open questions and future research directions. \n\nDavid M. Rosen is an Assistant Professor in the Departments of Electrical & Computer Engineering and Mathematics and the Khoury College of Computer Sciences (by courtesy) at Northeastern University\, where he leads the Robust Autonomy Laboratory (NEURAL). 
Prior to joining Northeastern\, he was a Research Scientist at Oculus Research (now Meta Reality Labs) from 2016 to 2018\, and a Postdoctoral Associate at MIT’s Laboratory for Information and Decision Systems (LIDS) from 2018 to 2021. He holds the degrees of B.S. in Mathematics from the California Institute of Technology (2008)\, M.A. in Mathematics from the University of Texas at Austin (2010)\, and ScD in Computer Science from the Massachusetts Institute of Technology (2016). \n\nHe is broadly interested in the mathematical and algorithmic foundations of trustworthy machine perception\, learning\, and control. His work has been recognized with the IEEE Transactions on Robotics Best Paper Award (2024)\, an Honorable Mention for the IEEE Transactions on Robotics Best Paper Award (2021)\, a Best Student Paper Award at Robotics: Science and Systems (2020)\, a Best Paper Award at the International Workshop on the Algorithmic Foundations of Robotics (2016)\, and selection as an RSS Pioneer (2019).
URL:https://tilos.ai/event/tilos-sdsu-seminar-with-david-rosen-northeastern/
LOCATION:Lamden Hall 341 (SDSU) and Virtual\, San Diego\, CA\, 92182\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/11/rosen-david-scaled-e1762371210779.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251024T110000
DTEND;TZID=America/Los_Angeles:20251024T120000
DTSTAMP:20260423T131346
CREATED:20250925T175700Z
LAST-MODIFIED:20260304T210610Z
UID:7611-1761303600-1761307200@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: High-dimensional Optimization with Applications to Compute-Optimal Neural Scaling Laws
DESCRIPTION:Courtney Paquette\, McGill University \nAbstract: Given the massive scale of modern ML models\, we now only get a single shot to train them effectively. This restricts our ability to test multiple architectures and hyper-parameter configurations. Instead\, we need to understand how these models scale\, allowing us to experiment with smaller problems and then apply those insights to larger-scale models. In this talk\, I will present a framework for analyzing scaling laws in stochastic learning algorithms using a power-law random features model (PLRF)\, leveraging high-dimensional probability and random matrix theory. I will then use this scaling law to address the compute-optimal question: How should we choose model size and hyper-parameters to achieve the best possible performance in the most compute-efficient manner? Then using this PLRF model\, I will devise a new momentum-based algorithm that (provably) improves the scaling law exponent. Finally\, I will present some numerical experiments on LSTMs that show how this new stochastic algorithm can be applied to real data to improve the compute-optimal exponent. \n\nCourtney Paquette is an assistant professor at McGill University in the Mathematics and Statistics department\, a CIFAR AI Chair (MILA)\, and an active member of the Montreal Machine Learning Optimization Group (MTL MLOpt) at MILA. Her research broadly focuses on designing and analyzing algorithms for large-scale optimization problems\, motivated by applications in data science\, and using techniques that draw from a variety of fields\, including probability\, complexity theory\, and convex and nonsmooth analysis. Dr. Paquette is a lead organizer of the OPT-ML Workshop at NeurIPS since 2020\, and a lead organizer (and original creator) of the High-dimensional Learning Dynamics (HiLD) Workshop at ICML.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-courtney-paquette-mcgill-university/
LOCATION:CSE 1242 and Virtual\, 3235 Voigt Dr\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/paquette-courtney-scaled-e1758822988381.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250602
DTEND;VALUE=DATE:20250603
DTSTAMP:20260423T131346
CREATED:20250904T174234Z
LAST-MODIFIED:20250904T183243Z
UID:7531-1748822400-1748908799@tilos.ai
SUMMARY:TILOS Industry Day 2025
DESCRIPTION:
URL:https://tilos.ai/event/tilos-industry-day-2025/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250417
DTEND;VALUE=DATE:20250419
DTSTAMP:20260423T131346
CREATED:20250401T180604Z
LAST-MODIFIED:20250904T182557Z
UID:7280-1744848000-1745020799@tilos.ai
SUMMARY:HOT-AI: Horizons for Optimization in AI Workshop
DESCRIPTION:
URL:https://tilos.ai/event/hot-ai-horizons-for-optimization-in-ai-workshop/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250331
DTEND;VALUE=DATE:20250401
DTSTAMP:20260423T131346
CREATED:20250904T175539Z
LAST-MODIFIED:20250904T182652Z
UID:7282-1743379200-1743465599@tilos.ai
SUMMARY:Boston Symmetry Day 2025
DESCRIPTION:
URL:https://tilos.ai/event/boston-symmetry-day-2025/
LOCATION:CA
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/boston-symmetry-group-e1698445385321-eiga9L.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250317
DTEND;VALUE=DATE:20250318
DTSTAMP:20260423T131346
CREATED:20250904T181134Z
LAST-MODIFIED:20250904T182933Z
UID:7275-1742169600-1742255999@tilos.ai
SUMMARY:TILOS-Cisco AI + Security Workshop
DESCRIPTION:
URL:https://tilos.ai/event/tilos-cisco-ai-security-workshop/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:Internal Events,TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250306T083000
DTEND;TZID=America/Los_Angeles:20250306T121500
DTSTAMP:20260423T131346
CREATED:20250828T193005Z
LAST-MODIFIED:20250828T193005Z
UID:7276-1741249800-1741263300@tilos.ai
SUMMARY:TILOS Tutorial on AI Alignment
DESCRIPTION:
URL:https://tilos.ai/event/tilos-tutorial-on-ai-alignment/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250219
DTEND;VALUE=DATE:20250221
DTSTAMP:20260423T131346
CREATED:20250904T180342Z
LAST-MODIFIED:20250904T183026Z
UID:7281-1739923200-1740095999@tilos.ai
SUMMARY:Secure AI for Health\, Defense\, and Beyond
DESCRIPTION:
URL:https://tilos.ai/event/secure-ai-for-health-defense-and-beyond/
LOCATION:CA
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/UCSD-e1737756262771-s0U7kP-e1757009005925.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250129T110000
DTEND;TZID=America/Los_Angeles:20250129T123000
DTSTAMP:20260423T131346
CREATED:20250828T195813Z
LAST-MODIFIED:20250828T195813Z
UID:7301-1738148400-1738153800@tilos.ai
SUMMARY:TILOS Seminar: Unlearnable Facts Cause Hallucinations in Pretrained Language Models
DESCRIPTION:Adam Tauman Kalai\, OpenAI \nAbstract: Pretrained language models (LMs) tend to preserve many qualities present in their training data\, such as grammaticality\, formatting\, and politeness. However\, for specific types of factuality\, even LMs pretrained on factually correct statements tend to produce falsehoods at high rates. We explain these “hallucinations” by drawing a connection to binary classification\, enabling us to leverage insights from supervised learning. We prove that pretrained LMs (which are “calibrated”) fail to mimic criteria that cannot be learned. Our analysis explains why pretrained LMs hallucinate on facts such as people’s birthdays but not on systematic facts such as even vs. odd numbers.\nOf course\, LM pretraining is only one stage in the development of a chatbot\, and thus hallucinations are *not* inevitable in chatbots.\nThis is joint work with Santosh Vempala. \n\nAdam Tauman Kalai is a Research Scientist at OpenAI working on AI Safety and Ethics. He has worked in Algorithms\, Fairness\, Machine Learning Theory\, Game Theory\, and Crowdsourcing. He received his PhD from Carnegie Mellon University. He has served as an Assistant Professor at Georgia Tech and TTIC\, and is on the science team of the whale-translation Project CETI. He has co-chaired AI and crowdsourcing conferences and has numerous honors\, most notably the Majulook prize.
URL:https://tilos.ai/event/tilos-seminar-unlearnable-facts-cause-hallucinations-in-pretrained-language-models/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/kalai-adam-e1725645665625-utz75c.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20241210
DTEND;VALUE=DATE:20241211
DTSTAMP:20260423T131346
CREATED:20250904T180142Z
LAST-MODIFIED:20250904T182846Z
UID:7289-1733788800-1733875199@tilos.ai
SUMMARY:NSF Workshop on AI for Electronic Design Automation
DESCRIPTION:
URL:https://tilos.ai/event/nsf-workshop-on-ai-for-electronic-design-automation/
LOCATION:CA
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/webp:https://tilos.ai/wp-content/uploads/2024/10/circuitboard.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240618
DTEND;VALUE=DATE:20240619
DTSTAMP:20260423T131346
CREATED:20250828T201147Z
LAST-MODIFIED:20250904T174448Z
UID:7308-1718668800-1718755199@tilos.ai
SUMMARY:TILOS Industry Day 2024
DESCRIPTION:TILOS (The NSF National AI Institute for Learning-enabled Optimization at Scale) will hold its 3rd Annual Industry Day on June 18\, 2024\, at the Halıcıoğlu Data Science Institute at UC San Diego\, the campus hub for data science. Our first two Industry Days each attracted more than 100 participants\, featuring (1) talks from invited industry speakers sharing their perspectives on challenges in AI + Optimization + Use domains (chips\, robotics\, networking)\, (2) research highlights from TILOS team members\, and (3) most importantly\, a vibrant TILOS Trainee Poster Session (30+ posters) together with a “Facebook” of students and postdocs (a booklet of these trainees). There is no cost to attend\, but please register here. \nAGENDA \n8:00 – 8:45am\nRegistration + Breakfast \n8:45 – 9:00am\nWelcome Remarks and Introduction to TILOS\nDirector Yusu Wang (UCSD)\nAD Translation Vijay Kumar (UPenn)\nRajesh Gupta (Director of HDSI@UCSD) \n9:00 – 10:30am\nSESSION 1 – Chair: Vijay Kumar (UPenn)\nIndustry Keynote: Towards Scalable and Robust Autonomy\, Nicholas Roy (Zoox)\nTILOS Faculty Highlights:\n[9:50am] Traceable and Scalable GNN-based Circuit Optimization\, Farinaz Koushanfar (UCSD)\n[10:10am] Feature learning in neural networks and kernel models\, Misha Belkin (UCSD) \n10:30 – 10:45am\nBreak \n10:45am – 12:15pm\nSESSION 2 – Chair: Yian Ma (UCSD)\nIndustry Keynote: AI and Networks: Challenges & Opportunities\, Nageen Himayat (Intel Labs)\nTILOS Faculty Highlights:\n[11:35am] Learning-enabled Optimization at Scale in Wireless Communications and Networking\, Alejandro Ribeiro (UPenn)\n[11:55am] Reasoning Numerically\, Sean Gao (UCSD) \n12:15 – 2:00pm\nTILOS Trainee Poster Lightning Preview Session + Lunch \n2:00 – 3:00pm\nPanel Discussion on Academic–Industry Relations / Collaborations\nPanelists:\nNing Bi (Qualcomm VP Engineering)\nVitaly Feldman (Apple ML Research)\nKatherine Heller (Google Responsible AI)\nTara Javidi (UCSD)\nSomdeb Majumdar (Intel AI/ML Lab)\nModerator: Vijay Kumar (UPenn) \n3:00 – 3:30pm\nBreak \n3:30 – 5:00pm\nSESSION 3 – Chair: Henrik Christensen (UCSD)\nIndustry Keynote: Foundation Models for Robotics\, Carolina Parada (Google DeepMind)\nTILOS Faculty Highlights:\n[4:20pm] Semantic Mapping and Task Planning for Autonomous Robots\, Nikolay Atanasov (UCSD)\n[4:40pm] Bias in Evaluation Processes: An Optimization-Based Model\, Nisheeth Vishnoi (Yale U) \n5:00 – 7:30pm\nBuffet Dinner + Trainee Poster Session (HDSI 123 & 155) \n\nKEYNOTE PRESENTATION ABSTRACTS \nTowards Scalable and Robust Autonomy \nHow we design and deploy highly autonomous robots such as self-driving cars is evolving rapidly\, and there are numerous technical challenges in deploying an autonomous system at scale. I will describe some of the technical design decisions in developing an autonomous robot at scale\, some of the candidate solutions\, and open questions for the future. 
\nNicholas Roy is the Autonomy Architecture Lead and a principal software engineer at Zoox. He and his team address technical challenges that cut across the autonomy verticals\, leading the design and deployment of cross-functional capabilities in the Zoox autonomy system. He is also the Bisplinghoff Professor of Aeronautics & Astronautics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology. Roy’s research focuses on decision-making under uncertainty\, mobile robot autonomy\, and human-robot interaction. Roy’s research has been transitioned into multiple commercial applications. \n\nAI and Networks: Challenges & Opportunities \nArtificial Intelligence and Machine Learning (AI/ML) technologies are widely expected to play an integral role in the design and architecture of Next Generation Networks. 
We present several applications where AI/ML techniques are used to enhance the performance of wireless networking systems\, and discuss approaches to enhance AI computations over resource-constrained networks. We also highlight the importance of ensuring the resilience of network AI solutions and discuss future directions. \nNageen Himayat is a Senior Principal Engineer with Intel’s Security and Privacy Research Labs. She leads the Trusted & Distributed Intelligence (TDI) team conducting research on trustworthy AI and network security topics. Her research contributions span areas such as AI security\, distributed ML\, machine learning for networks\, multi-radio heterogeneous networks\, cross-layer radio resource management\, and non-linear signal processing techniques. Nageen has authored over 350 technical publications\, including IEEE peer-reviewed papers\, contributions to 3GPP/IEEE standards\, and numerous patent filings. Prior to Intel\, Nageen was with Lucent Technologies and General Instrument Corp\, where she developed standards and systems for both wireless and wire-line broadband access networks. Nageen obtained her B.S.E.E. degree from Rice University and her M.S. and Ph.D. degrees from the University of Pennsylvania. She also holds an MBA degree from the Haas School of Business at the University of California\, Berkeley. \n\nFoundation Models for Robotics \nFoundation models have unlocked major advancements in AI. In this talk\, I will discuss how foundation models are enabling a step function in progress towards general-purpose robots\, including enabling robots to understand\, reason\, hold situated conversations with humans and learn from them\, transfer visual and semantic generalization to real-world actions\, and show initial signs of transfer between robot embodiments. 
\nIt is still early in this research journey\, but it is an exciting one: we can confidently be part of this fantastically fast and dynamic field of foundation models and not only ride the wave of innovation but help shape it. With this new approach\, we have to once again ask all the tough questions and call for advances in perception\, grounded reasoning\, and safety to build more advanced embodied foundation models\, while leveraging the human-centeredness\, semantic understanding\, and natural interaction that these models seamlessly enable. We’re just getting started. \nDr. Carolina Parada is an Engineering Director at Google DeepMind Robotics who is passionate about developing useful robots through human-centered robot learning. Since 2019\, she has led multiple research groups in robot learning\, perception\, simulation\, and embodied reasoning. Prior to that\, she led the perception team for self-driving cars at Nvidia for two years. She was also a lead with Speech @ Google for seven years\, where she drove research and engineering efforts that enabled all the voice products at Google. 
\nEvent photo captions: \nNageen Himayat of Intel Labs presents “AI and Networks: Challenges & Opportunities” at TILOS Industry Day 2024 \nStudent and postdoc poster session at TILOS Industry Day 2024 \nSean Gao (UC San Diego) presents “Reasoning Numerically” at TILOS Industry Day 2024 \nDemonstration of a Robotic Art outreach activity at TILOS Industry Day 2024 \nTILOS Robotics team member Nikolay Atanasov (UC San Diego) presents “Semantic Mapping and Task Planning for Autonomous Robots” at TILOS Industry Day 2024 \nTILOS Associate Director of Translation and University of Pennsylvania Dean of Engineering Vijay Kumar (right) moderates a discussion on Academic–Industry Relations and Collaboration at TILOS Industry Day 2024 with panelists (from left) Ning Bi (Vice President of Engineering\, Qualcomm)\, Vitaly Feldman (Apple ML Research)\, Katherine Heller (Google Responsible AI)\, Tara Javidi (Professor of Electrical and Computer Engineering\, UC San Diego)\, and Somdeb Majumdar (Director\, Intel AI/ML Lab) \n\nLocation: Halıcıoğlu Data Science Institute [MAP]\nRoom 123\n3234 Matthews Lane\nLa Jolla\, CA 92093 \nContacts: Angela Berti (aberti@ucsd.edu)\, Yusu Wang (yusuwang@ucsd.edu) \nParking: Hopkins Parking Structure (9800 Hopkins Dr\, La Jolla\, CA 92093; 10-minute walk to venue). \nParking fees are payable at pay stations or pay-by-phone. Note that many visitor spots are limited to two hours; even though the app allows you to pay for longer periods\, you will get a ticket after that time if parked in a 2-hour space.
URL:https://tilos.ai/event/tilos-industry-day-2024/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240315
DTEND;VALUE=DATE:20240317
DTSTAMP:20260423T131346
CREATED:20250904T175958Z
LAST-MODIFIED:20250904T182814Z
UID:7311-1710460800-1710633599@tilos.ai
SUMMARY:HDSI-TILOS “LLM Meets Theory” Workshop 2024
DESCRIPTION:The UC San Diego HDSI-TILOS “LLM Meets Theory” Workshop aims to bring together students and faculty to discuss the future of mathematical and scientific theory and large language models (LLMs). LLMs are like a miracle—not one that breaks the laws of nature (that would be impossible\, of course)\, but something that defied all expectations and could not be predicted just a few years ago. In particular\, the simplicity of the resulting statistical models (which are essentially Markov chains\, and are limited to only predicting the next token) came as a complete surprise to almost all of us. In view of this\, it is crucial to gain some understanding of the implications and potential trajectory of these models. Therefore\, at UCSD HDSI\, we plan to invite a few researchers for talks and also leave a lot of time for panel discussions.
URL:https://tilos.ai/event/hdsi-tilos-llm-meets-theory-workshop-2024/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2024/02/HDSI.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240221T140000
DTEND;TZID=America/Los_Angeles:20240221T153000
DTSTAMP:20260423T131346
CREATED:20250828T201626Z
LAST-MODIFIED:20250828T201626Z
UID:7318-1708524000-1708529400@tilos.ai
SUMMARY:TILOS-HDSI Distinguished Colloquium: The Synergy between Machine Learning and the Natural Sciences
DESCRIPTION:Max Welling\, Research Chair in Machine Learning\, University of Amsterdam \nAbstract: Traditionally machine learning has been heavily influenced by neuroscience (hence the name artificial neural networks) and physics (e.g. MCMC\, Belief Propagation\, and Diffusion based Generative AI). We have recently witnessed that the flow of information has also reversed\, with new tools developed in the ML community impacting physics\, chemistry and biology. Examples include faster DFT\, Force-Field accelerated MD simulations\, PDE Neural Surrogate models\, generating druglike molecules\, and many more. In this talk I will review the exciting opportunities for further cross fertilization between these fields\, ranging from faster (classical) DFT calculations and enhanced transition path sampling to traveling waves in artificial neural networks. \n\nProf. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a Distinguished Scientist at MSR. He is a fellow at the Canadian Institute for Advanced Research (CIFAR) and the European Lab for Learning and Intelligent Systems (ELLIS) where he also serves on the founding board. His previous appointments include VP at Qualcomm Technologies\, professor at UC Irvine\, postdoc at U. Toronto and UCL under supervision of prof. Geoffrey Hinton\, and postdoc at Caltech under supervision of prof. Pietro Perona. He finished his PhD in theoretical high energy physics under supervision of Nobel laureate Prof. Gerard ‘t Hooft.
URL:https://tilos.ai/event/tilos-hdsi-distinguished-colloquium-the-synergy-between-machine-learning-and-the-natural-sciences/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/welling-max-e1709233283734-CWxvcN.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231117T093000
DTEND;TZID=America/Los_Angeles:20231117T103000
DTSTAMP:20260423T131346
CREATED:20250828T203006Z
LAST-MODIFIED:20250828T204333Z
UID:7323-1700213400-1700217000@tilos.ai
SUMMARY:Overview of the Executive Order on Safe\, Secure\, and Trustworthy Artificial Intelligence
DESCRIPTION:UC San Diego Professor of Data Science and Philosophy and TILOS affiliate David Danks will present an introduction to the U.S. Government’s Executive Order on Safe\, Secure\, and Trustworthy Artificial Intelligence for TILOS members. \nDavid Danks currently serves on the National AI Advisory Committee (NAIAC)\, which is tasked with advising the President and the National AI Initiative Office on topics related to AI. This talk will give an overview of the recent Executive Order and related activity by the U.S. Government in the space of AI (including regulation\, incentives\, and new programs). Ample time will be reserved for Q&A. \nThis is an internal TILOS event and will not be recorded.
URL:https://tilos.ai/event/overview-of-the-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:Internal Events,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/10/danks-david-1-e1756412984106.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20231103
DTEND;VALUE=DATE:20231104
DTSTAMP:20260423T131346
CREATED:20250904T175208Z
LAST-MODIFIED:20250904T182741Z
UID:7325-1698969600-1699055999@tilos.ai
SUMMARY:Boston Symmetry Day 2023
DESCRIPTION:TILOS is a sponsor of Boston Symmetry Day\, a meeting of symmetry-minded folks in the Boston area. It is the largest event on symmetry and machine learning in the United States. Registration is free for all who would like to attend\, subject to space constraints.
URL:https://tilos.ai/event/boston-symmetry-day-2023/
LOCATION:MIT
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/boston-symmetry-group-e1698445385321-eiga9L.png
END:VEVENT
END:VCALENDAR