BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240315
DTEND;VALUE=DATE:20240317
DTSTAMP:20260403T110101Z
CREATED:20250904T175958Z
LAST-MODIFIED:20250904T182814Z
UID:7311-1710460800-1710633599@tilos.ai
SUMMARY:HDSI-TILOS “LLM Meets Theory” Workshop 2024
DESCRIPTION:The UC San Diego HDSI-TILOS “LLM Meets Theory” Workshop aims to bring together students and faculty to discuss the future of mathematical and scientific theory and large language models (LLMs). LLMs are like a miracle—not one that breaks the laws of nature (that would be impossible\, of course)\, but something that defied all expectations and could not be predicted just a few years ago. In particular\, the simplicity of the resulting statistical models (which are essentially Markov chains\, and are limited to only predicting the next token) came as a complete surprise to almost all of us. In view of this\, it is crucial to gain some understanding of the implications and potential trajectory of these models. Therefore\, at UCSD HDSI\, we plan to invite a few researchers for talks and also leave a lot of time for panel discussions.
URL:https://tilos.ai/event/hdsi-tilos-llm-meets-theory-workshop-2024/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2024/02/HDSI.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240320T100000
DTEND;TZID=America/Los_Angeles:20240320T110000
DTSTAMP:20260403T110101Z
CREATED:20250828T201417Z
LAST-MODIFIED:20250828T201417Z
UID:7315-1710928800-1710932400@tilos.ai
SUMMARY:TILOS Seminar: How Large Models of Language and Vision Help Agents to Learn to Behave
DESCRIPTION:Roy Fox\, Assistant Professor and Director of the Intelligent Dynamics Lab\, UC Irvine \nAbstract: If learning from data is valuable\, can learning from big data be very valuable? So far\, it has been so in vision and language\, for which foundation models can be trained on web-scale data to support a plethora of downstream tasks; not so much in control\, for which scalable learning remains elusive. Can information encoded in vision and language models guide reinforcement learning of control policies? In this talk\, I will discuss several ways for foundation models to help agents to learn to behave. Language models can provide better context for decision-making: we will see how they can succinctly describe the world state to focus the agent on relevant features; and how they can form generalizable skills that identify key subgoals. Vision and vision–language models can help the agent to model the world: we will see how they can block visual distractions to keep state representations task-relevant; and how they can hypothesize about abstract world models that guide exploration and planning. \n\nRoy Fox is an Assistant Professor of Computer Science at the University of California\, Irvine. His research interests include theory and applications of control learning: reinforcement learning (RL)\, control theory\, information theory\, and robotics. His current research focuses on structured and model-based RL\, language for RL and RL for language\, and optimization in deep control learning of virtual and physical agents.
URL:https://tilos.ai/event/tilos-seminar-how-large-models-of-language-and-vision-help-agents-to-learn-to-behave/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/fox-roy-e1710782779885-cplaNm.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240417T100000
DTEND;TZID=America/Los_Angeles:20240417T110000
DTSTAMP:20260403T110101Z
CREATED:20250828T201326Z
LAST-MODIFIED:20250828T201326Z
UID:7309-1713348000-1713351600@tilos.ai
SUMMARY:TILOS Seminar: Transformers learn in-context by (functional) gradient descent
DESCRIPTION:Xiang Cheng\, TILOS Postdoctoral Scholar\, MIT \nAbstract: Motivated by the in-context learning phenomenon\, we investigate how the Transformer neural network can implement learning algorithms in its forward pass. We show that a linear Transformer naturally learns to implement gradient descent\, which enables it to learn linear functions in-context. More generally\, we show that a non-linear Transformer can implement functional gradient descent with respect to some RKHS metric\, which allows it to learn a broad class of functions in-context. Additionally\, we show that the RKHS metric is determined by the choice of attention activation\, and that the optimal choice of attention activation depends in a natural way on the class of functions that need to be learned. I will end by discussing some implications of our results for the choice and design of Transformer architectures.
URL:https://tilos.ai/event/tilos-seminar-transformers-learn-in-context-by-functional-gradient-descent/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/10/cheng-xiang.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240522T100000
DTEND;TZID=America/Los_Angeles:20240522T110000
DTSTAMP:20260403T110101Z
CREATED:20250828T201245Z
LAST-MODIFIED:20250828T201245Z
UID:7305-1716372000-1716375600@tilos.ai
SUMMARY:TILOS Seminar: Large Datasets and Models for Robots in the Real World
DESCRIPTION:Nicklas Hansen\, UC San Diego \nAbstract: Recent progress in AI can be attributed to the emergence of large models trained on large datasets. However\, teaching AI agents to reliably interact with our physical world has proven challenging\, which is in part due to a lack of large and sufficiently diverse robot datasets. In this talk\, I will cover ongoing efforts of the Open X-Embodiment project–a collaboration between 279 researchers across 20+ institutions–to build a large\, open dataset for real-world robotics\, and discuss how this new paradigm is rapidly changing the field. Concretely\, I will discuss why we need large datasets in robotics\, what such datasets may look like\, and how large models can be trained and evaluated effectively in a cross-embodiment cross-environment setting. Finally\, I will conclude the talk by sharing my perspective on the limitations of current embodied AI agents\, as well as how to move forward as a community. \n\nNicklas Hansen is a Ph.D. student at University of California San Diego advised by Prof. Xiaolong Wang and Prof. Hao Su. His research focuses on developing generalist AI agents that learn from interaction with the physical and digital world. He has spent time at Meta AI (FAIR) and University of California Berkeley (BAIR)\, and received his B.S. and M.S. degrees from Technical University of Denmark. He is a recipient of the 2024 NVIDIA Graduate Fellowship\, and his work has been featured at top venues in machine learning and robotics. Webpage: www.nicklashansen.com
URL:https://tilos.ai/event/tilos-seminar-large-datasets-and-models-for-robots-in-the-real-world/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/Nicklas_Hansen-e1713393341399-GU4tJB.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240618
DTEND;VALUE=DATE:20240619
DTSTAMP:20260403T110101Z
CREATED:20250828T201147Z
LAST-MODIFIED:20250904T174448Z
UID:7308-1718668800-1718755199@tilos.ai
SUMMARY:TILOS Industry Day 2024
DESCRIPTION:TILOS (The NSF National AI Institute for Learning-enabled Optimization at Scale) will hold its 3rd Annual Industry Day on June 18\, 2024\, at the Halıcıoğlu Data Science Institute at UC San Diego\, the campus hub for data science. Our first two Industry Days each attracted more than 100 participants and featured (1) talks from invited industry speakers sharing their perspectives on challenges in AI + Optimization + Use domains (chips\, robotics\, networking)\, (2) research highlights from TILOS team members\, and (3) most importantly\, a vibrant TILOS Trainee Poster Session (30+ posters) together with a “Facebook” of students and postdocs (a booklet of these trainees). There is no cost to attend\, but please register here.\n\nAGENDA\n\n8:00 – 8:45am\nRegistration + Breakfast\n\n8:45 – 9:00am\nWelcome Remarks and Introduction to TILOS\nDirector Yusu Wang (UCSD)\nAD Translation Vijay Kumar (UPenn)\nRajesh Gupta (Director of HDSI@UCSD)\n\n9:00 – 10:30am\nSESSION 1 (Chair: Vijay Kumar\, UPenn)\nIndustry Keynote: Towards Scalable and Robust Autonomy\, Nicholas Roy (Zoox)\nTILOS Faculty Highlights:\n[9:50am] Traceable and Scalable GNN-based Circuit Optimization\, Farinaz Koushanfar (UCSD)\n[10:10am] Feature learning in neural networks and kernel models\, Misha Belkin (UCSD)\n\n10:30 – 10:45am\nBreak\n\n10:45am – 12:15pm\nSESSION 2 (Chair: Yian Ma\, UCSD)\nIndustry Keynote: AI and Networks: Challenges & Opportunities\, Nageen Himayat (Intel Labs)\nTILOS Faculty Highlights:\n[11:35am] Learning-enabled Optimization at Scale in Wireless Communications and Networking\, Alejandro Ribeiro (UPenn)\n[11:55am] Reasoning Numerically\, Sean Gao (UCSD)\n\n12:15 – 2:00pm\nTILOS Trainee Poster Lightning Preview Session + Lunch\n\n2:00 – 3:00pm\nPanel Discussion on Academic–Industry Relations and Collaborations\nPanelists: Ning Bi (Qualcomm VP Engineering)\, Vitaly Feldman (Apple ML Research)\, Katherine Heller (Google Responsible AI)\, Tara Javidi (UCSD)\, Somdeb Majumdar (Intel AI/ML Lab)\nModerator: Vijay Kumar (UPenn)\n\n3:00 – 3:30pm\nBreak\n\n3:30 – 5:00pm\nSESSION 3 (Chair: Henrik Christensen\, UCSD)\nIndustry Keynote: Foundation Models for Robotics\, Carolina Parada (Google DeepMind)\nTILOS Faculty Highlights:\n[4:20pm] Semantic Mapping and Task Planning for Autonomous Robots\, Nikolay Atanasov (UCSD)\n[4:40pm] Bias in Evaluation Processes: An Optimization-Based Model\, Nisheeth Vishnoi (Yale U)\n\n5:00 – 7:30pm\nBuffet Dinner + Trainee Poster Session (HDSI 123 & 155)\n\nKEYNOTE PRESENTATION ABSTRACTS\n\nTowards Scalable and Robust Autonomy\nHow we design and deploy highly autonomous robots such as self-driving cars is evolving rapidly\, and there are numerous technical challenges in deploying an autonomous system at scale. I will describe some of the technical design decisions in developing an autonomous robot at scale\, some of the candidate solutions\, and open questions for the future.\nNicholas Roy is the Autonomy Architecture Lead and a principal software engineer at Zoox. He and his team address technical challenges that cut across the autonomy verticals\, leading the design and deployment of cross-functional capabilities in the Zoox autonomy system. He is also the Bisplinghoff Professor of Aeronautics & Astronautics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology. Roy’s research focuses on decision-making under uncertainty\, mobile robot autonomy\, and human-robot interaction\, and has been transitioned into multiple commercial applications.\n\nAI and Networks: Challenges & Opportunities\nArtificial intelligence and machine learning (AI/ML) technologies are widely expected to play an integral role in the design and architecture of next-generation networks. We present several applications where AI/ML techniques are used to enhance the performance of wireless networking systems\, and discuss approaches to enhance AI computations over resource-constrained networks. We also highlight the importance of ensuring the resilience of network AI solutions and discuss future directions.\nNageen Himayat is a Senior Principal Engineer with Intel’s Security and Privacy Research Labs. She leads the Trusted & Distributed Intelligence (TDI) team\, which conducts research on trustworthy AI and network security. Her research contributions span AI security\, distributed ML\, machine learning for networks\, multi-radio heterogeneous networks\, cross-layer radio resource management\, and non-linear signal processing techniques. Nageen has authored over 350 technical publications\, contributing to IEEE peer-reviewed venues\, 3GPP/IEEE standards\, and numerous patent filings. Prior to Intel\, Nageen was with Lucent Technologies and General Instrument Corp\, where she developed standards and systems for both wireless and wire-line broadband access networks. Nageen obtained her B.S.E.E. degree from Rice University and her M.S. and Ph.D. degrees from the University of Pennsylvania. She also holds an MBA from the Haas School of Business at University of California\, Berkeley.\n\nFoundation Models for Robotics\nFoundation models have unlocked major advancements in AI. In this talk\, I will discuss how foundation models are enabling a step function in progress towards general-purpose robots\, including enabling robots to understand\, reason\, hold situated conversations with humans and learn from them\, transfer visual and semantic generalization to real-world actions\, and show initial signs of transfer between robot embodiments.\nIt is still early in this research journey\, but it is an exciting one: we can confidently be part of this fantastic\, fast-moving\, and dynamic field of foundation models and not only ride the wave of innovation but help shape it. With this new approach\, we have to once again ask all the tough questions and call for advances in perception\, grounded reasoning\, and safety to build more advanced embodied foundation models\, while leveraging the human-centeredness\, semantic understanding\, and natural interaction that these models seamlessly enable. We’re just getting started.\nDr. Carolina Parada is an Engineering Director at Google DeepMind Robotics who is passionate about developing useful robots through human-centered robot learning. Since 2019\, she has led multiple research groups in robot learning\, perception\, simulation\, and embodied reasoning. Before that\, she led the perception team for self-driving cars at Nvidia for two years\, and was a lead with Speech @ Google for seven years\, where she drove research and engineering efforts that enabled all the voice products at Google.\n\nPHOTO HIGHLIGHTS\nNageen Himayat of Intel Labs presents “AI and Networks: Challenges & Opportunities” at TILOS Industry Day 2024\nStudent and postdoc poster session at TILOS Industry Day 2024\nSean Gao (UC San Diego) presents “Reasoning Numerically” at TILOS Industry Day 2024\nDemonstration of a Robotic Art outreach activity at TILOS Industry Day 2024\nTILOS Robotics team member Nikolay Atanasov (UC San Diego) presents “Semantic Mapping and Task Planning for Autonomous Robots” at TILOS Industry Day 2024\nStudent and postdoc poster session at TILOS Industry Day 2024\nTILOS Associate Director of Translation and University of Pennsylvania Dean of Engineering Vijay Kumar (right) moderates a discussion on Academic–Industry Relations and Collaboration at TILOS Industry Day 2024 with panelists (from left) Ning Bi (Vice President of Engineering\, Qualcomm)\, Vitaly Feldman (Apple ML Research)\, Katherine Heller (Google Responsible AI)\, Tara Javidi (Professor of Electrical and Computer Engineering\, UC San Diego)\, and Somdeb Majumdar (Director\, Intel AI/ML Lab)\n\nLocation: Halıcıoğlu Data Science Institute\nRoom 123\n3234 Matthews Lane\nLa Jolla\, CA 92093\nContacts: Angela Berti (aberti@ucsd.edu)\, Yusu Wang (yusuwang@ucsd.edu)\nParking: Hopkins Parking Structure (9800 Hopkins Dr\, La Jolla\, CA 92093\; 10-minute walk to venue). Parking fees are payable at pay stations or pay-by-phone. Note that many visitor spots are limited to two hours\; even though the app allows you to pay for longer periods\, you will get a ticket after that time if parked in a 2-hour space.
URL:https://tilos.ai/event/tilos-industry-day-2024/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240724T100000
DTEND;TZID=America/Los_Angeles:20240724T110000
DTSTAMP:20260403T110101Z
CREATED:20250828T200721Z
LAST-MODIFIED:20250828T200721Z
UID:7304-1721815200-1721818800@tilos.ai
SUMMARY:TILOS Seminar: What Kinds of Functions do Neural Networks Learn? Theory and Practical Applications
DESCRIPTION:Robert Nowak\, University of Wisconsin \nAbstract: This talk presents a theory characterizing the types of functions neural networks learn from data. Specifically\, the function space generated by deep ReLU networks consists of compositions of functions from the Banach space of second-order bounded variation in the Radon transform domain. This Banach space includes functions with smooth projections in most directions. A representer theorem associated with this space demonstrates that finite-width neural networks suffice for fitting finite datasets. The theory has several practical applications. First\, it provides a simple and theoretically grounded method for network compression. Second\, it shows that multi-task training can yield significantly different solutions compared to single-task training\, and that multi-task solutions can be related to kernel ridge regressions. Third\, the theory has implications for improving implicit neural representations\, where multi-layer neural networks are used to represent continuous signals\, images\, or 3D scenes. This exploration bridges theoretical insights with practical advancements\, offering a new perspective on neural network capabilities and future research directions. \n\nRobert Nowak is the Grace Wahba Professor of Data Science and Keith and Jane Nosbusch Professor in Electrical and Computer Engineering at the University of Wisconsin-Madison. His research focuses on machine learning\, optimization\, and signal processing. He serves on the editorial boards of the SIAM Journal on the Mathematics of Data Science and the IEEE Journal on Selected Areas in Information Theory.
URL:https://tilos.ai/event/tilos-seminar-what-kinds-of-functions-do-neural-networks-learn-theory-and-practical-applications/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2024/07/nowak-robert.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20241002T110000
DTEND;TZID=America/Los_Angeles:20241002T120000
DTSTAMP:20260403T110101Z
CREATED:20250828T200544Z
LAST-MODIFIED:20250828T200612Z
UID:7297-1727866800-1727870400@tilos.ai
SUMMARY:TILOS-SDSU Seminar: AI/ML & NLP for UAS/Air Traffic Management
DESCRIPTION:Krishna Kalyanam\, NASA Ames Research Center \nAbstract: We introduce several Air Traffic Management (ATM) initiatives envisioned by NASA and FAA for a future airspace that combines conventional traffic and new entrants (e.g.\, drones) without sacrificing safety. In this framework\, we demonstrate the use of state-of-the-art AI/ML modeling and prediction tools that will enable efficient and safe traffic flow in the U.S. National Airspace System (NAS). For example\, Natural Language Processing (NLP) tools can help extract data (e.g.\, airspace constraints) that are currently contained in legacy text and audio formats and convert them into digital information. The digitized information can be ingested by route planning\, arrival scheduling\, and other decision support tools\, both on the ground and in the flight deck. We show how historical data (track\, weather & events) can be preprocessed and utilized to create accurate models to predict flight trajectories and events of interest (e.g.\, Traffic Management Initiatives). We show several application areas within ATM that benefit from AI/ML\, including trajectory prediction\, airport runway configuration management\, and automatic speech-to-text. The overarching goal of the work is to accelerate the integration of package delivery drones\, air taxis\, and autonomous cargo aircraft into the NAS without impacting the safety and efficacy of current manned operations. As an example\, we also show a strategic deconfliction scenario and demonstrate scalable algorithms that provide conflict-free schedules for package delivery drones in an urban setting. \n\nDr. Krishna Kalyanam is the Autonomy & AI/ML tech lead with the NASA Aeronautics Research Institute (NARI). In his current role\, he is focused on delivering state-of-the-art AI/ML algorithms to enable scalable and efficient manned/unmanned operations in a mixed-use national airspace. Prior to joining NASA\, Dr. Kalyanam was with AFRL’s Autonomous Controls branch\, where he co-designed several multi-UAV cooperative control algorithms that were flight tested as part of the Intelligent Control & Evaluation of Teams (ICE-T) program. Dr. Kalyanam has published 100+ papers on stochastic control\, human-machine teaming\, and multi-agent scheduling in IEEE\, ASME\, and AIAA venues. Dr. Kalyanam is a senior member of IEEE and an associate fellow of the AIAA. He is a recipient of the prestigious Research Associateship award sponsored by the National Academies. He was also part of the UAV Autonomy team that won the AFRL “Star Team” award for performing the most innovative in-house basic research in 2018.
URL:https://tilos.ai/event/tilos-sdsu-seminar-ai-ml-nlp-for-uas-air-traffic-management/
LOCATION:San Diego State University\, 5500 Campanile Dr\, San Diego\, 92182\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/kalyanam-krishna-e1726505877275-apyqNc.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20241107T160000
DTEND;TZID=America/Los_Angeles:20241107T170000
DTSTAMP:20260403T110101Z
CREATED:20250828T200431Z
LAST-MODIFIED:20250828T200431Z
UID:7292-1730995200-1730998800@tilos.ai
SUMMARY:TILOS Seminar: Data Models for Deep Learning: Beyond i.i.d. Assumptions
DESCRIPTION:Elchanan Mossel\, Professor of Mathematics\, MIT \nAbstract: Classical Machine Learning theory is largely built upon the assumption that data samples are independent and identically distributed (i.i.d.) from general distribution families. In this talk\, I will present novel insights that emerge when we move beyond these traditional assumptions\, exploring both dependent sampling scenarios and structured generative distributions. These perspectives offer fresh theoretical frameworks and practical implications for modern machine learning approaches. \n\nElchanan Mossel is a Professor of Mathematics at the Massachusetts Institute of Technology (MIT)\, specializing in probability theory\, combinatorics\, and theoretical computer science. His research explores a range of complex\, interdisciplinary problems\, including social choice theory\, inference in networks\, and the analysis of algorithms\, with applications across economics\, political science\, and genetics. Mossel completed his Ph.D. at the Hebrew University of Jerusalem and held postdoctoral positions at Microsoft Research and UC Berkeley before joining MIT. Recognized for his innovative work\, Mossel has received a Sloan fellowship\, NSF CAREER award\, and COLT best paper award\, and is a Fellow of the American Mathematical Society.
URL:https://tilos.ai/event/tilos-seminar-data-models-for-deep-learning-beyond-i-i-d-assumptions/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/mossel-elchanan-e1728935276435-milFYz.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20241113T110000
DTEND;TZID=America/Los_Angeles:20241113T120000
DTSTAMP:20260403T110101Z
CREATED:20250828T200305Z
LAST-MODIFIED:20250828T200305Z
UID:7291-1731495600-1731499200@tilos.ai
SUMMARY:TILOS Seminar: Off-the-shelf Algorithmic Stability
DESCRIPTION:Rebecca Willett\, University of Chicago \nAbstract: Algorithmic stability holds when our conclusions\, estimates\, fitted models\, predictions\, or decisions are insensitive to small changes to the training data. Stability has emerged as a core principle for reliable data science\, providing insights into generalization\, cross-validation\, uncertainty quantification\, and more. Whereas prior literature has developed mathematical tools for analyzing the stability of specific machine learning (ML) algorithms\, we study methods that can be applied to arbitrary learning algorithms to satisfy a desired level of stability. First\, I will discuss how bagging is guaranteed to stabilize any prediction model\, regardless of the input data. Thus\, if we remove or replace a small fraction of the training data at random\, the resulting prediction will typically change very little. Our analysis provides insight into how the size of the bags (bootstrap datasets) influences stability\, giving practitioners a new tool for guaranteeing a desired level of stability. Second\, I will describe how to extend these stability guarantees beyond prediction modeling to more general statistical estimation problems where bagging is not as well known but equally useful for stability. Specifically\, I will describe a new framework for stable classification and model selection by combining bagging on class or model weights with a stable\, “soft” version of the argmax operator. This is joint work with Jake Soloff and Rina Barber. \n\nRebecca Willett is a Professor of Statistics and Computer Science and the Director of AI in the Data Science Institute at the University of Chicago\, and she holds a courtesy appointment at the Toyota Technological Institute at Chicago. Her research is focused on machine learning foundations\, scientific machine learning\, and signal processing. Willett received the inaugural Data Science Career Prize from the Society for Industrial and Applied Mathematics in 2024\, was named a Fellow of the Society for Industrial and Applied Mathematics in 2021\, and was named a Fellow of the IEEE in 2022. She is the Deputy Director for Research at the NSF-Simons Foundation National Institute for Theory and Mathematics in Biology\, Deputy Director for Research at the NSF-Simons Institute for AI in the Sky (SkAI)\, and a member of the NSF Institute for the Foundations of Data Science Executive Committee. She is the Faculty Director of the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship\, and she helps direct the Air Force Research Lab University Center of Excellence on Machine Learning. She received the National Science Foundation CAREER Award in 2007\, was a member of the DARPA Computer Science Study Group\, and received an Air Force Office of Scientific Research Young Investigator Program award in 2010. She completed her PhD in Electrical and Computer Engineering at Rice University in 2005\, was an Assistant and then tenured Associate Professor of Electrical and Computer Engineering at Duke University from 2005 to 2013\, and was an Associate Professor of Electrical and Computer Engineering\, Harvey D. Spangler Faculty Scholar\, and Fellow of the Wisconsin Institutes for Discovery at the University of Wisconsin-Madison from 2013 to 2018.
URL:https://tilos.ai/event/tilos-seminar-off-the-shelf-algorithmic-stability/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2024/10/new-willett_square-250x250-1.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20241120T110000
DTEND;TZID=America/Los_Angeles:20241120T120000
DTSTAMP:20260403T110101Z
CREATED:20250828T200101Z
LAST-MODIFIED:20250828T200101Z
UID:7294-1732100400-1732104000@tilos.ai
SUMMARY:TILOS Seminar: How Transformers Learn Causal Structure with Gradient Descent
DESCRIPTION:Jason Lee\, Princeton University \nAbstract: The incredible success of transformers on sequence modeling tasks can be largely attributed to the self-attention mechanism\, which allows information to be transferred between different parts of a sequence. Self-attention allows transformers to encode causal structure\, which makes them particularly suitable for sequence modeling. However\, the process by which transformers learn such causal structure via gradient-based training algorithms remains poorly understood. To better understand this process\, we introduce an in-context learning task that requires learning latent causal structure. We prove that gradient descent on a simplified two-layer transformer learns to solve this task by encoding the latent causal graph in the first attention layer. The key insight of our proof is that the gradient of the attention matrix encodes the mutual information between tokens. As a consequence of the data processing inequality\, the largest entries of this gradient correspond to edges in the latent causal graph. As a special case\, when the sequences are generated from in-context Markov chains\, we prove that transformers learn an induction head (Olsson et al.\, 2022). We confirm our theoretical findings by showing that transformers trained on our in-context learning task are able to recover a wide variety of causal structures. \n\nJason Lee is an associate professor in Electrical Engineering and Computer Science (secondary) at Princeton University. Prior to that\, he was in the Data Science and Operations department at the University of Southern California and a postdoctoral researcher at UC Berkeley working with Michael I. Jordan. Jason received his PhD at Stanford University\, advised by Trevor Hastie and Jonathan Taylor. His research interests are in the theory of machine learning\, optimization\, and statistics. Lately\, he has worked on the foundations of deep learning\, representation learning\, and reinforcement learning. He has received the Samsung AI Researcher of the Year Award\, the NSF CAREER Award\, the ONR Young Investigator Award in Mathematical Data Science\, a Sloan Research Fellowship\, the NeurIPS Best Student Paper Award\, recognition as a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization\, and a Princeton Commendation for Outstanding Teaching.
URL:https://tilos.ai/event/tilos-seminar-how-transformers-learn-causal-structure-with-gradient-descent/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/lee-jason-e1727126682884-UcJAUD.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20241210
DTEND;VALUE=DATE:20241211
DTSTAMP:20260403T110101
CREATED:20250904T180142Z
LAST-MODIFIED:20250904T182846Z
UID:7289-1733788800-1733875199@tilos.ai
SUMMARY:NSF Workshop on AI for Electronic Design Automation
DESCRIPTION:
URL:https://tilos.ai/event/nsf-workshop-on-ai-for-electronic-design-automation/
LOCATION:CA
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/webp:https://tilos.ai/wp-content/uploads/2024/10/circuitboard.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250129T110000
DTEND;TZID=America/Los_Angeles:20250129T123000
DTSTAMP:20260403T110101
CREATED:20250828T195813Z
LAST-MODIFIED:20250828T195813Z
UID:7301-1738148400-1738153800@tilos.ai
SUMMARY:TILOS Seminar: Unlearnable Facts Cause Hallucinations in Pretrained Language Models
DESCRIPTION:Adam Tauman Kalai\, OpenAI \nAbstract: Pretrained language models (LMs) tend to preserve many qualities present in their training data\, such as grammaticality\, formatting\, and politeness. However\, for specific types of factuality\, even LMs pretrained on factually correct statements tend to produce falsehoods at high rates. We explain these “hallucinations” by drawing a connection to binary classification\, enabling us to leverage insights from supervised learning. We prove that pretrained LMs (which are “calibrated”) fail to mimic criteria that cannot be learned. Our analysis explains why pretrained LMs hallucinate on facts such as people’s birthdays but not on systematic facts such as even vs. odd numbers.\nOf course\, LM pretraining is only one stage in the development of a chatbot\, and thus hallucinations are *not* inevitable in chatbots.\nThis is joint work with Santosh Vempala. \n\nAdam Tauman Kalai is a Research Scientist at OpenAI working on AI Safety and Ethics. He has worked in Algorithms\, Fairness\, Machine Learning Theory\, Game Theory\, and Crowdsourcing. He received his PhD from Carnegie Mellon University. He has served as an Assistant Professor at Georgia Tech and TTIC\, and is on the science team of the whale-translation Project CETI. He has co-chaired AI and crowdsourcing conferences and has numerous honors\, most notably the Majulook prize.
URL:https://tilos.ai/event/tilos-seminar-unlearnable-facts-cause-hallucinations-in-pretrained-language-models/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/kalai-adam-e1725645665625-utz75c.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250212T110000
DTEND;TZID=America/Los_Angeles:20250212T120000
DTSTAMP:20260403T110101
CREATED:20250828T195559Z
LAST-MODIFIED:20250828T195908Z
UID:7299-1739358000-1739361600@tilos.ai
SUMMARY:TILOS-SDSU Seminar: Challenging Estimation Problems in Vehicle Autonomy
DESCRIPTION:Rajesh Rajamani\, University of Minnesota \nAbstract: This talk presents some interesting problems in estimation related to vehicle autonomy. First\, a teleoperation application in which a remote operator can intervene to control an autonomous vehicle is considered. Fundamental challenges here include the need to design an effective teleoperation station\, bandwidth and time-criticality constraints in wireless communication\, and the need for a control system that can handle delays. A predictive display system that uses generative AI to estimate the current video display for the teleoperator from fusion of delayed camera and Lidar images is developed. By estimating trajectories of the ego vehicle and of other nearby vehicles on the road\, realistic intermediate updates of the remote vehicle environment are used to compensate for delayed camera data. A different estimation application involving the driving of a vehicle with automated steering control on snow-covered and rural roads is considered next. Since camera-based feedback of lane markers cannot be used\, sensor fusion algorithms and RTK-corrected GPS are utilized for lateral position estimation. Finally\, the modification of target vehicle tracking methods utilized on autonomous vehicles for use on other low-cost platforms is considered. Applications involving protection of vulnerable road users such as e-scooter riders\, bicyclists\, and construction zone workers are demonstrated. The fundamental theme underlying the different estimation problems in this seminar is the effective use of nonlinear vehicle dynamic models and novel nonlinear observer design algorithms. \n\nRajesh Rajamani obtained his M.S. and Ph.D. degrees from the University of California at Berkeley and his B.Tech degree from the Indian Institute of Technology at Madras. He joined the faculty in Mechanical Engineering at the University of Minnesota in 1998 where he is currently the Benjamin Y.H. 
Liu-TSI Endowed Chair Professor and Associate Director (Research) of the Minnesota Robotics Institute. His active research interests include estimation\, sensing and control for smart and autonomous systems.\nDr. Rajamani has co-authored over 190 journal papers and is a co-inventor on 20+ patents/patent applications. He is a Fellow of IEEE and ASME and has been a recipient of the CAREER award from the National Science Foundation\, the O. Hugo Schuck Award from the American Automatic Control Council\, the Ralph Teetor Award from SAE\, the Charles Stark Draper award from ASME\, and a number of best paper awards from journals and conferences. Several inventions from his laboratory have been commercialized through start-up ventures co-founded by industry executives. One of these companies\, Innotronics\, was recently recognized among the 35 Best University Start-Ups of 2016 by the US National Council of Entrepreneurial Tech Transfer.
URL:https://tilos.ai/event/tilos-sdsu-seminar-challenging-estimation-problems-in-vehicle-autonomy/
LOCATION:San Diego State University\, 5500 Campanile Dr\, San Diego\, 92182\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/rajamani-rajesh-e1725919938393-FsSjfr.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250219
DTEND;VALUE=DATE:20250221
DTSTAMP:20260403T110101
CREATED:20250904T180342Z
LAST-MODIFIED:20250904T183026Z
UID:7281-1739923200-1740095999@tilos.ai
SUMMARY:Secure AI for Health\, Defense\, and Beyond
DESCRIPTION:
URL:https://tilos.ai/event/secure-ai-for-health-defense-and-beyond/
LOCATION:CA
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/UCSD-e1737756262771-s0U7kP-e1757009005925.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250306T083000
DTEND;TZID=America/Los_Angeles:20250306T121500
DTSTAMP:20260403T110101
CREATED:20250828T193005Z
LAST-MODIFIED:20250828T193005Z
UID:7276-1741249800-1741263300@tilos.ai
SUMMARY:TILOS Tutorial on AI Alignment
DESCRIPTION:
URL:https://tilos.ai/event/tilos-tutorial-on-ai-alignment/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250312T110000
DTEND;TZID=America/Los_Angeles:20250312T120000
DTSTAMP:20260403T110101
CREATED:20250828T192527Z
LAST-MODIFIED:20250828T192602Z
UID:7295-1741777200-1741780800@tilos.ai
SUMMARY:TILOS Seminar: Synthetic Tasks as Testbeds for Attributing Model Behavior
DESCRIPTION:Surbhi Goel\, University of Pennsylvania \nAbstract: Understanding how different components of the machine learning pipeline—spanning data composition\, architectural choices\, and optimization dynamics—shape model behavior remains a fundamental challenge. In this talk\, I will argue that synthetic tasks\, which enable precise control over data distribution and task complexity\, serve as powerful testbeds for analyzing and attributing behaviors in deep learning. Focusing on the sparse parity learning problem\, a canonical task in learning theory\, I will present insights into: (1) the phenomenon of “hidden progress” in gradient-based optimization\, where models exhibit consistent advancement despite stagnating loss curves; (2) nuanced trade-offs between data\, compute\, model width\, and initialization that govern learning success; and (3) the role of progressive distillation in implicitly structuring curricula to accelerate feature learning. These findings highlight the utility of synthetic tasks in uncovering empirical insights into the mechanisms driving deep learning\, without the cost of training expensive models. This talk is based on joint work with a lot of amazing collaborators: Boaz Barak\, Ben Edelman\, Sham Kakade\, Bingbin Liu\, Eran Malach\, Sadhika Malladi\, Abhishek Panigrahi\, Andrej Risteski\, and Cyril Zhang. \n\nSurbhi Goel is the Magerman Term Assistant Professor of Computer and Information Science at the University of Pennsylvania. She is associated with the theory group\, the ASSET Center on safe\, explainable\, and trustworthy AI systems\, and the Warren Center for Network and Data Sciences. Surbhi’s research focuses on theoretical foundations of modern machine learning paradigms\, particularly deep learning\, and is supported by Microsoft Research and OpenAI. Previously\, she was a postdoctoral researcher at Microsoft Research NYC and completed her Ph.D. 
at the University of Texas at Austin under Adam Klivans\, receiving the UTCS Bert Kay Dissertation Award. She has also been a visiting researcher at IAS\, Princeton\, and the Simons Institute at UC Berkeley. Surbhi co-founded the Learning Theory Alliance (LeT‐All) and holds several leadership roles\, including Office Hours co-chair for ICLR 2024 and co-treasurer for the Association for Computational Learning Theory.
URL:https://tilos.ai/event/tilos-seminar-synthetic-tasks-as-testbeds-for-attributing-model-behavior/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/goel-surbhi-e1727126779765-U5P80t.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250317
DTEND;VALUE=DATE:20250318
DTSTAMP:20260403T110101
CREATED:20250904T181134Z
LAST-MODIFIED:20250904T182933Z
UID:7275-1742169600-1742255999@tilos.ai
SUMMARY:TILOS-Cisco AI + Security Workshop
DESCRIPTION:
URL:https://tilos.ai/event/tilos-cisco-ai-security-workshop/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:Internal Events,TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250327T140000
DTEND;TZID=America/Los_Angeles:20250327T150000
DTSTAMP:20260403T110101
CREATED:20250828T192427Z
LAST-MODIFIED:20250828T192653Z
UID:7273-1743084000-1743087600@tilos.ai
SUMMARY:TILOS Seminar: Single location regression and attention-based models
DESCRIPTION:Claire Boyer\, Université Paris-Saclay \nAbstract: Attention-based models\, such as Transformers\, excel across various tasks but lack a comprehensive theoretical understanding\, especially regarding token-wise sparsity and internal linear representations. To address this gap\, we introduce the single-location regression task\, where only one token in a sequence determines the output\, and its position is a latent random variable\, retrievable via a linear projection of the input. To solve this task\, we propose a dedicated predictor\, which turns out to be a simplified version of a non-linear self-attention layer. We study its theoretical properties by showing its asymptotic Bayes optimality and analyzing its training dynamics. In particular\, despite the non-convex nature of the problem\, the predictor effectively learns the underlying structure. This work highlights the capacity of attention mechanisms to handle sparse token information and internal linear structures.
URL:https://tilos.ai/event/tilos-seminar-single-location-regression-and-attention-based-models/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/boyer-claire-e1742860147959-s8d3nW.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250331
DTEND;VALUE=DATE:20250401
DTSTAMP:20260403T110101
CREATED:20250904T175539Z
LAST-MODIFIED:20250904T182652Z
UID:7282-1743379200-1743465599@tilos.ai
SUMMARY:Boston Symmetry Day 2025
DESCRIPTION:
URL:https://tilos.ai/event/boston-symmetry-day-2025/
LOCATION:CA
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/boston-symmetry-group-e1698445385321-eiga9L.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250402T110000
DTEND;TZID=America/Los_Angeles:20250402T120000
DTSTAMP:20260403T110101
CREATED:20250828T192344Z
LAST-MODIFIED:20260227T222401Z
UID:7287-1743591600-1743595200@tilos.ai
SUMMARY:TILOS Seminar: Foundational Methods for Foundation Models for Scientific Machine Learning
DESCRIPTION:Michael W. Mahoney\, ICSI\, LBNL\, and Department of Statistics\, UC Berkeley \nAbstract: The remarkable successes of ChatGPT in natural language processing (NLP) and related developments in computer vision (CV) motivate the question of what foundation models would look like and what new advances they would enable\, when built on the rich\, diverse\, multimodal data that are available from large-scale experimental and simulational data in scientific computing (SC)\, broadly defined. Such models could provide a robust and principled foundation for scientific machine learning (SciML)\, going well beyond simply using ML tools developed for internet and social media applications to help solve future scientific problems. I will describe recent work demonstrating the potential of the “pre-train and fine-tune” paradigm\, widely-used in CV and NLP\, for SciML problems\, demonstrating a clear path towards building SciML foundation models; as well as recent work highlighting multiple “failure modes” that arise when trying to interface data-driven ML methodologies with domain-driven SC methodologies\, demonstrating clear obstacles to traversing that path successfully. I will also describe initial work on developing novel methods to address several of these challenges\, as well as their implementations at scale\, a general solution to which will be needed to build robust and reliable SciML models consisting of millions or billions or trillions of parameters. \n\nMichael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He is also an Amazon Scholar as well as head of the Machine Learning and Analytics Group at the Lawrence Berkeley National Laboratory. He works on algorithmic and statistical aspects of modern large-scale data analysis. 
Much of his recent research has focused on large-scale machine learning\, including randomized matrix algorithms and randomized numerical linear algebra\, scientific machine learning\, scalable stochastic optimization\, geometric network analysis tools for structure extraction in large informatics graphs\, scalable implicit regularization methods\, computational methods for neural network analysis\, physics informed machine learning\, and applications in genetics\, astronomy\, medical imaging\, social network analysis\, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics\, and he has worked and taught at Yale University in the mathematics department\, at Yahoo Research\, and at Stanford University in the mathematics department. Among other things\, he was on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI)\, he was on the National Research Council’s Committee on the Analysis of Massive Data\, he co-organized the Simons Institute’s fall 2013 and 2018 programs on the foundations of data science\, he ran the Park City Mathematics Institute’s 2016 PCMI Summer Session on The Mathematics of Data\, he ran the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets\, and he was the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley. More information is available at https://www.stat.berkeley.edu/~mmahoney/.
URL:https://tilos.ai/event/tilos-seminar-foundational-methods-for-foundation-models-for-scientific-machine-learning/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/mahoney-michael-e1733251484543-1e6Odv.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250416T110000
DTEND;TZID=America/Los_Angeles:20250416T120000
DTSTAMP:20260403T110101
CREATED:20250828T192233Z
LAST-MODIFIED:20260227T222458Z
UID:7286-1744801200-1744804800@tilos.ai
SUMMARY:TILOS Seminar: Amplifying human performance in combinatorial competitive programming
DESCRIPTION:Petar Veličković\, Google DeepMind \nAbstract: Recent years have seen a significant surge in complex AI systems for competitive programming\, capable of performing at admirable levels against human competitors. While steady progress has been made\, the highest percentiles still remain out of reach for these methods on standard competition platforms such as Codeforces. In this talk\, I will describe and dive into our recent work\, where we focussed on combinatorial competitive programming. In combinatorial challenges\, the target is to find as-good-as-possible solutions to otherwise computationally intractable problems\, over specific given inputs. We hypothesise that this scenario offers a unique testbed for human-AI synergy\, as human programmers can write a backbone of a heuristic solution\, after which AI can be used to optimise the scoring function used by the heuristic. We deploy our approach on previous iterations of Hash Code\, a global team programming competition inspired by NP-hard software engineering problems at Google\, and we leverage FunSearch to evolve our scoring functions. Our evolved solutions significantly improve the attained scores from their baseline\, successfully breaking into the top percentile on all previous Hash Code online qualification rounds\, and outperforming the top human teams on several. To the best of our knowledge\, this is the first known AI-assisted top-tier result in competitive programming.
URL:https://tilos.ai/event/tilos-seminar-amplifying-human-performance-in-combinatorial-competitive-programming/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/velickovic-petar-e1736275993608-TwwARw.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250417
DTEND;VALUE=DATE:20250419
DTSTAMP:20260403T110101
CREATED:20250401T180604Z
LAST-MODIFIED:20250904T182557Z
UID:7280-1744848000-1745020799@tilos.ai
SUMMARY:HOT-AI: Horizons for Optimization in AI Workshop
DESCRIPTION:
URL:https://tilos.ai/event/hot-ai-horizons-for-optimization-in-ai-workshop/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250523T110000
DTEND;TZID=America/Los_Angeles:20250523T120000
DTSTAMP:20260403T110101
CREATED:20250828T192125Z
LAST-MODIFIED:20260227T222820Z
UID:7272-1747998000-1748001600@tilos.ai
SUMMARY:TILOS Seminar: Optimal Quantization for LLMs and Matrix Multiplication
DESCRIPTION:Yury Polyanskiy\, MIT \nAbstract: The main building block of large language models is matrix multiplication\, which is often bottlenecked by the speed of loading these matrices from memory. A number of recent quantization algorithms (SmoothQuant\, GPTQ\, QuIP\, SpinQuant\, etc.) address this issue by storing matrices in lower precision. We derive the optimal asymptotic information-theoretic tradeoff between accuracy of the matrix product and compression rate (number of bits per matrix entry). We also show that a non-asymptotic version of our construction (based on nested Gosset lattices and Conway-Sloane decoding)\, which we call NestQuant\, reduces perplexity deterioration almost three-fold compared to the state-of-the-art algorithms (as measured on Llama-2 and Llama-3 with 8B to 70B parameters). Based on a joint work with Or Ordentlich (HUJI)\, Eitan Porat and Semyon Savkin (MIT EECS). \n\nYury Polyanskiy is a Cutten Professor of Electrical Engineering and Computer Science\, a member of IDSS and LIDS at MIT\, and an IEEE Fellow (2024). Yury received his M.S. degree in applied mathematics and physics from the Moscow Institute of Physics and Technology in 2005 and his Ph.D. degree in electrical engineering from Princeton University in 2010. His research interests span information theory\, machine learning and statistics. Dr. Polyanskiy won the 2020 IEEE Information Theory Society James Massey Award\, 2013 NSF CAREER award and 2011 IEEE Information Theory Society Paper Award.
URL:https://tilos.ai/event/tilos-seminar-optimal-quantization-for-llms-and-matrix-multiplication/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/04/polyanskiy-yuri.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250602
DTEND;VALUE=DATE:20250603
DTSTAMP:20260403T110101
CREATED:20250904T174234Z
LAST-MODIFIED:20250904T183243Z
UID:7531-1748822400-1748908799@tilos.ai
SUMMARY:TILOS Industry Day 2025
DESCRIPTION:
URL:https://tilos.ai/event/tilos-industry-day-2025/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251001T110000
DTEND;TZID=America/Los_Angeles:20251001T120000
DTSTAMP:20260403T110101
CREATED:20250828T192015Z
LAST-MODIFIED:20260304T210603Z
UID:7259-1759316400-1759320000@tilos.ai
SUMMARY:TILOS-HDSI Seminar: A New Paradigm for Learning with Distribution Shift
DESCRIPTION:Adam Klivans\, The University of Texas at Austin \nAbstract: We revisit the fundamental problem of learning with distribution shift\, where a learner is given labeled samples from training distribution D\, unlabeled samples from test distribution D′ and is asked to output a classifier with low test error. The standard approach in this setting is to prove a generalization bound in terms of some notion of distance between D and D′. These distances\, however\, are difficult to compute\, and this has been the main stumbling block for efficient algorithm design over the last two decades. \nWe sidestep this issue and define a new model called TDS learning\, where a learner runs a test on the training set and is allowed to reject if this test detects distribution shift relative to a fixed output classifier. This approach leads to the first set of efficient algorithms for learning with distribution shift that make no assumptions about the test distribution. Finally\, we discuss how our techniques have recently been used to solve longstanding problems in supervised learning with contamination. \n\nAdam Klivans is a Professor of Computer Science at the University of Texas at Austin and Director of the NSF AI Institute for Foundations of Machine Learning (IFML). His research interests lie in machine learning and theoretical computer science\, in particular\, Learning Theory\, Computational Complexity\, Pseudorandomness\, Limit Theorems\, and Gaussian Space. Dr. Klivans is a recipient of the NSF CAREER Award and serves on the editorial board for the Theory of Computing and Machine Learning Journal.
URL:https://tilos.ai/event/tilos-seminar-with-adam-klivans/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/klivans-adam-e1756405638325.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251024T110000
DTEND;TZID=America/Los_Angeles:20251024T120000
DTSTAMP:20260403T110101
CREATED:20250925T175700Z
LAST-MODIFIED:20260304T210610Z
UID:7611-1761303600-1761307200@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: High-dimensional Optimization with Applications to Compute-Optimal Neural Scaling Laws
DESCRIPTION:Courtney Paquette\, McGill University \nAbstract: Given the massive scale of modern ML models\, we now only get a single shot to train them effectively. This restricts our ability to test multiple architectures and hyper-parameter configurations. Instead\, we need to understand how these models scale\, allowing us to experiment with smaller problems and then apply those insights to larger-scale models. In this talk\, I will present a framework for analyzing scaling laws in stochastic learning algorithms using a power-law random features model (PLRF)\, leveraging high-dimensional probability and random matrix theory. I will then use this scaling law to address the compute-optimal question: How should we choose model size and hyper-parameters to achieve the best possible performance in the most compute-efficient manner? Then using this PLRF model\, I will devise a new momentum-based algorithm that (provably) improves the scaling law exponent. Finally\, I will present some numerical experiments on LSTMs that show how this new stochastic algorithm can be applied to real data to improve the compute-optimal exponent. \n\nCourtney Paquette is an assistant professor at McGill University in the Mathematics and Statistics department\, a CIFAR AI Chair (MILA)\, and an active member of the Montreal Machine Learning Optimization Group (MTL MLOpt) at MILA. Her research broadly focuses on designing and analyzing algorithms for large-scale optimization problems\, motivated by applications in data science\, and using techniques that draw from a variety of fields\, including probability\, complexity theory\, and convex and nonsmooth analysis. Dr. Paquette is a lead organizer of the OPT-ML Workshop at NeurIPS since 2020\, and a lead organizer (and original creator) of the High-dimensional Learning Dynamics (HiLD) Workshop at ICML.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-courtney-paquette-mcgill-university/
LOCATION:CSE 1242 and Virtual\, 3235 Voigt Dr\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/paquette-courtney-scaled-e1758822988381.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251024T120000
DTEND;TZID=America/Los_Angeles:20251024T133000
DTSTAMP:20260403T110101
CREATED:20251021T183004Z
LAST-MODIFIED:20251021T183004Z
UID:7684-1761307200-1761312600@tilos.ai
SUMMARY:Student and Postdoc Lunch at Zanzibar Cafe
DESCRIPTION:Join fellow TILOS students and postdoctoral researchers for an informal lunch at Zanzibar Cafe\, located on the second floor of Price Center.
URL:https://tilos.ai/event/student-and-postdoc-lunch-at-zanzibar-cafe/
LOCATION:Zanzibar Cafe at UC San Diego
CATEGORIES:Internal Events
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/10/zanzibar-e1761058377808.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251112T110000
DTEND;TZID=America/Los_Angeles:20251112T120000
DTSTAMP:20260403T110101
CREATED:20251104T173955Z
LAST-MODIFIED:20260304T210641Z
UID:7730-1762945200-1762948800@tilos.ai
SUMMARY:TILOS-HDSI Seminar: AI safety theory: the missing middle ground
DESCRIPTION:Adam Oberman\, McGill University \nAbstract: Over the past few years\, the capabilities of generative artificial intelligence (AI) systems have advanced rapidly. Along with the benefits of AI\, there is also a risk of harm. In order to benefit from AI while mitigating the risks\, we need a grounded theoretical framework. \nThe current AI safety theory\, which predates generative AI\, is insufficient. Most theoretical AI safety results tend to reason absolutely: a system is “aligned” or “misaligned”\, “honest” or “dishonest”. But in practice safety is probabilistic\, not absolute. The missing middle ground is a quantitative or relative theory of safety — a way to reason formally about degrees of safety. Such a theory is required for defining safety and harms\, and is essential for technical solutions as well as for making good policy decisions. \nIn this talk I will: \n\nReview current AI risks (from misuse\, from lack of reliability\, and systemic risks to the economy) as well as important future risks (lack of control).\nReview theoretical predictions of bad AI behavior and discuss experiments which demonstrate that they can occur in current LLMs.\nExplain why technical and theoretical safety solutions are valuable\, even from contributors outside the major labs.\nDiscuss some gaps in the theory and present some open problems which could address the gaps.\n\n\nAdam Oberman is a Full Professor of Mathematics and Statistics at McGill University\, a Canada CIFAR AI Chair\, and an Associate Member of Mila. He is a research collaborator at LawZero\, Yoshua Bengio’s AI Safety Institute. He has been researching AI safety since 2024. His research spans generative models\, reinforcement learning\, optimization\, calibration\, and robustness. Earlier in his career\, he made significant contributions to optimal transport and nonlinear partial differential equations. 
He earned degrees from the University of Toronto and the University of Chicago\, and previously held faculty and postdoctoral positions at Simon Fraser University and the University of Texas at Austin.
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-adam-oberman-mcgill-ai-safety-theory-the-missing-middle-ground/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/webp:https://tilos.ai/wp-content/uploads/2025/11/oberman-adam-e1762277416983.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251119T110000
DTEND;TZID=America/Los_Angeles:20251119T120000
DTSTAMP:20260403T110101
CREATED:20251105T193505Z
LAST-MODIFIED:20260227T215217Z
UID:7735-1763550000-1763553600@tilos.ai
SUMMARY:TILOS-SDSU Seminar: Certifiably Correct Machine Perception
DESCRIPTION:David Rosen\, Northeastern University \nAbstract: Many fundamental machine perception and state estimation tasks require the solution of a high-dimensional nonconvex estimation problem; this class includes (for example) the fundamental problems of simultaneous localization and mapping (in robotics)\, 3D reconstruction (in computer vision)\, and sensor network localization (in distributed sensing). Such problems are known to be computationally hard in general\, with many local minima that can entrap the smooth local optimization methods commonly applied to solve them. The result is that standard machine perception algorithms (based upon local optimization) can be surprisingly brittle\, often returning egregiously wrong answers even when the problem to which they are applied is well-posed. \nIn this talk\, we present a novel class of certifiably correct estimation algorithms that are capable of efficiently recovering provably good (often globally optimal) solutions of generally-intractable machine perception problems in many practical settings. Our approach directly tackles the problem of nonconvexity by employing convex relaxations whose minimizers provide provably good approximate solutions to the original estimation problem under moderate measurement noise. We illustrate the design of this class of methods using the fundamental problem of pose-graph optimization (a mathematical abstraction of robotic mapping) as a running example. We conclude with a brief discussion of open questions and future research directions. \n\nDavid M. Rosen is an Assistant Professor in the Departments of Electrical & Computer Engineering and Mathematics and the Khoury College of Computer Sciences (by courtesy) at Northeastern University\, where he leads the Robust Autonomy Laboratory (NEURAL). 
 Prior to joining Northeastern\, he was a Research Scientist at Oculus Research (now Meta Reality Labs) from 2016 to 2018\, and a Postdoctoral Associate at MIT’s Laboratory for Information and Decision Systems (LIDS) from 2018 to 2021. He holds a B.S. in Mathematics from the California Institute of Technology (2008)\, an M.A. in Mathematics from the University of Texas at Austin (2010)\, and an Sc.D. in Computer Science from the Massachusetts Institute of Technology (2016). \n\nHe is broadly interested in the mathematical and algorithmic foundations of trustworthy machine perception\, learning\, and control. His work has been recognized with the IEEE Transactions on Robotics Best Paper Award (2024)\, an Honorable Mention for the IEEE Transactions on Robotics Best Paper Award (2021)\, a Best Student Paper Award at Robotics: Science and Systems (2020)\, a Best Paper Award at the International Workshop on the Algorithmic Foundations of Robotics (2016)\, and selection as an RSS Pioneer (2019).
URL:https://tilos.ai/event/tilos-sdsu-seminar-with-david-rosen-northeastern/
LOCATION:Lamden Hall 341 (SDSU) and Virtual\, San Diego\, CA\, 92182\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/11/rosen-david-scaled-e1762371210779.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251201
DTEND;VALUE=DATE:20251203
DTSTAMP:20260403T110101
CREATED:20250903T222016Z
LAST-MODIFIED:20250908T162145Z
UID:7473-1764547200-1764719999@tilos.ai
SUMMARY:Workshop on Topology\, Algebra\, and Geometry in Data Science (co-located with NeurIPS 2025)
DESCRIPTION:We are thrilled to announce the first official TAG-DS Stand-Alone Event: TAG… We’re it! This will be a two-day event\, December 1 & 2\, 2025\, featuring keynotes\, poster sessions\, spotlight talks\, collaboration activities\, and community development. The dates and location were selected to align with NeurIPS 2025: twice the fun! The event will be hosted on the University of California San Diego campus both days and is readily accessible by public transit from downtown for those already planning to attend NeurIPS. There will be an associated Proceedings of Machine Learning Research volume for papers submitted to the archival track.
URL:https://tilos.ai/event/topology-algebra-and-geometry-in-data-science-2025/
LOCATION:UC San Diego\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/09/TAG-DS_logo-1-e1756938002600.png
END:VEVENT
END:VCALENDAR