BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for TILOS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20211107T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20221116T100000
DTEND;TZID=America/Los_Angeles:20221116T110000
DTSTAMP:20260403T111829Z
CREATED:20250904T172450Z
LAST-MODIFIED:20250904T172450Z
UID:7353-1668592800-1668596400@tilos.ai
SUMMARY:TILOS Seminar: Rare Gems: Finding Lottery Tickets at Initialization
DESCRIPTION:Dimitris Papailiopoulos\, Associate Professor\, University of Wisconsin–Madison \nAbstract: Large neural networks can be pruned to a small fraction of their original size\, with little loss in accuracy\, by following a time-consuming “train\, prune\, re-train” approach. Frankle & Carbin in 2019 conjectured that we can avoid this by training lottery tickets\, i.e.\, special sparse subnetworks found at initialization that can be trained to high accuracy. However\, a subsequent line of work presents concrete evidence that current algorithms for finding trainable networks at initialization fail simple baseline comparisons\, e.g.\, against training random sparse subnetworks. Finding lottery tickets that train to better accuracy compared to simple baselines remains an open problem. In this work\, we resolve this open problem by discovering Rare Gems: sparse\, trainable networks at initialization that achieve high accuracy even before training. When Rare Gems are trained with SGD\, they achieve accuracy competitive with or better than Iterative Magnitude Pruning (IMP) with warmup.
URL:https://tilos.ai/event/tilos-seminar-rare-gems-finding-lottery-tickets-at-initialization/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/10/papailiopoulos-dimitris-1-e1711660394297.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230118T100000
DTEND;TZID=America/Los_Angeles:20230118T110000
DTSTAMP:20260403T111829Z
CREATED:20250904T173009Z
LAST-MODIFIED:20250904T173009Z
UID:7352-1674036000-1674039600@tilos.ai
SUMMARY:TILOS Seminar: Causal Discovery for Root Cause Analysis
DESCRIPTION:Murat Kocaoglu\, Assistant Professor\, Purdue University \nAbstract: Cause-effect relations are crucial for several fields\, from medicine to policy design as they inform us of the outcomes of our actions a priori. However\, causal knowledge is hard to curate for complex systems that might be changing frequently. Causal discovery algorithms allow us to extract causal knowledge from the available data. In this talk\, first\, we provide a short introduction to algorithmic causal discovery. Next\, we propose a novel causal discovery algorithm from a collection of observational and interventional datasets in the presence of unobserved confounders\, with unknown intervention targets. Finally\, we demonstrate the effectiveness of our algorithm for root-cause analysis in microservice architectures.
URL:https://tilos.ai/event/tilos-seminar-causal-discovery-for-root-cause-analysis/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/10/kocaoglu-murat-e1757007002404.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230215T100000
DTEND;TZID=America/Los_Angeles:20230215T110000
DTSTAMP:20260403T111829Z
CREATED:20250904T173100Z
LAST-MODIFIED:20250904T173100Z
UID:7351-1676455200-1676458800@tilos.ai
SUMMARY:TILOS Seminar: Engineering the Future of Software with AI
DESCRIPTION:Dr. Ruchir Puri\, Chief Scientist\, IBM Research\, IBM Fellow\, Vice-President IBM Corporate Technology \nAbstract: Software has become woven into every aspect of our society\, and it would be fair to say that “Software has eaten the world.” More recently\, advances in AI are starting to transform every aspect of our society as well. These two tectonic forces of transformation\, software and AI\, are colliding\, resulting in a seismic shift: a future where software itself will be built\, maintained\, and operated by AI\, pushing us towards a future where “Computers can program themselves!” In this talk\, we will discuss these forces of “AI for Code” and how the future of software engineering is being redefined by AI.
URL:https://tilos.ai/event/tilos-seminar-engineering-the-future-of-software-with-ai/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/09/puri-ruchir.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230419T100000
DTEND;TZID=America/Los_Angeles:20230419T110000
DTSTAMP:20260403T111829Z
CREATED:20250903T184737Z
LAST-MODIFIED:20250903T184902Z
UID:7365-1681898400-1681902000@tilos.ai
SUMMARY:TILOS Seminar: ML Training Strategies Inspired by Humans’ Learning Skills
DESCRIPTION:Pengtao Xie\, Assistant Professor\, UC San Diego \nAbstract: Humans\, as the most powerful learners on the planet\, have accumulated a lot of learning skills\, such as learning through tests\, interleaving learning\, self-explanation\, active recalling\, to name a few. These learning skills and methodologies enable humans to learn new topics more effectively and efficiently. We are interested in investigating whether humans’ learning skills can be borrowed to help machines to learn better. Specifically\, we aim to formalize these skills and leverage them to train better machine learning (ML) models. To achieve this goal\, we develop a general framework\, Skillearn\, which provides a principled way to represent humans’ learning skills mathematically and use the formally-represented skills to improve the training of ML models. In two case studies\, we apply Skillearn to formalize two learning skills of humans: learning by passing tests and interleaving learning\, and use the formalized skills to improve neural architecture search. \n\nPengtao Xie is an assistant professor at UC San Diego. He received his PhD from the Machine Learning Department at Carnegie Mellon University in 2018. His research interests lie in machine learning inspired by human learning and its applications in healthcare. His research outcomes have been adopted by medical device companies\, medical imaging centers\, hospitals\, etc. and have been published at top-tier artificial intelligence conferences and journals including ICML\, NeurIPS\, ACL\, ICCV\, TACL\, etc. He is the recipient of the Tencent AI-Lab Faculty Award\, Tencent WeChat Faculty Award\, the Innovator Award presented by the Pittsburgh Business Times\, the Siebel Scholars award\, and the Goldman Sachs Global Leader Scholarship.
URL:https://tilos.ai/event/tilos-seminar-ml-training-strategies-inspired-by-humans-learning-skills/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/10/xie-pengtao-scaled-e1696371691928.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230426T090000
DTEND;TZID=America/Los_Angeles:20230426T100000
DTSTAMP:20260403T111829Z
CREATED:20250828T204254Z
LAST-MODIFIED:20250828T204254Z
UID:7361-1682499600-1682503200@tilos.ai
SUMMARY:TILOS-OPTML++ Seminar: Sums of Squares: from Algebra to Analysis
DESCRIPTION:Francis Bach\, Inria\, ENS\, and PSL Paris \nAbstract: The representation of non-negative functions as sums of squares has become an important tool in many modeling and optimization tasks. Traditionally applied to polynomial functions\, it requires rich tools from algebraic geometry that led to many developments in the last twenty years. In this talk\, I will look at this problem from a functional analysis point of view\, leading to new applications and new results on the performance of sum-of-squares optimization. \n\nFrancis Bach is a researcher at Inria\, leading since 2011 the machine learning team\, which is part of the Computer Science department at Ecole Normale Supérieure. He graduated from Ecole Polytechnique in 1997 and completed his Ph.D. in Computer Science at U.C. Berkeley in 2005\, working with Professor Michael Jordan. He spent two years in the Mathematical Morphology group at Ecole des Mines de Paris\, then he joined the computer vision project-team at Inria/Ecole Normale Supérieure from 2007 to 2010. Francis Bach is primarily interested in machine learning\, and especially in sparse methods\, kernel-based learning\, large-scale optimization\, computer vision and signal processing. He obtained in 2009 a Starting Grant and in 2016 a Consolidator Grant from the European Research Council\, and received the Inria young researcher prize in 2012\, the ICML test-of-time award in 2014 and 2019\, as well as the Lagrange prize in continuous optimization in 2018\, and the Jean-Jacques Moreau prize in 2019. He was elected in 2020 to the French Academy of Sciences. In 2015\, he was program co-chair of the International Conference on Machine Learning (ICML)\, and general chair in 2018; he is now co-editor-in-chief of the Journal of Machine Learning Research.
URL:https://tilos.ai/event/tilos-optml-seminar-sums-of-squares-from-algebra-to-analysis/
LOCATION:Virtual
CATEGORIES:TILOS - OPTML++ Seminar Series,TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/francis_bach_septembre_2016_small-e1711659265321-yFIGFR.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230515T143000
DTEND;TZID=America/Los_Angeles:20230515T153000
DTSTAMP:20260403T111829Z
CREATED:20250828T202516Z
LAST-MODIFIED:20250828T204126Z
UID:7336-1684161000-1684164600@tilos.ai
SUMMARY:TILOS Seminar: The Hidden Convex Optimization Landscape of Deep Neural Networks
DESCRIPTION:Mert Pilanci\, Stanford University \nAbstract: Since deep neural network training problems are inherently non-convex\, their recent dramatic success largely relies on non-convex optimization heuristics and experimental findings. Despite significant advancements\, the non-convex nature of neural network training poses two central challenges: first\, understanding the underlying mechanisms that contribute to model performance\, and second\, achieving efficient training with low computational cost and energy consumption. The performance of non-convex models is notably influenced by the selection of optimization methods and hyperparameters\, including initialization\, mini-batching\, and step sizes. Conversely\, convex optimization problems are characterized by their robustness to these choices\, allowing for the efficient and consistent achievement of globally optimal solutions\, irrespective of optimization parameters. In this talk\, we explore a novel perspective by examining multilayer neural networks equipped with ReLU activation functions through the framework of convex optimization. We introduce exact convex optimization formulations of ReLU network training problems. We show that two-layer ReLU networks can be globally trained via convex programs with the number of variables polynomial in the number of training samples\, feature dimension\, and the number of hidden neurons. We show that our analysis extends to deeper networks and that these convex programs possess an intuitive geometric interpretation. Our results provide an equivalent characterization of neural networks as convex models where a mixture of locally linear models are fitted to the data with sparsity inducing convex regularization. Moreover\, we show that standard convolutional neural networks can be globally optimized in fully polynomial time. We discuss extensions to batch normalization\, generative adversarial networks and transformers. 
 Finally\, we present numerical simulations verifying our claims and illustrating that the proposed convex approach is faster and more reliable than standard local search heuristics such as SGD and variants. \n\nMert Pilanci is an assistant professor of Electrical Engineering at Stanford University. He received his Ph.D. in Electrical Engineering and Computer Science from UC Berkeley in 2016. Prior to joining Stanford\, he was an assistant professor of Electrical Engineering and Computer Science at the University of Michigan. In 2017\, he was a Math+X postdoctoral fellow working with Emmanuel Candès at Stanford University. Mert’s research interests are in neural networks\, machine learning\, optimization\, and signal processing. His group develops theory and algorithms for solving large scale optimization problems in machine learning. His research also seeks to develop safe and interpretable artificial intelligence and information theoretic foundations of distributed computing.
URL:https://tilos.ai/event/tilos-seminar-the-hidden-convex-optimization-landscape-of-deep-neural-networks/
LOCATION:Virtual
CATEGORIES:TILOS - OPTML++ Seminar Series,TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/10/pilanci-mert-e1756408324872.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230519T100000
DTEND;TZID=America/Los_Angeles:20230519T110000
DTSTAMP:20260403T111829Z
CREATED:20250904T171300Z
LAST-MODIFIED:20250904T171300Z
UID:7337-1684490400-1684494000@tilos.ai
SUMMARY:TILOS Seminar: Learning from Diverse and Small Data
DESCRIPTION:Ramya Korlakai Vinayak\, Assistant Professor\, University of Wisconsin–Madison \nAbstract: Machine learning (ML) algorithms are becoming ubiquitous in various application domains such as public health\, genomics\, psychology\, and social sciences. In these domains\, data is often obtained from populations that are diverse\, e.g.\, varying demographics\, phenotypes\, preferences\, etc. Many ML algorithms focus on learning model parameters that work well on average over the population but do not capture the diversity. On the other hand\, such datasets usually have few observations per individual\, which limits our ability to learn about each individual separately. The question of interest in these scenarios is: how can we reliably capture the diversity in the data in small data settings? \nIn this talk\, we will address this question in the following settings: \n(i) In many applications\, we observe count data which can be modeled as Binomial (e.g.\, polling\, surveys\, epidemiology) or Poisson (e.g.\, single cell RNA data) data. As a single parameter or a finite set of parameters does not capture the diversity of the population in such datasets\, they are often modeled as nonparametric mixtures. In this setting\, we will address the following question: “how well can we learn the distribution of parameters over the population without learning the individual parameters?” and show that nonparametric maximum likelihood estimators are in fact minimax optimal. \n(ii) Learning preferences from human judgements using comparison queries plays a crucial role in cognitive and behavioral psychology\, crowdsourcing democracy\, surveys in social science applications\, and recommendation systems. Models in the literature often focus on learning the average preference over the population due to the limitations on the amount of data available per individual. We will discuss some recent results on how we can reliably capture diversity in preferences while pooling together data from individuals. \n\nRamya Korlakai Vinayak is an assistant professor in the Dept. of ECE and affiliated faculty in the Dept. of Computer Science and the Dept. of Statistics at the University of Wisconsin–Madison. Her research interests span the areas of machine learning\, statistical inference\, and crowdsourcing. Her work focuses on addressing theoretical and practical challenges that arise when learning from societal data. Prior to joining UW Madison\, Ramya was a postdoctoral researcher in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. She received her Ph.D. in Electrical Engineering from Caltech. She obtained her Master’s from Caltech and her Bachelor’s from IIT Madras. She is a recipient of the Schlumberger Foundation Faculty of the Future fellowship from 2013-15\, and was an invited participant at the Rising Stars in EECS workshop in 2019. She is the recipient of an NSF CAREER Award (2023-2028).
URL:https://tilos.ai/event/tilos-seminar-learning-from-diverse-and-small-data/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/vinayak-ramya-e1711658956146-NwHzUB.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230602T090000
DTEND;TZID=America/Los_Angeles:20230602T100000
DTSTAMP:20260403T111829Z
CREATED:20250828T204002Z
LAST-MODIFIED:20250903T234009Z
UID:7335-1685696400-1685700000@tilos.ai
SUMMARY:AI Ethics Roundtable
DESCRIPTION:The TILOS Ethics and Early Career Committee invites you to an upcoming round table discussion on AI Ethics. This will take place virtually through Zoom on Friday\, June 2\, 2023 at 9am Pacific / 11am Central / Noon Eastern. \nPlease join Dr. Nisheeth Vishnoi from Yale\, Dr. David Danks from UC San Diego\, and Dr. Hoda Heidari from Carnegie Mellon University as we discuss a variety of aspects of AI Ethics with our moderators Dr. Stefanie Jegelka from MIT and Dr. Jodi Reeves from National University. This event is a great opportunity for TILOS students to learn about the constantly evolving issues of AI Ethics in research and the societal impact of AI. It will also provide a platform for students to gain insights and valuable advice that can help them in their future career pursuits. \n\nNisheeth Vishnoi is the A. Bartlett Giamatti Professor of Computer Science and a co-founder of the Computation and Society Initiative at Yale University. He studies the foundations of computation\, and his research spans several areas of theoretical computer science\, optimization\, and machine learning. He is also interested in understanding nature and society from a computational viewpoint. Here\, his current focus includes understanding the emergence of intelligence and developing methods to address ethical issues at the interface of artificial intelligence and humanity. \n\nDavid Danks is Professor of Data Science & Philosophy and affiliate faculty in Computer Science & Engineering at University of California\, San Diego. His research interests range widely across philosophy\, cognitive science\, and machine learning\, including their intersection. Danks has examined the ethical\, psychological\, and policy issues around AI and robotics across multiple sectors\, including transportation\, healthcare\, privacy\, and security. 
 He has also done significant research in computational cognitive science and developed multiple novel causal discovery algorithms for complex types of observational and experimental data. Danks is the recipient of a James S. McDonnell Foundation Scholar Award\, as well as an Andrew Carnegie Fellowship. He currently serves on multiple advisory boards\, including the National AI Advisory Committee. \n\nHoda Heidari is an Assistant Professor in Machine Learning and Societal Computing at the School of Computer Science\, Carnegie Mellon University. Her research is broadly concerned with the social\, ethical\, and economic implications of Artificial Intelligence. In particular\, her research addresses issues of unfairness and accountability through Machine Learning. Her work in this area has won a best-paper award at the ACM Conference on Fairness\, Accountability\, and Transparency (FAccT) and an exemplary track award at the ACM Conference on Economics and Computation (EC). She has organized several scholarly events on topics related to Responsible and Trustworthy AI\, including a tutorial at the Web Conference (WWW) and several workshops at the Neural Information Processing Systems (NeurIPS) conference. Dr. Heidari completed her doctoral studies in Computer and Information Science at the University of Pennsylvania. She holds an M.Sc. degree in Statistics from the Wharton School of Business. Before joining Carnegie Mellon as a faculty member\, she was a postdoctoral scholar at the Machine Learning Institute of ETH Zurich\, followed by a year at the Artificial Intelligence\, Policy\, and Practice (AIPP) initiative at Cornell University.
URL:https://tilos.ai/event/ai-ethics-roundtable/
LOCATION:Virtual
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230918T100000
DTEND;TZID=America/Los_Angeles:20230918T110000
DTSTAMP:20260403T111829Z
CREATED:20250828T203818Z
LAST-MODIFIED:20250828T203818Z
UID:7327-1695031200-1695034800@tilos.ai
SUMMARY:TILOS Seminar: Machine Learning from Weak\, Noisy\, and Biased Supervision
DESCRIPTION:Masashi Sugiyama\, University of Tokyo and RIKEN \nAbstract: In statistical inference and machine learning\, we face a variety of uncertainties such as training data with insufficient information\, label noise\, and bias. In this talk\, I will give an overview of our research on reliable machine learning\, including weakly supervised classification (positive unlabeled classification\, positive confidence classification\, complementary label classification\, etc.)\, noisy label classification (noise transition estimation\, instance-dependent noise\, clean sample selection\, etc.)\, and transfer learning (joint importance-predictor estimation for covariate shift adaptation\, dynamic importance estimation for full distribution shift\, continuous distribution shift\, etc.).
URL:https://tilos.ai/event/tilos-seminar-machine-learning-from-weak-noisy-and-biased-supervision/
LOCATION:Virtual
CATEGORIES:TILOS - OPTML++ Seminar Series,TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/Sugiyama-1-e1711659352629-5zhb7G.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231004T093000
DTEND;TZID=America/Los_Angeles:20231004T103000
DTSTAMP:20260403T111829Z
CREATED:20250828T203648Z
LAST-MODIFIED:20250828T203648Z
UID:7330-1696411800-1696415400@tilos.ai
SUMMARY:TILOS Fireside Chat on Theory in the Age of Modern AI
DESCRIPTION:The first TILOS Fireside Chat of Fall 2023 will be a conversation about theory in the age of modern AI led by TILOS members Nisheeth Vishnoi\, Tara Javidi\, Misha Belkin\, and Arya Mazumdar (moderator). This will be a great opportunity to discuss implications of AI and roles of theory (especially with the recent development in LLMs)\, and an exciting way to start the third year of TILOS!
URL:https://tilos.ai/event/tilos-fireside-chat-on-theory-in-the-age-of-modern-ai/
LOCATION:Virtual
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231011T100000
DTEND;TZID=America/Los_Angeles:20231011T110000
DTSTAMP:20260403T111829Z
CREATED:20250828T203527Z
LAST-MODIFIED:20250828T203527Z
UID:7332-1697018400-1697022000@tilos.ai
SUMMARY:TILOS Seminar: Towards Foundation Models for Graph Reasoning and AI 4 Science
DESCRIPTION:Michael Galkin\, Research Scientist\, Intel AI Lab \nAbstract: Foundation models in graph learning are hard to design due to the lack of common invariances that transfer across different structures and domains. In this talk\, I will give an overview of the two main tracks of my research at Intel AI: creating foundation models for knowledge graph reasoning that can run zero-shot inference on any multi-relational graphs\, and foundation models for materials discovery in the AI4Science domain that capture physical properties of crystal structures and transfer to a variety of predictive and generative tasks. We will also talk about theoretical and practical challenges like scaling behavior\, data scarcity\, and diverse evaluation of foundation graph models. \n\nMichael Galkin is a Research Scientist at Intel AI Lab in San Diego working on Graph Machine Learning and Geometric Deep Learning. Previously\, he was a postdoc at Mila–Quebec AI Institute with Will Hamilton\, Reihaneh Rabbany\, and Jian Tang\, focusing on many graph representation learning problems. Sometimes\, Mike writes long blog posts on Medium about graph learning.
URL:https://tilos.ai/event/tilos-seminar-towards-foundation-models-for-graph-reasoning-and-ai-4-science/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/galkin-michael-e1696372136747-ADo2jB.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231102T110000
DTEND;TZID=America/Los_Angeles:20231102T120000
DTSTAMP:20260403T111829Z
CREATED:20250828T203419Z
LAST-MODIFIED:20250828T203419Z
UID:7329-1698922800-1698926400@tilos.ai
SUMMARY:TILOS Seminar: Building Personalized Decision Models with Federated Human Preferences
DESCRIPTION:Aadirupa Saha\, Research Scientist\, Apple \nAbstract: Customer statistics collected in several real-world systems have reflected that users often prefer eliciting their liking for a given pair of items\, say (A\,B)\, in terms of relative queries like: “Do you prefer Item A over B?”\, rather than their absolute counterparts: “How much do you score items A and B on a scale of [0-10]?”. The search for a more effective feedback collection mechanism\, drawing inspiration from this observation\, led to the famous formulation of Dueling Bandits (DB)\, which is a widely studied online learning framework for efficient information aggregation from relative/comparative feedback. However\, despite the novel objective\, most of the existing DB techniques were limited to simpler settings of finite decision spaces and stochastic environments\, which are unrealistic in practice. In this talk\, we will start with the basic problem formulations for DB and familiarize ourselves with some of the breakthrough results. Following this\, we will dive deeper into a more practical framework of contextual dueling bandits (C-DB)\, where the goal of the learner is to make personalized predictions based on the user contexts. We will see a new algorithmic approach that can efficiently achieve the optimal O(sqrt T) regret performance for this problem\, resolving an open problem from Dudík et al. [COLT\, 2015]. In the last part of the talk\, we will extend the aforementioned models to a federated framework\, which entails developing preference-driven prediction models for distributed environments for creating large-scale personalized systems\, including recommender systems and chatbot interactions. Apart from exploiting the limited preference feedback model\, the challenge lies in ensuring user privacy and reducing communication complexity in the federated setting. We will conclude the talk with some interesting open problems. \n\nAadirupa is currently a research scientist at Apple ML research\, broadly working in the area of Machine Learning theory. She did a short-term research visit at Toyota Technological Institute\, Chicago (TTIC)\, after finishing her postdoc at Microsoft Research New York City. She obtained her Ph.D. from IISc Bangalore with Aditya Gopalan and Chiranjib Bhattacharyya. Website: https://aadirupa.github.io
URL:https://tilos.ai/event/tilos-seminar-building-personalized-decision-models-with-federated-human-preferences/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/saha-aadirupa-1-e1696372152821-Ts31fs.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20231103
DTEND;VALUE=DATE:20231104
DTSTAMP:20260403T111829Z
CREATED:20250904T175208Z
LAST-MODIFIED:20250904T182741Z
UID:7325-1698969600-1699055999@tilos.ai
SUMMARY:Boston Symmetry Day 2023
DESCRIPTION:TILOS is a sponsor of Boston Symmetry Day\, a meeting of symmetry-minded folks in the Boston area. It is the largest event on symmetry and machine learning in the United States. Registration is free for all who would like to attend\, subject to space constraints.
URL:https://tilos.ai/event/boston-symmetry-day-2023/
LOCATION:MIT
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/boston-symmetry-group-e1698445385321-eiga9L.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231108T110000
DTEND;TZID=America/Los_Angeles:20231108T120000
DTSTAMP:20260403T111829Z
CREATED:20250828T203219Z
LAST-MODIFIED:20250828T203236Z
UID:7324-1699441200-1699444800@tilos.ai
SUMMARY:TILOS-OPTML++ Seminar: Optimization\, Robustness and Privacy in Deep Neural Networks: Insights from the Neural Tangent Kernel
DESCRIPTION:Marco Mondelli\, Institute of Science and Technology Austria \nAbstract: A recent line of work has analyzed the properties of deep over-parameterized neural networks through the lens of the Neural Tangent Kernel (NTK). In this talk\, I will show how concentration bounds on the NTK (and\, specifically\, on its smallest eigenvalue) provide insights on (i) the optimization of the network via gradient descent\, (ii) its adversarial robustness\, and (iii) its privacy guarantees. I will start by proving tight bounds on the smallest eigenvalue of the NTK for deep neural networks with minimum over-parameterization. This implies that the network optimized by gradient descent interpolates the training dataset (i.e.\, reaches 0 training loss)\, as soon as the number of parameters is information-theoretically optimal. Next\, I will focus on two properties of the interpolating solution: robustness and privacy. A thought-provoking paper by Bubeck and Sellke has proposed a “universal law of robustness”: interpolating smoothly the data necessarily requires many more parameters than simple memorization. By providing sharp bounds on random features (RF) and NTK models\, I will show that\, while the RF model is never robust (regardless of the over-parameterization)\, the NTK model saturates the universal law of robustness\, addressing a conjecture by Bubeck\, Li and Nagaraj. Finally\, I will study the safety of RF and NTK models against a family of powerful black-box information retrieval attacks: the proposed analysis shows that safety provably strengthens with an increase in the generalization capability\, unveiling the role of the model and of its activation function. \n\nMarco Mondelli received the B.S. and M.S. degree in Telecommunications Engineering from the University of Pisa\, Italy\, in 2010 and 2012\, respectively. In 2016\, he obtained his Ph.D. degree in Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL)\, Switzerland. 
He is currently an Assistant Professor at the Institute of Science and Technology Austria (ISTA). Prior to that\, he was a Postdoctoral Scholar in the Department of Electrical Engineering at Stanford University\, USA\, from February 2017 to August 2019. He was also a Research Fellow with the Simons Institute for the Theory of Computing\, UC Berkeley\, USA\, for the program on Foundations of Data Science from August to December 2018. His research interests include data science\, machine learning\, information theory\, and modern coding theory. He has received a number of fellowships and awards\, including the Jack K. Wolf ISIT Student Paper Award in 2015\, the STOC Best Paper Award in 2016\, the EPFL Doctorate Award in 2018\, the Simons-Berkeley Research Fellowship in 2018\, the Lopez-Loreta Prize in 2019\, and the Information Theory Society Best Paper Award in 2021.
URL:https://tilos.ai/event/optimization-robustness-and-privacy-in-deep-neural-networks-insights-from-the-neural-tangent-kernel/
LOCATION:Virtual
CATEGORIES:TILOS - OPTML++ Seminar Series,TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/mondelli-marco-scaled-e1711659727954-z3UC0d.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231117T093000
DTEND;TZID=America/Los_Angeles:20231117T103000
DTSTAMP:20260403T111829
CREATED:20250828T203006Z
LAST-MODIFIED:20250828T204333Z
UID:7323-1700213400-1700217000@tilos.ai
SUMMARY:Overview of the Executive Order on Safe\, Secure\, and Trustworthy Artificial Intelligence
DESCRIPTION:UC San Diego Professor of Data Science and Philosophy and TILOS affiliate David Danks will present an introduction to the U.S. Government’s Executive Order on Safe\, Secure\, and Trustworthy Artificial Intelligence for TILOS members. \nDavid Danks currently serves on the National AI Advisory Committee (NAIAC)\, which is tasked with advising the President and the National AI Initiative Office on topics related to AI. This talk will give an overview of the recent Executive Order and related activity by the U.S. Government in the space of AI (including regulation\, incentives\, and new programs). Ample time will be reserved for Q&A. \nThis is an internal TILOS event and will not be recorded.
URL:https://tilos.ai/event/overview-of-the-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:Internal Events,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/10/danks-david-1-e1756412984106.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240118T100000
DTEND;TZID=America/Los_Angeles:20240118T110000
DTSTAMP:20260403T111829
CREATED:20250828T202828Z
LAST-MODIFIED:20250828T202828Z
UID:7321-1705572000-1705575600@tilos.ai
SUMMARY:TILOS Seminar: The Dissimilarity Dimension: Sharper Bounds for Optimistic Algorithms
DESCRIPTION:Aldo Pacchiano\, Assistant Professor\, Boston University Center for Computing and Data Sciences \nAbstract: The principle of Optimism in the Face of Uncertainty (OFU) is one of the foundational algorithmic design choices in Reinforcement Learning and Bandits. Optimistic algorithms balance exploration and exploitation by deploying data collection strategies that maximize expected rewards in plausible models. This is the basis of celebrated algorithms like the Upper Confidence Bound (UCB) for multi-armed bandits. For nearly a decade\, the analysis of optimistic algorithms\, including Optimistic Least Squares\, in the context of rich reward function classes has relied on the concept of eluder dimension\, introduced by Russo and Van Roy in 2013. In this talk we shed light on the limitations of the eluder dimension in capturing the true behavior of optimistic strategies in the realm of function approximation. We remediate these by introducing a novel statistical measure\, the “dissimilarity dimension”. We show it can be used to provide sharper sample analysis of algorithms like Optimistic Least Squares by establishing a link between regret and the dissimilarity dimension. To illustrate this\, we will show that some function classes have arbitrarily large eluder dimension but constant dissimilarity. Our regret analysis draws inspiration from graph theory and may be of interest to the mathematically minded beyond the field of statistical learning theory. This talk sheds new light on the fundamental principle of optimism and its algorithms in the function approximation regime\, advancing our understanding of these concepts. \n\nAldo Pacchiano is an Assistant Professor at the Boston University Center for Computing and Data Sciences and a Fellow at the Eric and Wendy Schmidt Center of the Broad Institute of MIT and Harvard. He obtained his PhD under the supervision of Profs. 
Michael Jordan and Peter Bartlett at UC Berkeley and was a Postdoctoral Researcher at Microsoft Research\, NYC. His research lies in the areas of Reinforcement Learning\, Online Learning\, Bandits\, and Algorithmic Fairness. He is particularly interested in furthering our statistical understanding of learning phenomena in adaptive environments\, and in using these theoretical insights and techniques to design efficient and safe algorithms for scientific\, engineering\, and large-scale societal applications.
URL:https://tilos.ai/event/tilos-seminar-the-dissimilarity-dimension-sharper-bounds-for-optimistic-algorithms/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2024/01/pacchiano-aldo.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240221T140000
DTEND;TZID=America/Los_Angeles:20240221T153000
DTSTAMP:20260403T111829
CREATED:20250828T201626Z
LAST-MODIFIED:20250828T201626Z
UID:7318-1708524000-1708529400@tilos.ai
SUMMARY:TILOS-HDSI Distinguished Colloquium: The Synergy between Machine Learning and the Natural Sciences
DESCRIPTION:Max Welling\, Research Chair in Machine Learning\, University of Amsterdam \nAbstract: Traditionally\, machine learning has been heavily influenced by neuroscience (hence the name artificial neural networks) and physics (e.g.\, MCMC\, Belief Propagation\, and Diffusion-based Generative AI). We have recently witnessed that the flow of information has also reversed\, with new tools developed in the ML community impacting physics\, chemistry\, and biology. Examples include faster DFT\, Force-Field accelerated MD simulations\, PDE Neural Surrogate models\, generating drug-like molecules\, and many more. In this talk\, I will review the exciting opportunities for further cross-fertilization between these fields\, ranging from faster (classical) DFT calculations and enhanced transition path sampling to traveling waves in artificial neural networks. \n\nProf. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a Distinguished Scientist at MSR. He is a fellow at the Canadian Institute for Advanced Research (CIFAR) and the European Lab for Learning and Intelligent Systems (ELLIS)\, where he also serves on the founding board. His previous appointments include VP at Qualcomm Technologies\, professor at UC Irvine\, postdoc at U. Toronto and UCL under supervision of prof. Geoffrey Hinton\, and postdoc at Caltech under supervision of prof. Pietro Perona. He finished his PhD in theoretical high energy physics under supervision of Nobel laureate Prof. Gerard ‘t Hooft.
URL:https://tilos.ai/event/tilos-hdsi-distinguished-colloquium-the-synergy-between-machine-learning-and-the-natural-sciences/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/welling-max-e1709233283734-CWxvcN.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240308T120000
DTEND;TZID=America/Los_Angeles:20240308T130000
DTSTAMP:20260403T111829
CREATED:20250828T201521Z
LAST-MODIFIED:20250903T224938Z
UID:7316-1709899200-1709902800@tilos.ai
SUMMARY:AI Ethics in Research Webinar
DESCRIPTION:Please join Dr. Nisheeth Vishnoi from Yale and Dr. David Danks from UC San Diego\, who will discuss their research in AI ethics. Professor Danks develops practical frameworks and methods to incorporate ethical and policy considerations throughout the AI lifecycle\, including different ways to include them in optimization steps. Bias and fairness have been a particular focus given the multiple ways in which they can be measured\, represented\, and used. Professor Vishnoi uses optimization as a lens to study how subjective human and societal biases emerge in the objective world of artificial algorithms\, as well as how to design strategies to mitigate these biases.\nThis event is a great opportunity to learn about the constantly evolving issues of AI ethics in research and the societal impact of AI. It will also provide a platform for students to gain insights and valuable advice that can help them in their future career pursuits. \n\nNisheeth Vishnoi is the A. Bartlett Giamatti Professor of Computer Science and a co-founder of the Computation and Society Initiative at Yale University. He studies the foundations of computation\, and his research spans several areas of theoretical computer science\, optimization\, and machine learning. He is also interested in understanding nature and society from a computational viewpoint. Here\, his current focus includes understanding the emergence of intelligence and developing methods to address ethical issues at the interface of artificial intelligence and humanity. \n\nDavid Danks is Professor of Data Science and Philosophy and affiliate faculty in Computer Science and Engineering at the University of California\, San Diego. His research interests range widely across philosophy\, cognitive science\, and machine learning\, including their intersection.
Danks has examined the ethical\, psychological\, and policy issues around AI and robotics across multiple sectors\, including transportation\, healthcare\, privacy\, and security. He has also done significant research in computational cognitive science and developed multiple novel causal discovery algorithms for complex types of observational and experimental data. Danks is the recipient of a James S. McDonnell Foundation Scholar Award\, as well as an Andrew Carnegie Fellowship. He currently serves on multiple advisory boards\, including the National AI Advisory Committee.
URL:https://tilos.ai/event/ai-ethics-in-research-webinar/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240315
DTEND;VALUE=DATE:20240317
DTSTAMP:20260403T111829
CREATED:20250904T175958Z
LAST-MODIFIED:20250904T182814Z
UID:7311-1710460800-1710633599@tilos.ai
SUMMARY:HDSI-TILOS “LLM Meets Theory” Workshop 2024
DESCRIPTION:The UC San Diego HDSI-TILOS “LLM Meets Theory” Workshop aims to bring together students and faculty to discuss the future of mathematical and scientific theory and large language models (LLMs). LLMs are like a miracle—not one that breaks the laws of nature (that would be impossible\, of course)\, but something that defied all expectations and could not be predicted just a few years ago. In particular\, the simplicity of the resulting statistical models (which are essentially Markov chains\, and are limited to only predicting the next token) came as a complete surprise to almost all of us. In view of this\, it is crucial to gain some understanding of the implications and potential trajectory of these models. Therefore\, at UCSD HDSI\, we plan to invite a few researchers for talks and also leave a lot of time for panel discussions.
URL:https://tilos.ai/event/hdsi-tilos-llm-meets-theory-workshop-2024/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2024/02/HDSI.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240320T100000
DTEND;TZID=America/Los_Angeles:20240320T110000
DTSTAMP:20260403T111829
CREATED:20250828T201417Z
LAST-MODIFIED:20250828T201417Z
UID:7315-1710928800-1710932400@tilos.ai
SUMMARY:TILOS Seminar: How Large Models of Language and Vision Help Agents to Learn to Behave
DESCRIPTION:Roy Fox\, Assistant Professor and Director of the Intelligent Dynamics Lab\, UC Irvine \nAbstract: If learning from data is valuable\, can learning from big data be very valuable? So far\, it has been so in vision and language\, for which foundation models can be trained on web-scale data to support a plethora of downstream tasks; not so much in control\, for which scalable learning remains elusive. Can information encoded in vision and language models guide reinforcement learning of control policies? In this talk\, I will discuss several ways for foundation models to help agents to learn to behave. Language models can provide better context for decision-making: we will see how they can succinctly describe the world state to focus the agent on relevant features; and how they can form generalizable skills that identify key subgoals. Vision and vision–language models can help the agent to model the world: we will see how they can block visual distractions to keep state representations task-relevant; and how they can hypothesize about abstract world models that guide exploration and planning. \n\nRoy Fox is an Assistant Professor of Computer Science at the University of California\, Irvine. His research interests include theory and applications of control learning: reinforcement learning (RL)\, control theory\, information theory\, and robotics. His current research focuses on structured and model-based RL\, language for RL and RL for language\, and optimization in deep control learning of virtual and physical agents.
URL:https://tilos.ai/event/tilos-seminar-how-large-models-of-language-and-vision-help-agents-to-learn-to-behave/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/fox-roy-e1710782779885-cplaNm.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240417T100000
DTEND;TZID=America/Los_Angeles:20240417T110000
DTSTAMP:20260403T111829
CREATED:20250828T201326Z
LAST-MODIFIED:20250828T201326Z
UID:7309-1713348000-1713351600@tilos.ai
SUMMARY:TILOS Seminar: Transformers learn in-context by (functional) gradient descent
DESCRIPTION:Xiang Cheng\, TILOS Postdoctoral Scholar\, MIT \nAbstract: Motivated by the in-context learning phenomenon\, we investigate how the Transformer neural network can implement learning algorithms in its forward pass. We show that a linear Transformer naturally learns to implement gradient descent\, which enables it to learn linear functions in-context. More generally\, we show that a non-linear Transformer can implement functional gradient descent with respect to some RKHS metric\, which allows it to learn a broad class of functions in-context. Additionally\, we show that the RKHS metric is determined by the choice of attention activation\, and that the optimal choice of attention activation depends in a natural way on the class of functions that need to be learned. I will end by discussing some implications of our results for the choice and design of Transformer architectures.
URL:https://tilos.ai/event/tilos-seminar-transformers-learn-in-context-by-functional-gradient-descent/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2023/10/cheng-xiang.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240522T100000
DTEND;TZID=America/Los_Angeles:20240522T110000
DTSTAMP:20260403T111829
CREATED:20250828T201245Z
LAST-MODIFIED:20250828T201245Z
UID:7305-1716372000-1716375600@tilos.ai
SUMMARY:TILOS Seminar: Large Datasets and Models for Robots in the Real World
DESCRIPTION:Nicklas Hansen\, UC San Diego \nAbstract: Recent progress in AI can be attributed to the emergence of large models trained on large datasets. However\, teaching AI agents to reliably interact with our physical world has proven challenging\, in part due to a lack of large and sufficiently diverse robot datasets. In this talk\, I will cover ongoing efforts of the Open X-Embodiment project\, a collaboration between 279 researchers across 20+ institutions\, to build a large\, open dataset for real-world robotics\, and discuss how this new paradigm is rapidly changing the field. Concretely\, I will discuss why we need large datasets in robotics\, what such datasets may look like\, and how large models can be trained and evaluated effectively in a cross-embodiment\, cross-environment setting. Finally\, I will conclude the talk by sharing my perspective on the limitations of current embodied AI agents\, as well as how to move forward as a community. \n\nNicklas Hansen is a Ph.D. student at the University of California San Diego advised by Prof. Xiaolong Wang and Prof. Hao Su. His research focuses on developing generalist AI agents that learn from interaction with the physical and digital world. He has spent time at Meta AI (FAIR) and the University of California\, Berkeley (BAIR)\, and received his B.S. and M.S. degrees from the Technical University of Denmark. He is a recipient of the 2024 NVIDIA Graduate Fellowship\, and his work has been featured at top venues in machine learning and robotics. Webpage: www.nicklashansen.com
URL:https://tilos.ai/event/tilos-seminar-large-datasets-and-models-for-robots-in-the-real-world/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/Nicklas_Hansen-e1713393341399-GU4tJB.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240618
DTEND;VALUE=DATE:20240619
DTSTAMP:20260403T111829
CREATED:20250828T201147Z
LAST-MODIFIED:20250904T174448Z
UID:7308-1718668800-1718755199@tilos.ai
SUMMARY:TILOS Industry Day 2024
DESCRIPTION:TILOS (The NSF National AI Institute for Learning-enabled Optimization at Scale) will hold its 3rd Annual Industry Day on June 18\, 2024\, at the Halıcıoğlu Data Science Institute at UC San Diego\, which is the campus hub for Data Science. Our first two Industry Days have attracted more than 100 participants\, each featuring (1) talks from invited Industry Speakers sharing their perspectives on challenges in AI + Optimization + Use domains (chips\, robotics\, networking)\, (2) research highlights from TILOS team members\, and (3) most importantly\, a vibrant TILOS Trainee Poster Session (30+ posters) together with a “Facebook” of students and postdocs (a booklet of these trainees). There is no cost to attend\, but please register here. \nAGENDA\n\n\n\n\n\n\n\n8:00 – 8:45am\nRegistration + Breakfast\n\n\n8:45 – 9:00am\nWelcome Remarks and Introduction to TILOS\nDirector Yusu Wang (UCSD)\nAD Translation Vijay Kumar (UPenn)\nRajesh Gupta (Director of HDSI@UCSD)\n\n\n9:00 – 10:30am\nSESSION 1  Chair: Vijay Kumar (UPenn)\nIndustry Keynote: Towards Scalable and Robust Autonomy\, Nicholas Roy (Zoox)\nTILOS Faculty Highlights:\n[9:50am] Traceable and Scalable GNN-based Circuit Optimization\, Farinaz Koushanfar (UCSD)\n[10:10am] Feature learning in neural networks and kernel models\, Misha Belkin (UCSD)\n\n\n10:30 – 10:45am\nBreak\n\n\n10:45am – 12:15pm\nSESSION 2  Chair: Yian Ma (UCSD)\nIndustry Keynote: AI and Networks: Challenges & Opportunities\, Nageen Himayat (Intel Labs)\nTILOS Faculty Highlights:\n[11:35am] Learning-enabled Optimization at Scale in Wireless Communications and  Networking\, Alejandro Ribeiro (UPenn)\n[11:55am] Reasoning Numerically\, Sean Gao (UCSD)\n\n\n12:15 – 2:00pm\nTILOS Trainee Poster Lightning Preview Session + Lunch\n\n\n2:00 – 3:00pm\nPanel Discussion on Academic–Industry Relations / Collaborations\nPanelists:\nNing Bi (Qualcomm VP Engineering)\nVitaly Feldman (Apple ML Research)\nKatherine Heller (Google Responsible AI)\nTara 
Javidi (UCSD)\nSomdeb Majumdar (Intel AI/ML Lab)\nModerator: Vijay Kumar (UPenn)\n\n\n3:00 – 3:30pm\nBreak\n\n\n3:30 – 5:00pm\nSESSION 3  Chair: Henrik Christensen (UCSD)\nIndustry Keynote: Foundation Models for Robotics\, Carolina Parada (Google DeepMind)\nTILOS Faculty Highlights:\n[4:20pm] Semantic Mapping and Task Planning for Autonomous Robots\, Nikolay Atanasov (UCSD)\n[4:40pm] Bias in Evaluation Processes: An Optimization-Based Model\, Nisheeth Vishnoi (Yale U)\n\n\n5:00 – 7:30pm\nBuffet Dinner + Trainee Poster Session (HDSI 123 & 155)\n\n\n\nKEYNOTE PRESENTATION ABSTRACTS \nTowards Scalable and Robust Autonomy \nHow we design and deploy highly autonomous robots such as self-driving cars is evolving rapidly\, and there are numerous technical challenges in how to deploy an autonomous system at scale. I will describe some of the technical design decisions in developing an autonomous robot at scale\, along with some of the candidate solutions and open questions for the future. \nNicholas Roy is the Autonomy Architecture Lead and a principal software engineer at Zoox. He and his team address technical challenges that cut across the autonomy verticals\, leading the design and deployment of cross-functional capabilities in the Zoox autonomy system. He is also the Bisplinghoff Professor of Aeronautics & Astronautics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology. Roy’s research focuses on decision-making under uncertainty\, mobile robot autonomy\, and human-robot interaction. Roy’s research has been transitioned into multiple commercial applications. \n\nAI and Networks: Challenges & Opportunities \nArtificial Intelligence and Machine Learning (AI/ML) technologies are widely expected to play an integral role in the design and architecture of Next Generation Networks.
We present several applications where AI/ML techniques are used to enhance the performance of wireless networking systems\, as well as discuss approaches to enhance AI computations over resource-constrained networks. We also highlight the importance of ensuring resilience of network AI solutions and discuss future directions. \nNageen Himayat is a Senior Principal Engineer with the Security and Privacy Research Labs. She leads the Trusted & Distributed Intelligence (TDI) team conducting research on trustworthy AI and network security topics. Her research contributions span areas such as AI security\, distributed ML\, machine learning for networks\, multi-radio heterogeneous networks\, cross-layer radio resource management\, and non-linear signal processing techniques. Nageen has authored over 350 technical publications\, including IEEE peer-reviewed papers\, 3GPP/IEEE standards contributions\, and numerous patent filings. Prior to Intel\, Nageen was with Lucent Technologies and General Instrument Corp\, where she developed standards and systems for both wireless and wire-line broadband access networks. Nageen obtained her B.S.E.E. degree from Rice University\, and her M.S./Ph.D. degrees from the University of Pennsylvania. She also holds an MBA degree from the Haas School of Business at the University of California\, Berkeley. \n\nFoundation Models for Robotics \nFoundation models have unlocked major advancements in AI. In this talk\, I will discuss how foundation models are enabling a step function in progress towards general-purpose robots\, including enabling robots to understand\, reason\, hold situated conversations with humans and learn from them\, transfer visual and semantic generalization to real-world actions\, and show initial signs of transfer between robot embodiments.
\nIt is still early in this research journey\, but it is an exciting one because we can confidently be part of this fantastic\, fast-moving\, and dynamic field of foundation models and not only ride the wave of innovation\, but help shape it. With this new approach\, we have to once again ask all the tough questions\, and call for advances in perception\, grounded reasoning\, and safety to build more advanced embodied foundation models\, while leveraging the human-centeredness\, semantic understanding\, and natural interaction that these models seamlessly enable. We’re just getting started. \nDr. Carolina Parada is an Engineering Director at Google DeepMind Robotics who is passionate about developing useful robots through human-centered robot learning. Since 2019\, she has led multiple research groups in robot learning\, perception\, simulation\, and embodied reasoning. Prior to that\, she led the perception team for self-driving cars at Nvidia for 2 years. She was also a lead with Speech @ Google for 7 years\, where she drove research and engineering efforts that enabled all the voice products at Google.
\n\nEvent photos: Nageen Himayat of Intel Labs presents “AI and Networks: Challenges & Opportunities”; student and postdoc poster sessions; Sean Gao (UC San Diego) presents “Reasoning Numerically”; a demonstration of a Robotic Art outreach activity; TILOS Robotics team member Nikolay Atanasov (UC San Diego) presents “Semantic Mapping and Task Planning for Autonomous Robots”; TILOS Associate Director of Translation and University of Pennsylvania Dean of Engineering Vijay Kumar moderates a panel discussion on Academic–Industry Relations and Collaboration with panelists Ning Bi (Vice President of Engineering\, Qualcomm)\, Vitaly Feldman (Apple ML Research)\, Katherine Heller (Google Responsible AI)\, Tara Javidi (Professor of Electrical and Computer Engineering\, UC San Diego)\, and Somdeb Majumdar (Director\, Intel AI/ML Lab). \n\nLocation: Halıcıoğlu Data Science Institute [MAP]\nRoom 123\n3234 Matthews Lane\nLa Jolla\, CA 92093 \nContacts: Angela Berti (aberti@ucsd.edu)\, Yusu Wang (yusuwang@ucsd.edu) \nParking: Hopkins Parking Structure (9800 Hopkins Dr\, La Jolla\, CA 92093; 10-minute walk to venue). \nParking fees are payable at pay stations or pay-by-phone. Note that many visitor spots are limited to two hours. Even though the app allows you to pay for longer periods\, you will get a ticket after that time if parked in a 2-hour space.
URL:https://tilos.ai/event/tilos-industry-day-2024/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240724T100000
DTEND;TZID=America/Los_Angeles:20240724T110000
DTSTAMP:20260403T111829
CREATED:20250828T200721Z
LAST-MODIFIED:20250828T200721Z
UID:7304-1721815200-1721818800@tilos.ai
SUMMARY:TILOS Seminar: What Kinds of Functions do Neural Networks Learn? Theory and Practical Applications
DESCRIPTION:Robert Nowak\, University of Wisconsin \nAbstract: This talk presents a theory characterizing the types of functions neural networks learn from data. Specifically\, the function space generated by deep ReLU networks consists of compositions of functions from the Banach space of second-order bounded variation in the Radon transform domain. This Banach space includes functions with smooth projections in most directions. A representer theorem associated with this space demonstrates that finite-width neural networks suffice for fitting finite datasets. The theory has several practical applications. First\, it provides a simple and theoretically grounded method for network compression. Second\, it shows that multi-task training can yield significantly different solutions compared to single-task training\, and that multi-task solutions can be related to kernel ridge regressions. Third\, the theory has implications for improving implicit neural representations\, where multi-layer neural networks are used to represent continuous signals\, images\, or 3D scenes. This exploration bridges theoretical insights with practical advancements\, offering a new perspective on neural network capabilities and future research directions. \n\nRobert Nowak is the Grace Wahba Professor of Data Science and Keith and Jane Nosbusch Professor in Electrical and Computer Engineering at the University of Wisconsin-Madison. His research focuses on machine learning\, optimization\, and signal processing. He serves on the editorial boards of the SIAM Journal on the Mathematics of Data Science and the IEEE Journal on Selected Areas in Information Theory.
URL:https://tilos.ai/event/tilos-seminar-what-kinds-of-functions-do-neural-networks-learn-theory-and-practical-applications/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2024/07/nowak-robert.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20241002T110000
DTEND;TZID=America/Los_Angeles:20241002T120000
DTSTAMP:20260403T111829
CREATED:20250828T200544Z
LAST-MODIFIED:20250828T200612Z
UID:7297-1727866800-1727870400@tilos.ai
SUMMARY:TILOS-SDSU Seminar: AI/ML & NLP for UAS/Air Traffic Management
DESCRIPTION:Krishna Kalyanam\, NASA Ames Research Center \nAbstract: We introduce several Air Traffic Management (ATM) initiatives envisioned by NASA and the FAA for a future airspace that combines conventional traffic and new entrants (e.g.\, drones) without sacrificing safety. In this framework\, we demonstrate the use of state-of-the-art AI/ML modeling and prediction tools that will enable efficient and safe traffic flow in the U.S. National Airspace System (NAS). For example\, Natural Language Processing (NLP) tools can help extract data (e.g.\, airspace constraints) that are currently contained in legacy text and audio formats and convert them into digital information. The digitized information can be ingested by route planning\, arrival scheduling\, and other decision support tools both on the ground and on the flight deck. We show how historical data (track\, weather & events) can be preprocessed and utilized to create accurate models to predict flight trajectories and events of interest (e.g.\, Traffic Management Initiatives). We show several application areas within ATM that benefit from AI/ML\, including trajectory prediction\, airport runway configuration management\, and automatic speech-to-text. The overarching goal of the work is to accelerate the integration of package delivery drones\, air taxis\, and autonomous cargo aircraft into the NAS without impacting the safety and efficacy of current manned operations. As an example\, we also show a strategic deconfliction scenario and demonstrate scalable algorithms that provide conflict-free schedules for package delivery drones in an urban setting. \n\nDr. Krishna Kalyanam is the Autonomy & AI/ML tech lead with the NASA Aeronautics Research Institute (NARI). In his current role\, he is focused on delivering state-of-the-art AI/ML algorithms to enable scalable and efficient manned/unmanned operations in a mixed-use national airspace. Prior to joining NASA\, Dr. 
Kalyanam was with AFRL’s Autonomous Controls branch\, where he co-designed several multi-UAV cooperative control algorithms that were flight-tested as part of the Intelligent Control & Evaluation of Teams (ICE-T) program. Dr. Kalyanam has published 100+ papers on stochastic control\, human-machine teaming\, and multi-agent scheduling in IEEE\, ASME\, and AIAA venues. Dr. Kalyanam is a senior member of the IEEE and an Associate Fellow of the AIAA. He is a recipient of the prestigious Research Associateship Award sponsored by the National Academies. He was also part of the UAV Autonomy team that won the AFRL “Star Team” award for performing the most innovative in-house basic research in 2018.
URL:https://tilos.ai/event/tilos-sdsu-seminar-ai-ml-nlp-for-uas-air-traffic-management/
LOCATION:San Diego State University\, 5500 Campanile Dr\, San Diego\, CA\, 92182\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/kalyanam-krishna-e1726505877275-apyqNc.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20241107T160000
DTEND;TZID=America/Los_Angeles:20241107T170000
DTSTAMP:20260403T111829
CREATED:20250828T200431Z
LAST-MODIFIED:20250828T200431Z
UID:7292-1730995200-1730998800@tilos.ai
SUMMARY:TILOS Seminar: Data Models for Deep Learning: Beyond i.i.d. Assumptions
DESCRIPTION:Elchanan Mossel\, Professor of Mathematics\, MIT \nAbstract: Classical machine learning theory is largely built upon the assumption that data samples are independent and identically distributed (i.i.d.) from general distribution families. In this talk\, I will present novel insights that emerge when we move beyond these traditional assumptions\, exploring both dependent sampling scenarios and structured generative distributions. These perspectives offer fresh theoretical frameworks and practical implications for modern machine learning approaches. \n\nElchanan Mossel is a Professor of Mathematics at the Massachusetts Institute of Technology (MIT)\, specializing in probability theory\, combinatorics\, and theoretical computer science. His research explores a range of complex\, interdisciplinary problems\, including social choice theory\, inference in networks\, and the analysis of algorithms\, with applications across economics\, political science\, and genetics. Mossel completed his Ph.D. at the Hebrew University of Jerusalem and held postdoctoral positions at Microsoft Research and UC Berkeley before joining MIT. Recognized for his innovative work\, Mossel has received a Sloan Fellowship\, an NSF CAREER Award\, and a COLT Best Paper Award\, and is a Fellow of the American Mathematical Society.
URL:https://tilos.ai/event/tilos-seminar-data-models-for-deep-learning-beyond-i-i-d-assumptions/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/mossel-elchanan-e1728935276435-milFYz.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20241113T110000
DTEND;TZID=America/Los_Angeles:20241113T120000
DTSTAMP:20260403T111829
CREATED:20250828T200305Z
LAST-MODIFIED:20250828T200305Z
UID:7291-1731495600-1731499200@tilos.ai
SUMMARY:TILOS Seminar: Off-the-shelf Algorithmic Stability
DESCRIPTION:Rebecca Willett\, University of Chicago \nAbstract: Algorithmic stability holds when our conclusions\, estimates\, fitted models\, predictions\, or decisions are insensitive to small changes to the training data. Stability has emerged as a core principle for reliable data science\, providing insights into generalization\, cross-validation\, uncertainty quantification\, and more. Whereas prior literature has developed mathematical tools for analyzing the stability of specific machine learning (ML) algorithms\, we study methods that can be applied to arbitrary learning algorithms to satisfy a desired level of stability. First\, I will discuss how bagging is guaranteed to stabilize any prediction model\, regardless of the input data. Thus\, if we remove or replace a small fraction of the training data at random\, the resulting prediction will typically change very little. Our analysis provides insight into how the size of the bags (bootstrap datasets) influences stability\, giving practitioners a new tool for guaranteeing a desired level of stability. Second\, I will describe how to extend these stability guarantees beyond prediction modeling to more general statistical estimation problems where bagging is not as well known but equally useful for stability. Specifically\, I will describe a new framework for stable classification and model selection by combining bagging on class or model weights with a stable\, “soft” version of the argmax operator. This is joint work with Jake Soloff and Rina Barber. \n\nRebecca Willett is a Professor of Statistics and Computer Science and the Director of AI in the Data Science Institute at the University of Chicago\, and she holds a courtesy appointment at the Toyota Technological Institute at Chicago. Her research is focused on machine learning foundations\, scientific machine learning\, and signal processing. 
Willett received the inaugural Data Science Career Prize from the Society for Industrial and Applied Mathematics in 2024\, was named a Fellow of the Society for Industrial and Applied Mathematics in 2021\, and was named a Fellow of the IEEE in 2022. She is the Deputy Director for Research at the NSF-Simons Foundation National Institute for Theory and Mathematics in Biology\, Deputy Director for Research at the NSF-Simons Institute for AI in the Sky (SkAI)\, and a member of the NSF Institute for the Foundations of Data Science Executive Committee. She is the Faculty Director of the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship. She helps direct the Air Force Research Lab University Center of Excellence on Machine Learning. She received the National Science Foundation CAREER Award in 2007\, was a DARPA Computer Science Study Group member\, and received an Air Force Office of Scientific Research Young Investigator Program award in 2010. She completed her PhD in Electrical and Computer Engineering at Rice University in 2005. She was an Assistant and then tenured Associate Professor of Electrical and Computer Engineering at Duke University from 2005 to 2013. She was an Associate Professor of Electrical and Computer Engineering\, Harvey D. Spangler Faculty Scholar\, and Fellow of the Wisconsin Institutes for Discovery at the University of Wisconsin-Madison from 2013 to 2018.
URL:https://tilos.ai/event/tilos-seminar-off-the-shelf-algorithmic-stability/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2024/10/new-willett_square-250x250-1.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20241120T110000
DTEND;TZID=America/Los_Angeles:20241120T120000
DTSTAMP:20260403T111829
CREATED:20250828T200101Z
LAST-MODIFIED:20250828T200101Z
UID:7294-1732100400-1732104000@tilos.ai
SUMMARY:TILOS Seminar: How Transformers Learn Causal Structure with Gradient Descent
DESCRIPTION:Jason Lee\, Princeton University \nAbstract: The incredible success of transformers on sequence modeling tasks can be largely attributed to the self-attention mechanism\, which allows information to be transferred between different parts of a sequence. Self-attention allows transformers to encode causal structure which makes them particularly suitable for sequence modeling. However\, the process by which transformers learn such causal structure via gradient-based training algorithms remains poorly understood. To better understand this process\, we introduce an in-context learning task that requires learning latent causal structure. We prove that gradient descent on a simplified two-layer transformer learns to solve this task by encoding the latent causal graph in the first attention layer. The key insight of our proof is that the gradient of the attention matrix encodes the mutual information between tokens. As a consequence of the data processing inequality\, the largest entries of this gradient correspond to edges in the latent causal graph. As a special case\, when the sequences are generated from in-context Markov chains\, we prove that transformers learn an induction head (Olsson et al.\, 2022). We confirm our theoretical findings by showing that transformers trained on our in-context learning task are able to recover a wide variety of causal structures. \n\nJason Lee is an associate professor in Electrical Engineering and Computer Science (secondary) at Princeton University. Prior to that\, he was in the Data Science and Operations department at the University of Southern California and a postdoctoral researcher at UC Berkeley working with Michael I. Jordan. Jason received his PhD at Stanford University advised by Trevor Hastie and Jonathan Taylor. His research interests are in the theory of machine learning\, optimization\, and statistics. 
Lately\, he has worked on the foundations of deep learning\, representation learning\, and reinforcement learning. He has received the Samsung AI Researcher of the Year Award\, NSF CAREER Award\, ONR Young Investigator Award in Mathematical Data Science\, Sloan Research Fellowship\, NeurIPS Best Student Paper Award and Finalist for the Best Paper Prize for Young Researchers in Continuous Optimization\, and Princeton Commendation for Outstanding Teaching.
URL:https://tilos.ai/event/tilos-seminar-how-transformers-learn-causal-structure-with-gradient-descent/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/lee-jason-e1727126682884-UcJAUD.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20241210
DTEND;VALUE=DATE:20241211
DTSTAMP:20260403T111829
CREATED:20250904T180142Z
LAST-MODIFIED:20250904T182846Z
UID:7289-1733788800-1733875199@tilos.ai
SUMMARY:NSF Workshop on AI for Electronic Design Automation
DESCRIPTION:
URL:https://tilos.ai/event/nsf-workshop-on-ai-for-electronic-design-automation/
LOCATION:CA
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/webp:https://tilos.ai/wp-content/uploads/2024/10/circuitboard.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250129T110000
DTEND;TZID=America/Los_Angeles:20250129T123000
DTSTAMP:20260403T111829
CREATED:20250828T195813Z
LAST-MODIFIED:20250828T195813Z
UID:7301-1738148400-1738153800@tilos.ai
SUMMARY:TILOS Seminar: Unlearnable Facts Cause Hallucinations in Pretrained Language Models
DESCRIPTION:Adam Tauman Kalai\, OpenAI \nAbstract: Pretrained language models (LMs) tend to preserve many qualities present in their training data\, such as grammaticality\, formatting\, and politeness. However\, for specific types of factuality\, even LMs pretrained on factually correct statements tend to produce falsehoods at high rates. We explain these “hallucinations” by drawing a connection to binary classification\, enabling us to leverage insights from supervised learning. We prove that pretrained LMs (which are “calibrated”) fail to mimic criteria that cannot be learned. Our analysis explains why pretrained LMs hallucinate on facts such as people’s birthdays but not on systematic facts such as even vs. odd numbers.\nOf course\, LM pretraining is only one stage in the development of a chatbot\, and thus hallucinations are *not* inevitable in chatbots.\nThis is joint work with Santosh Vempala. \n\nAdam Tauman Kalai is a Research Scientist at OpenAI working on AI Safety and Ethics. He has worked in Algorithms\, Fairness\, Machine Learning Theory\, Game Theory\, and Crowdsourcing. He received his PhD from Carnegie Mellon University. He has served as an Assistant Professor at Georgia Tech and TTIC\, and is on the science team of the whale-translation Project CETI. He has co-chaired AI and crowdsourcing conferences and has numerous honors\, most notably the Majulook prize.
URL:https://tilos.ai/event/tilos-seminar-unlearnable-facts-cause-hallucinations-in-pretrained-language-models/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/kalai-adam-e1725645665625-utz75c.jpg
END:VEVENT
END:VCALENDAR