BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250219
DTEND;VALUE=DATE:20250221
DTSTAMP:20260403T140323Z
CREATED:20250904T180342Z
LAST-MODIFIED:20250904T183026Z
UID:7281-1739923200-1740095999@tilos.ai
SUMMARY:Secure AI for Health\, Defense\, and Beyond
DESCRIPTION:
URL:https://tilos.ai/event/secure-ai-for-health-defense-and-beyond/
LOCATION:CA
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/UCSD-e1737756262771-s0U7kP-e1757009005925.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250306T083000
DTEND;TZID=America/Los_Angeles:20250306T121500
DTSTAMP:20260403T140323Z
CREATED:20250828T193005Z
LAST-MODIFIED:20250828T193005Z
UID:7276-1741249800-1741263300@tilos.ai
SUMMARY:TILOS Tutorial on AI Alignment
DESCRIPTION:
URL:https://tilos.ai/event/tilos-tutorial-on-ai-alignment/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250312T110000
DTEND;TZID=America/Los_Angeles:20250312T120000
DTSTAMP:20260403T140323Z
CREATED:20250828T192527Z
LAST-MODIFIED:20250828T192602Z
UID:7295-1741777200-1741780800@tilos.ai
SUMMARY:TILOS Seminar: Synthetic Tasks as Testbeds for Attributing Model Behavior
DESCRIPTION:Surbhi Goel\, University of Pennsylvania \nAbstract: Understanding how different components of the machine learning pipeline—spanning data composition\, architectural choices\, and optimization dynamics—shape model behavior remains a fundamental challenge. In this talk\, I will argue that synthetic tasks\, which enable precise control over data distribution and task complexity\, serve as powerful testbeds for analyzing and attributing behaviors in deep learning. Focusing on the sparse parity learning problem\, a canonical task in learning theory\, I will present insights into: (1) the phenomenon of “hidden progress” in gradient-based optimization\, where models exhibit consistent advancement despite stagnating loss curves; (2) nuanced trade-offs between data\, compute\, model width\, and initialization that govern learning success; and (3) the role of progressive distillation in implicitly structuring curricula to accelerate feature learning. These findings highlight the utility of synthetic tasks in uncovering empirical insights into the mechanisms driving deep learning\, without the cost of training expensive models. This talk is based on joint work with a lot of amazing collaborators: Boaz Barak\, Ben Edelman\, Sham Kakade\, Bingbin Liu\, Eran Malach\, Sadhika Malladi\, Abhishek Panigrahi\, Andrej Risteski\, and Cyril Zhang. \n\nSurbhi Goel is the Magerman Term Assistant Professor of Computer and Information Science at the University of Pennsylvania. She is associated with the theory group\, the ASSET Center on safe\, explainable\, and trustworthy AI systems\, and the Warren Center for Network and Data Sciences. Surbhi’s research focuses on theoretical foundations of modern machine learning paradigms\, particularly deep learning\, and is supported by Microsoft Research and OpenAI. Previously\, she was a postdoctoral researcher at Microsoft Research NYC and completed her Ph.D. at the University of Texas at Austin under Adam Klivans\, receiving the UTCS Bert Kay Dissertation Award. She has also been a visiting researcher at IAS\, Princeton\, and the Simons Institute at UC Berkeley. Surbhi co-founded the Learning Theory Alliance (LeT‐All) and holds several leadership roles\, including Office Hours co-chair for ICLR 2024 and co-treasurer for the Association for Computational Learning Theory.
URL:https://tilos.ai/event/tilos-seminar-synthetic-tasks-as-testbeds-for-attributing-model-behavior/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/goel-surbhi-e1727126779765-U5P80t.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250317
DTEND;VALUE=DATE:20250318
DTSTAMP:20260403T140323Z
CREATED:20250904T181134Z
LAST-MODIFIED:20250904T182933Z
UID:7275-1742169600-1742255999@tilos.ai
SUMMARY:TILOS-Cisco AI + Security Workshop
DESCRIPTION:
URL:https://tilos.ai/event/tilos-cisco-ai-security-workshop/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:Internal Events,TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250327T140000
DTEND;TZID=America/Los_Angeles:20250327T150000
DTSTAMP:20260403T140323Z
CREATED:20250828T192427Z
LAST-MODIFIED:20250828T192653Z
UID:7273-1743084000-1743087600@tilos.ai
SUMMARY:TILOS Seminar: Single location regression and attention-based models
DESCRIPTION:Claire Boyer\, Université Paris-Saclay \nAbstract: Attention-based models\, such as Transformer\, excel across various tasks but lack a comprehensive theoretical understanding\, especially regarding token-wise sparsity and internal linear representations. To address this gap\, we introduce the single-location regression task\, where only one token in a sequence determines the output\, and its position is a latent random variable\, retrievable via a linear projection of the input. To solve this task\, we propose a dedicated predictor\, which turns out to be a simplified version of a non-linear self-attention layer. We study its theoretical properties\, by showing its asymptotic Bayes optimality and analyzing its training dynamics. In particular\, despite the non-convex nature of the problem\, the predictor effectively learns the underlying structure. This work highlights the capacity of attention mechanisms to handle sparse token information and internal linear structures.
URL:https://tilos.ai/event/tilos-seminar-single-location-regression-and-attention-based-models/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/boyer-claire-e1742860147959-s8d3nW.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250331
DTEND;VALUE=DATE:20250401
DTSTAMP:20260403T140323Z
CREATED:20250904T175539Z
LAST-MODIFIED:20250904T182652Z
UID:7282-1743379200-1743465599@tilos.ai
SUMMARY:Boston Symmetry Day 2025
DESCRIPTION:
URL:https://tilos.ai/event/boston-symmetry-day-2025/
LOCATION:CA
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/boston-symmetry-group-e1698445385321-eiga9L.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250402T110000
DTEND;TZID=America/Los_Angeles:20250402T120000
DTSTAMP:20260403T140323Z
CREATED:20250828T192344Z
LAST-MODIFIED:20260227T222401Z
UID:7287-1743591600-1743595200@tilos.ai
SUMMARY:TILOS Seminar: Foundational Methods for Foundation Models for Scientific Machine Learning
DESCRIPTION:Michael W. Mahoney\, ICSI\, LBNL\, and Department of Statistics\, UC Berkeley \nAbstract: The remarkable successes of ChatGPT in natural language processing (NLP) and related developments in computer vision (CV) motivate the question of what foundation models would look like and what new advances they would enable\, when built on the rich\, diverse\, multimodal data that are available from large-scale experimental and simulational data in scientific computing (SC)\, broadly defined. Such models could provide a robust and principled foundation for scientific machine learning (SciML)\, going well beyond simply using ML tools developed for internet and social media applications to help solve future scientific problems. I will describe recent work demonstrating the potential of the “pre-train and fine-tune” paradigm\, widely-used in CV and NLP\, for SciML problems\, demonstrating a clear path towards building SciML foundation models; as well as recent work highlighting multiple “failure modes” that arise when trying to interface data-driven ML methodologies with domain-driven SC methodologies\, demonstrating clear obstacles to traversing that path successfully. I will also describe initial work on developing novel methods to address several of these challenges\, as well as their implementations at scale\, a general solution to which will be needed to build robust and reliable SciML models consisting of millions or billions or trillions of parameters. \n\nMichael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He is also an Amazon Scholar as well as head of the Machine Learning and Analytics Group at the Lawrence Berkeley National Laboratory. He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning\, including randomized matrix algorithms and randomized numerical linear algebra\, scientific machine learning\, scalable stochastic optimization\, geometric network analysis tools for structure extraction in large informatics graphs\, scalable implicit regularization methods\, computational methods for neural network analysis\, physics informed machine learning\, and applications in genetics\, astronomy\, medical imaging\, social network analysis\, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics\, and he has worked and taught at Yale University in the mathematics department\, at Yahoo Research\, and at Stanford University in the mathematics department. Among other things\, he was on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI)\, he was on the National Research Council’s Committee on the Analysis of Massive Data\, he co-organized the Simons Institute’s fall 2013 and 2018 programs on the foundations of data science\, he ran the Park City Mathematics Institute’s 2016 PCMI Summer Session on The Mathematics of Data\, he ran the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets\, and he was the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley. More information is available at https://www.stat.berkeley.edu/~mmahoney/.
URL:https://tilos.ai/event/tilos-seminar-foundational-methods-for-foundation-models-for-scientific-machine-learning/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/mahoney-michael-e1733251484543-1e6Odv.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250416T110000
DTEND;TZID=America/Los_Angeles:20250416T120000
DTSTAMP:20260403T140323Z
CREATED:20250828T192233Z
LAST-MODIFIED:20260227T222458Z
UID:7286-1744801200-1744804800@tilos.ai
SUMMARY:TILOS Seminar: Amplifying human performance in combinatorial competitive programming
DESCRIPTION:Petar Veličković\, Google DeepMind \nAbstract: Recent years have seen a significant surge in complex AI systems for competitive programming\, capable of performing at admirable levels against human competitors. While steady progress has been made\, the highest percentiles still remain out of reach for these methods on standard competition platforms such as Codeforces. In this talk\, I will describe and dive into our recent work\, where we focussed on combinatorial competitive programming. In combinatorial challenges\, the target is to find as-good-as-possible solutions to otherwise computationally intractable problems\, over specific given inputs. We hypothesise that this scenario offers a unique testbed for human-AI synergy\, as human programmers can write a backbone of a heuristic solution\, after which AI can be used to optimise the scoring function used by the heuristic. We deploy our approach on previous iterations of Hash Code\, a global team programming competition inspired by NP-hard software engineering problems at Google\, and we leverage FunSearch to evolve our scoring functions. Our evolved solutions significantly improve the attained scores from their baseline\, successfully breaking into the top percentile on all previous Hash Code online qualification rounds\, and outperforming the top human teams on several. To the best of our knowledge\, this is the first known AI-assisted top-tier result in competitive programming.
URL:https://tilos.ai/event/tilos-seminar-amplifying-human-performance-in-combinatorial-competitive-programming/
LOCATION:Virtual
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/velickovic-petar-e1736275993608-TwwARw.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250417
DTEND;VALUE=DATE:20250419
DTSTAMP:20260403T140323Z
CREATED:20250401T180604Z
LAST-MODIFIED:20250904T182557Z
UID:7280-1744848000-1745020799@tilos.ai
SUMMARY:HOT-AI: Horizons for Optimization in AI Workshop
DESCRIPTION:
URL:https://tilos.ai/event/hot-ai-horizons-for-optimization-in-ai-workshop/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20250523T110000
DTEND;TZID=America/Los_Angeles:20250523T120000
DTSTAMP:20260403T140323Z
CREATED:20250828T192125Z
LAST-MODIFIED:20260227T222820Z
UID:7272-1747998000-1748001600@tilos.ai
SUMMARY:TILOS Seminar: Optimal Quantization for LLMs and Matrix Multiplication
DESCRIPTION:Yury Polyanskiy\, MIT \nAbstract: The main building block of large language models is matrix multiplication\, which is often bottlenecked by the speed of loading these matrices from memory. A number of recent quantization algorithms (SmoothQuant\, GPTQ\, QuIP\, SpinQuant etc) address this issue by storing matrices in lower precision. We derive optimal asymptotic information-theoretic tradeoff between accuracy of the matrix product and compression rate (number of bits per matrix entry). We also show that a non-asymptotic version of our construction (based on nested Gosset lattices and Conway-Sloane decoding)\, which we call NestQuant\, reduces perplexity deterioration almost three-fold compared to the state-of-the-art algorithms (as measured on Llama-2\, Llama-3 with 8B to 70B parameters). Based on a joint work with Or Ordentlich (HUJI)\, Eitan Porat and Semyon Savkin (MIT EECS). \n\nYury Polyanskiy is a Cutten Professor of Electrical Engineering and Computer Science\, a member of IDSS and LIDS at MIT\, and an IEEE Fellow (2024). Yury received M.S. degree in applied mathematics and physics from the Moscow Institute of Physics and Technology in 2005 and Ph.D. degree in electrical engineering from Princeton University in 2010. His research interests span information theory\, machine learning and statistics. Dr. Polyanskiy won the 2020 IEEE Information Theory Society James Massey Award\, 2013 NSF CAREER award and 2011 IEEE Information Theory Society Paper Award.
URL:https://tilos.ai/event/tilos-seminar-optimal-quantization-for-llms-and-matrix-multiplication/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/04/polyanskiy-yuri.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250602
DTEND;VALUE=DATE:20250603
DTSTAMP:20260403T140323Z
CREATED:20250904T174234Z
LAST-MODIFIED:20250904T183243Z
UID:7531-1748822400-1748908799@tilos.ai
SUMMARY:TILOS Industry Day 2025
DESCRIPTION:
URL:https://tilos.ai/event/tilos-industry-day-2025/
LOCATION:HDSI 123\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251001T110000
DTEND;TZID=America/Los_Angeles:20251001T120000
DTSTAMP:20260403T140323Z
CREATED:20250828T192015Z
LAST-MODIFIED:20260304T210603Z
UID:7259-1759316400-1759320000@tilos.ai
SUMMARY:TILOS-HDSI Seminar: A New Paradigm for Learning with Distribution Shift
DESCRIPTION:Adam Klivans\, The University of Texas at Austin \nAbstract: We revisit the fundamental problem of learning with distribution shift\, where a learner is given labeled samples from training distribution D\, unlabeled samples from test distribution D′ and is asked to output a classifier with low test error. The standard approach in this setting is to prove a generalization bound in terms of some notion of distance between D and D′. These distances\, however\, are difficult to compute\, and this has been the main stumbling block for efficient algorithm design over the last two decades. \nWe sidestep this issue and define a new model called TDS learning\, where a learner runs a test on the training set and is allowed to reject if this test detects distribution shift relative to a fixed output classifier. This approach leads to the first set of efficient algorithms for learning with distribution shift that do not make any assumptions about the test distribution. Finally\, we discuss how our techniques have recently been used to solve longstanding problems in supervised learning with contamination. \n\nAdam Klivans is a Professor of Computer Science at the University of Texas at Austin and Director of the NSF AI Institute for Foundations of Machine Learning (IFML). His research interests lie in machine learning and theoretical computer science\, in particular\, Learning Theory\, Computational Complexity\, Pseudorandomness\, Limit Theorems\, and Gaussian Space. Dr. Klivans is a recipient of the NSF CAREER Award and serves on the editorial board for the Theory of Computing and Machine Learning Journal.
URL:https://tilos.ai/event/tilos-seminar-with-adam-klivans/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/klivans-adam-e1756405638325.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251024T110000
DTEND;TZID=America/Los_Angeles:20251024T120000
DTSTAMP:20260403T140323Z
CREATED:20250925T175700Z
LAST-MODIFIED:20260304T210610Z
UID:7611-1761303600-1761307200@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: High-dimensional Optimization with Applications to Compute-Optimal Neural Scaling Laws
DESCRIPTION:Courtney Paquette\, McGill University \nAbstract: Given the massive scale of modern ML models\, we now only get a single shot to train them effectively. This restricts our ability to test multiple architectures and hyper-parameter configurations. Instead\, we need to understand how these models scale\, allowing us to experiment with smaller problems and then apply those insights to larger-scale models. In this talk\, I will present a framework for analyzing scaling laws in stochastic learning algorithms using a power-law random features model (PLRF)\, leveraging high-dimensional probability and random matrix theory. I will then use this scaling law to address the compute-optimal question: How should we choose model size and hyper-parameters to achieve the best possible performance in the most compute-efficient manner? Then using this PLRF model\, I will devise a new momentum-based algorithm that (provably) improves the scaling law exponent. Finally\, I will present some numerical experiments on LSTMs that show how this new stochastic algorithm can be applied to real data to improve the compute-optimal exponent. \n\nCourtney Paquette is an assistant professor at McGill University in the Mathematics and Statistics department\, a CIFAR AI Chair (MILA)\, and an active member of the Montreal Machine Learning Optimization Group (MTL MLOpt) at MILA. Her research broadly focuses on designing and analyzing algorithms for large-scale optimization problems\, motivated by applications in data science\, and using techniques that draw from a variety of fields\, including probability\, complexity theory\, and convex and nonsmooth analysis. Dr. Paquette is a lead organizer of the OPT-ML Workshop at NeurIPS since 2020\, and a lead organizer (and original creator) of the High-dimensional Learning Dynamics (HiLD) Workshop at ICML.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-courtney-paquette-mcgill-university/
LOCATION:CSE 1242 and Virtual\, 3235 Voigt Dr\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/paquette-courtney-scaled-e1758822988381.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251024T120000
DTEND;TZID=America/Los_Angeles:20251024T133000
DTSTAMP:20260403T140323Z
CREATED:20251021T183004Z
LAST-MODIFIED:20251021T183004Z
UID:7684-1761307200-1761312600@tilos.ai
SUMMARY:Student and Postdoc Lunch at Zanzibar Cafe
DESCRIPTION:Join fellow TILOS students and postdoctoral researchers for an informal lunch at Zanzibar Cafe\, located on the second floor of Price Center.
URL:https://tilos.ai/event/student-and-postdoc-lunch-at-zanzibar-cafe/
LOCATION:Zanzibar Cafe at UC San Diego
CATEGORIES:Internal Events
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/10/zanzibar-e1761058377808.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251112T110000
DTEND;TZID=America/Los_Angeles:20251112T120000
DTSTAMP:20260403T140323Z
CREATED:20251104T173955Z
LAST-MODIFIED:20260304T210641Z
UID:7730-1762945200-1762948800@tilos.ai
SUMMARY:TILOS-HDSI Seminar: AI safety theory: the missing middle ground
DESCRIPTION:Adam Oberman\, McGill University \nAbstract: Over the past few years\, the capabilities of generative artificial intelligence (AI) systems have advanced rapidly. Along with the benefits of AI\, there is also a risk of harm. In order to benefit from AI while mitigating the risks\, we need a grounded theoretical framework. \nThe current AI safety theory\, which predates generative AI\, is insufficient. Most theoretical AI safety results tend to reason absolutely: a system is “aligned” or “mis-aligned”\, “honest” or “dishonest”. But in practice safety is probabilistic\, not absolute. The missing middle ground is a quantitative or relative theory of safety — a way to reason formally about degrees of safety. Such a theory is required for defining safety and harms\, and is essential for technical solutions as well as for making good policy decisions. \nIn this talk I will: \n\nReview current AI risks (from misuse\, from lack of reliability\, and systemic risks to the economy) as well as important future risks (lack of control).\nReview theoretical predictions of bad AI behavior and discuss experiments which demonstrate that they can occur in current LLMs.\nExplain why technical and theoretical safety solutions are valuable\, even from contributors outside of the major labs.\nDiscuss some gaps in the theory and present some open problems which could address the gaps.\n\n\nAdam Oberman is a Full Professor of Mathematics and Statistics at McGill University\, a Canada CIFAR AI Chair\, and an Associate Member of Mila. He is a research collaborator at LawZero\, Yoshua Bengio’s AI Safety Institute. He has been researching AI safety since 2024. His research spans generative models\, reinforcement learning\, optimization\, calibration\, and robustness. Earlier in his career\, he made significant contributions to optimal transport and nonlinear partial differential equations. He earned degrees from the University of Toronto and the University of Chicago\, and previously held faculty and postdoctoral positions at Simon Fraser University and the University of Texas at Austin.
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-adam-oberman-mcgill-ai-safety-theory-the-missing-middle-ground/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/webp:https://tilos.ai/wp-content/uploads/2025/11/oberman-adam-e1762277416983.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251119T110000
DTEND;TZID=America/Los_Angeles:20251119T120000
DTSTAMP:20260403T140323Z
CREATED:20251105T193505Z
LAST-MODIFIED:20260227T215217Z
UID:7735-1763550000-1763553600@tilos.ai
SUMMARY:TILOS-SDSU Seminar: Certifiably Correct Machine Perception
DESCRIPTION:David Rosen\, Northeastern University \nAbstract: Many fundamental machine perception and state estimation tasks require the solution of a high-dimensional nonconvex estimation problem; this class includes (for example) the fundamental problems of simultaneous localization and mapping (in robotics)\, 3D reconstruction (in computer vision)\, and sensor network localization (in distributed sensing). Such problems are known to be computationally hard in general\, with many local minima that can entrap the smooth local optimization methods commonly applied to solve them. The result is that standard machine perception algorithms (based upon local optimization) can be surprisingly brittle\, often returning egregiously wrong answers even when the problem to which they are applied is well-posed. \nIn this talk\, we present a novel class of certifiably correct estimation algorithms that are capable of efficiently recovering provably good (often globally optimal) solutions of generally-intractable machine perception problems in many practical settings. Our approach directly tackles the problem of nonconvexity by employing convex relaxations whose minimizers provide provably good approximate solutions to the original estimation problem under moderate measurement noise. We illustrate the design of this class of methods using the fundamental problem of pose-graph optimization (a mathematical abstraction of robotic mapping) as a running example. We conclude with a brief discussion of open questions and future research directions. \n\nDavid M. Rosen is an Assistant Professor in the Departments of Electrical & Computer Engineering and Mathematics and the Khoury College of Computer Sciences (by courtesy) at Northeastern University\, where he leads the Robust Autonomy Laboratory (NEURAL). Prior to joining Northeastern\, he was a Research Scientist at Oculus Research (now Meta Reality Labs) from 2016 to 2018\, and a Postdoctoral Associate at MIT’s Laboratory for Information and Decision Systems (LIDS) from 2018 to 2021. He holds the degrees of B.S. in Mathematics from the California Institute of Technology (2008)\, M.A. in Mathematics from the University of Texas at Austin (2010)\, and ScD in Computer Science from the Massachusetts Institute of Technology (2016). \n\nHe is broadly interested in the mathematical and algorithmic foundations of trustworthy machine perception\, learning\, and control. His work has been recognized with the IEEE Transactions on Robotics Best Paper Award (2024)\, an Honorable Mention for the IEEE Transactions on Robotics Best Paper Award (2021)\, a Best Student Paper Award at Robotics: Science and Systems (2020)\, a Best Paper Award at the International Workshop on the Algorithmic Foundations of Robotics (2016)\, and selection as an RSS Pioneer (2019).
URL:https://tilos.ai/event/tilos-sdsu-seminar-with-david-rosen-northeastern/
LOCATION:Lamden Hall 341 (SDSU) and Virtual\, San Diego\, CA\, 92182\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/11/rosen-david-scaled-e1762371210779.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251201
DTEND;VALUE=DATE:20251203
DTSTAMP:20260403T140323Z
CREATED:20250903T222016Z
LAST-MODIFIED:20250908T162145Z
UID:7473-1764547200-1764719999@tilos.ai
SUMMARY:Workshop on Topology\, Algebra\, and Geometry in Data Science (co-located with NeurIPS 2025)
DESCRIPTION:We are thrilled to announce the first official TAG-DS Stand-Alone Event–TAG… We’re it! This will be a two day event\, December 1 & 2\, 2025\, featuring keynotes\, poster sessions\, spotlight talks\, collaboration activities\, and community development. The dates and location were selected to align with NeurIPS 2025–twice the fun! The event will be hosted on the University of California San Diego campus both days and is readily accessible by public transit from downtown for those already planning to attend NeurIPS. There will be an associated Proceedings of Machine Learning Research volume for papers submitted to the archival track.
URL:https://tilos.ai/event/topology-algebra-and-geometry-in-data-science-2025/
LOCATION:UC San Diego\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/09/TAG-DS_logo-1-e1756938002600.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251203T110000
DTEND;TZID=America/Los_Angeles:20251203T120000
DTSTAMP:20260403T140323Z
CREATED:20250924T154049Z
LAST-MODIFIED:20260227T215023Z
UID:7606-1764759600-1764763200@tilos.ai
SUMMARY:TILOS-SDSU Seminar: 95 Percent: Bridging the Gap Between Prototype and Product
DESCRIPTION:Jeremy Schwartz\, Zoox \nAbstract: When transitioning from the academic world to the professional world of engineering\, one of the most common pitfalls is failing to understand the difference between a compelling prototype and a successful product. This talk will focus on that distinction. We will discuss the differences between them\, and the work required to evolve a good prototype into a real product. We will also discuss some common pitfalls encountered in product development\, and some of the practical software design considerations to keep in mind for development of robust\, mature code. The talk will include examples from my background developing robotic systems for air\, space\, and ground. \n\nJeremy Schwartz is a robotics engineer at Zoox with expertise in a wide variety of areas of mechanical and electrical engineering and computer science. His primary professional expertise is in autonomy and behavioral algorithms\, and he has worked in the aerospace industry as well as ground robotics\, specializing in autonomous systems of all kinds.
URL:https://tilos.ai/event/tilos-sdsu-seminar-with-jeremy-schwartz-of-zoox/
LOCATION:Lamden Hall 341 (SDSU) and Virtual\, San Diego\, CA\, 92182\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/schwartz-jeremy-e1758728403382.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251203T130000
DTEND;TZID=America/Los_Angeles:20251203T140000
DTSTAMP:20260403T140323
CREATED:20250930T163903Z
LAST-MODIFIED:20260304T210653Z
UID:7627-1764766800-1764770400@tilos.ai
SUMMARY:Optimization for AI and ML Seminar: Training Neural Networks at Any Scale
DESCRIPTION:Volkan Cevher\, École Polytechnique Fédérale de Lausanne \nAbstract: At the heart of deep learning’s transformative impact lies the concept of scale–encompassing both data and computational resources\, as well as their interaction with neural network architectures. Scale\, however\, presents critical challenges\, such as increased instability during training and prohibitively expensive model-specific tuning. Given the substantial resources required to train such models\, formulating high-confidence scaling hypotheses backed by rigorous theoretical research has become paramount. \nTo bridge theory and practice\, the talk explores a key mathematical ingredient of scaling in tandem with scaling theory: the numerical solution algorithms commonly employed in deep learning\, spanning domains from vision to language models. We unify these algorithms under a common master template\, making their foundational principles transparent. In doing so\, we reveal the interplay between adaptation to smoothness structures via online learning and the exploitation of optimization geometry through non-Euclidean norms. Our exposition moves beyond simply building larger models–it emphasizes strategic scaling\, offering insights that promise to advance the field while economizing on resources. \n\nVolkan Cevher received the B.Sc. (valedictorian) in electrical engineering from Bilkent University in Ankara\, Turkey\, in 1999 and the Ph.D. in electrical and computer engineering from the Georgia Institute of Technology in Atlanta\, GA in 2005. He was a Research Scientist with the University of Maryland\, College Park from 2006-2007 and also with Rice University in Houston\, TX\, from 2008-2009. Currently\, he is an Associate Professor at the Swiss Federal Institute of Technology Lausanne and a Faculty Fellow in the Electrical and Computer Engineering Department at Rice University. 
His research interests include machine learning\, signal processing theory\, optimization theory and methods\, and information theory. Dr. Cevher is an ELLIS fellow and was the recipient of the Google Faculty Research award in 2018\, the IEEE Signal Processing Society Best Paper Award in 2016\, a Best Paper Award at CAMSAP in 2015\, a Best Paper Award at SPARS in 2009\, and an ERC CG in 2016 as well as an ERC StG in 2011.
URL:https://tilos.ai/event/optimization-for-ai-and-ml-seminar-with-volkan-cevher-epfl/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/cevher-volkan-e1759250260485.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251204T120000
DTEND;TZID=America/Los_Angeles:20251204T140000
DTSTAMP:20260403T140323
CREATED:20251028T204347Z
LAST-MODIFIED:20251121T020250Z
UID:7692-1764849600-1764856800@tilos.ai
SUMMARY:Networking Lunch Reception at NeurIPS 2025
DESCRIPTION:TILOS will host a networking lunch reception during NeurIPS 2025 at Mezé Greek Fusion from 12:00-2:00pm on Thursday\, December 4\, 2025. This event is open to all NeurIPS attendees affiliated with any of the NSF AI Research Institutes\, as well as invited industry partners. Join us to connect with colleagues across the network of NSF AI Institutes\, share research interests\, and explore opportunities for collaboration. \nRegistration has closed. Please contact tilos@ucsd.edu with any questions. \nDate: Thursday\, December 4\, 2025 \nTime: 12:00 – 2:00pm PST \nLocation: Mezé Greek Fusion (3 blocks from the conference venue)
URL:https://tilos.ai/event/networking-lunch-reception-at-neurips-2025/
LOCATION:Mezé Greek Fusion\, San Diego\, CA\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/08/TILOS_Tree_Icon-e1743456398274-3qc6Qj.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251205T110000
DTEND;TZID=America/Los_Angeles:20251205T120000
DTSTAMP:20260403T140323
CREATED:20251014T194842Z
LAST-MODIFIED:20260304T210702Z
UID:7652-1764932400-1764936000@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Stochastic-Gradient and Diagonal-Scaling Algorithms for Constrained Optimization and Learning
DESCRIPTION:Frank E. Curtis\, Lehigh University \nAbstract: I will motivate and provide an overview of recent efforts in my research group on the design and analysis of stochastic-gradient-based algorithms for solving constrained optimization problems. I will focus in particular on our motivation for informed supervised learning\, where constraints in the training problem can be used to impose prior knowledge on the properties that should be possessed by a trained prediction model. In addition\, I will provide a detailed look at our newest extensions of heavy-ball and Adam schemes from the unconstrained to the equality-constrained setting\, for which we have shown state-of-the-art convergence guarantees. I will demonstrate the impressive practical performance of our methods using a few informed supervised learning problems. \n\nFrank E. Curtis is a Professor in the Department of Industrial and Systems Engineering at Lehigh University\, where he has been employed since 2009. He received a bachelor’s degree from the College of William and Mary in 2003 with a double major in Computer Science and Mathematics\, received a master’s degree in 2004 and Ph.D. degree in 2007 from the Department of Industrial Engineering and Management Science at Northwestern University\, and spent two years as a Postdoctoral Researcher in the Courant Institute of Mathematical Sciences at New York University from 2007 until 2009. His research focuses on the design\, analysis\, and implementation of numerical methods for solving large-scale nonlinear optimization problems. He received an Early Career Award from the Advanced Scientific Computing Research (ASCR) program of the U.S. Department of Energy (DoE)\, and has received funding from various programs of the U.S. National Science Foundation (NSF)\, including through a TRIPODS Phase I grant awarded to him and his collaborators at Lehigh\, Northwestern\, and Boston University. He has also received funding from the U.S. 
Office of Naval Research (ONR) and DoE’s Advanced Research Projects Agency-Energy (ARPA-E). He received\, along with Leon Bottou (Meta AI) and Jorge Nocedal (Northwestern)\, the 2021 SIAM/MOS Lagrange Prize in Continuous Optimization. He was awarded\, with James V. Burke (U. of Washington)\, Adrian Lewis (Cornell)\, and Michael Overton (NYU)\, the 2018 INFORMS Computing Society Prize. He and team members Daniel Molzahn (Georgia Tech)\, Andreas Waechter (Northwestern)\, Ermin Wei (Northwestern)\, and Elizabeth Wong (UC San Diego) were awarded second place in the ARPA-E Grid Optimization Competition in 2020. He currently serves as Area Editor for Continuous Optimization for Mathematics of Operations Research and serves as an Associate Editor for Mathematical Programming\, SIAM Journal on Optimization\, Operations Research\, IMA Journal of Numerical Analysis\, and Mathematical Programming Computation. He previously served as the Vice Chair for Nonlinear Programming for the INFORMS Optimization Society\, and is currently very active in professional societies and groups related to mathematical optimization\, including INFORMS\, the Mathematical Optimization Society\, and the SIAM Activity Group on Optimization.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-frank-e-curtis-lehigh-university/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/curtis-frank-e1760471303881.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251206
DTEND;VALUE=DATE:20251207
DTSTAMP:20260403T140323
CREATED:20250903T220623Z
LAST-MODIFIED:20250908T161741Z
UID:7469-1764979200-1765065599@tilos.ai
SUMMARY:NeurIPS 2025 Workshop on Differentiable Learning of Combinatorial Algorithms
DESCRIPTION:Combinatorial algorithms are fundamental across a wide range of domains\, owing to their ability to model optimization and decision-making tasks under complex constraints. These algorithms underpin practical applications such as vehicle routing\, network and chip design\, clustering\, and information retrieval. Combinatorial problems are also prominent in various areas of machine learning such as natural language processing and robotics. Recent research has focused on leveraging neural networks to design novel combinatorial algorithms and to develop techniques that allow seamless integration of classic combinatorial algorithms in differentiable neural network architectures. Developments in this field\, commonly referred to as neural combinatorial optimization\, have raised several fundamental questions that span both theory and practice. \nIn this workshop\, we take a broad perspective on designing differentiable algorithms for combinatorial optimization. The goal of the workshop is to explore novel ideas in the design and applications of neural combinatorial optimization\, as well as to improve our theoretical understanding of existing methods.
URL:https://tilos.ai/event/neurips-2025-workshop-on-differentiable-learning-of-combinatorial-algorithms-diffcoalg/
LOCATION:San Diego Convention Center\, San Diego\, CA\, United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/09/NeurIPS-logo-square-e1756938657121.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251206
DTEND;VALUE=DATE:20251207
DTSTAMP:20260403T140323
CREATED:20250908T160038Z
LAST-MODIFIED:20250908T161418Z
UID:7552-1764979200-1765065599@tilos.ai
SUMMARY:NeurIPS 2025 Workshop on Optimization for Machine Learning
DESCRIPTION:Optimization lies at the heart of many machine learning algorithms and enjoys great interest in our community. Indeed\, this intimate relation of optimization with ML is the key motivation for the OPT series of workshops. We aim to foster discussion\, discovery\, and dissemination of state-of-the-art research in optimization relevant to ML. \nThe focus of OPT 2025 is on “Statistics Meets Optimization”. Since its inception\, stochastic optimization has been grounded in statistical principles. Today\, many of the most pressing challenges in machine learning—such as generalization bounds\, the training dynamics of overparameterized models\, and the development of generative models—are directly inspired by statistical thinking. At the same time\, the scale and complexity of modern datasets\, along with the increasingly rich model classes used to represent them\, pose new questions about how optimization algorithms interact with these structures—both computationally and statistically. For example\, what role do data symmetries play in shaping optimization trajectories? How do statistical properties of the data affect the adaptivity and efficiency of learning algorithms? And how can optimization approaches be designed to scale with data while still preserving desirable statistical behavior? OPT 2025 will explore these questions with the goal of building bridges between the statistics and optimization communities\, and highlighting their shared impact on the theory and practice of machine learning.
URL:https://tilos.ai/event/neurips-workshop-on-optimization-for-machine-learning/
LOCATION:San Diego Convention Center\, San Diego\, CA\, United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/09/NeurIPS-logo-square-e1756938657121.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251206
DTEND;VALUE=DATE:20251207
DTSTAMP:20260403T140323
CREATED:20250908T161144Z
LAST-MODIFIED:20251031T180834Z
UID:7557-1764979200-1765065599@tilos.ai
SUMMARY:NeurIPS 2025 Workshop on Imageomics: Discovering Biological Knowledge from Images Using AI
DESCRIPTION:Imageomics is an emerging interdisciplinary field at the crossroads of machine learning (ML)\, computer vision (CV)\, and biological sciences. It leverages visual data—from microscopic images of single-celled species to videos of megafauna—to extract and analyze biological information\, specifically traits. By grounding ML models in existing scientific knowledge\, Imageomics aims to make traits computable from images\, facilitating insights into the evolution and function of living organisms. Imageomics poses research problems that resonate with the broad machine-learning community: multimodal representation learning\, object detection and tracking\, few-shot learning\, imbalanced-class learning\, video understanding\, 3D modeling\, hierarchical learning\, etc. When people leverage ML tools to solve biological questions\, the foundational bridges between ML and biological sciences also provide opportunities to address key challenges in ML\, creating a virtuous cycle between the two fields. \nWe welcome participation from anyone interested in learning about the field of Imageomics\, including: \n\nBiological Scientists who are interested in applying ML and CV to their research\, or who want to learn how to use ML tools to analyze biological images.\nMachine Learning Researchers who are interested in applying their expertise to biological image data\, or who want to learn about the unique challenges and opportunities in this domain.\n\nThe workshop will feature keynote talks\, paper presentations\, and discussions on the latest research in Imageomics. We encourage participants to submit papers and demos related to the topics outlined in the Call For Papers. The workshop will also provide opportunities for networking and collaboration among researchers from diverse backgrounds.
URL:https://tilos.ai/event/neurips-workhop-on-imageomics-discovering-biological-knowledge-from-images-using-ai/
LOCATION:San Diego Convention Center\, San Diego\, CA\, United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/09/NeurIPS-logo-square-e1756938657121.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251207
DTEND;VALUE=DATE:20251208
DTSTAMP:20260403T140323
CREATED:20250903T215535Z
LAST-MODIFIED:20250908T161649Z
UID:7462-1765065600-1765151999@tilos.ai
SUMMARY:NeurIPS 2025 Workshop on New Perspectives in Advancing Graph Machine Learning
DESCRIPTION:Graphs serve as a powerful representational framework for machine learning\, and their integration has substantially advanced the field. Indeed\, extensive studies have pushed forward graph machine learning (GML) in both theory and applications. Recently\, new perspectives have been emerging in the machine learning community\, including algebraic–topological analyses\, foundation models\, generative models\, and large models in applications. Leveraging these ideas for core graph machine learning holds great promise\, offering deeper theoretical insight\, new capabilities\, and more powerful\, application-aligned algorithms and models. The aim of this workshop is to explore and connect these new perspectives on GML\, and to identify overarching challenges and tools – in terms of theory\, methodology\, and modeling.
URL:https://tilos.ai/event/neurips-2025-workshop-new-perspectives-in-advancing-graph-machine-learning/
LOCATION:San Diego Convention Center\, San Diego\, CA\, United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2025/09/NeurIPS-logo-square-e1756938657121.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251208T100000
DTEND;TZID=America/Los_Angeles:20251208T110000
DTSTAMP:20260403T140323
CREATED:20251021T125343Z
LAST-MODIFIED:20260227T214449Z
UID:7677-1765188000-1765191600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Incentivizing Emergent Behaviors for LLMs via Reinforcement Learning
DESCRIPTION:Yi Wu\, Tsinghua University \nAbstract: Reinforcement Learning (RL) has become a powerful post-training method for eliciting advanced behaviors in large language models (LLMs). This talk presents recent results showing how RL can incentivize the emergence of LLM capabilities across three domains: (1) the multi-player deduction game Werewolf\, where RL-trained LLM agents develop strategic behaviors and outperform strong human players; (2) agentic search\, where large-scale RL enables a 32B model to run multi-step search to answer non-trivial questions beyond commercial baselines; and (3) efficient reasoning\, where RL mitigates over-thinking and improves both reliability and compute efficiency. \nThe papers can be found at: \n\nWerewolf: https://arxiv.org/abs/2310.18940 (ICML24)\, https://arxiv.org/abs/2502.04686 (ICML25)\nASearcher: https://arxiv.org/abs/2508.07976\nThinking Efficiency: https://www.arxiv.org/abs/2506.07104 (NeurIPS25)\n\nAll the projects are trained using our large-scale agentic RL system\, AReaL\, which is open-source at https://github.com/inclusionAI/AReaL with its paper at https://arxiv.org/abs/2505.24298 (NeurIPS25). \n\nYi Wu is an assistant professor at the Institute for Interdisciplinary Information Sciences (IIIS)\, Tsinghua University. He obtained his Ph.D. from UC Berkeley and was a researcher at OpenAI from 2019 to 2020. His research focuses on reinforcement learning\, multi-agent learning\, and LLM agents. His representative works include the value iteration network\, the MADDPG/MAPPO algorithm\, OpenAI’s hide-and-seek project\, and the AReaL project. He received the best paper award at NIPS 2016\, was a best demo award finalist at ICRA 2024\, and received the MIT TR35 Asia Pacific 2025 award.
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-yi-wu-tsinghua-university/
LOCATION:Qualcomm Conference Center Room B (Jacobs Hall first floor) and Virtual\, 9736 Engineers Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/wu-yi.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260109T110000
DTEND;TZID=America/Los_Angeles:20260109T120000
DTSTAMP:20260403T140323
CREATED:20251014T195932Z
LAST-MODIFIED:20260304T210221Z
UID:7661-1767956400-1767960000@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Randomized linear algebra with subspace injections
DESCRIPTION:Joel Tropp\, Caltech \nAbstract: To achieve the greatest possible speed\, practitioners regularly implement randomized algorithms for low-rank approximation and least-squares regression with structured dimension reduction maps. This talk outlines a new perspective on structured dimension reduction\, based on the injectivity properties of the dimension reduction map. This approach provides sharper bounds for sparse dimension reduction maps\, and it leads to exponential improvements for tensor-product dimension reduction. Empirical evidence confirms that these types of structured random matrices offer exemplary performance for a range of synthetic problems and contemporary scientific applications. \nJoint work with Chris Camaño\, Ethan Epperly\, and Raphael Meyer; available at arXiv:2508.21189. \n\nJoel A. Tropp is Steele Family Professor of Applied & Computational Mathematics at the California Institute of Technology. His research centers on applied mathematics\, machine learning\, data science\, numerical algorithms\, and random matrix theory. Some of his best-known contributions include matching pursuit algorithms\, randomized SVD algorithms\, matrix concentration inequalities\, and statistical phase transitions. Prof. Tropp attained the Ph.D. degree in Computational Applied Mathematics at the University of Texas at Austin in 2004\, and he joined Caltech in 2007. He won the PECASE in 2008\, and he was recognized as a Highly Cited Researcher in Computer Science each year from 2014–2018. He is co-founder of the SIAM Journal on Mathematics of Data Science (SIMODS)\, and he was co-chair of the inaugural 2020 SIAM Conference on the Mathematics of Data Science. Prof. Tropp was elected SIAM Fellow in 2019\, IEEE Fellow in 2020\, and IMS Fellow in 2024. He received the 2025 Richard P. Feynman Prize for Excellence in Teaching at Caltech. He is an invited speaker at the 2026 International Congress of Mathematicians (ICM).
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-joel-tropp-caltech/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/tropp-joel-e1760471957302.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260111
DTEND;VALUE=DATE:20260117
DTSTAMP:20260403T140323
CREATED:20251209T233013Z
LAST-MODIFIED:20251209T233013Z
UID:7981-1768089600-1768607999@tilos.ai
SUMMARY:Gordon Research Conference on Embodied Intelligence
DESCRIPTION:The Robotics GRC is a premier\, international scientific conference focused on advancing the frontiers of science through the presentation of cutting-edge and unpublished research\, prioritizing time for discussion after each talk and fostering informal interactions among scientists of all career stages. The conference program includes an array of speakers and discussion leaders from institutions and organizations worldwide\, concentrating on the latest developments in the field. The conference is five days long and held in a remote location to increase the sense of camaraderie and create scientific communities\, with lasting collaborations and friendships. In addition to premier talks\, the conference has designated time for poster sessions from individuals of all career stages\, and afternoon free time and communal meals allow for informal networking opportunities with leaders in the field. \nThis year’s conference will focus on adaptive behavior and learning in animals and robots. We will explore how biological inspiration drives advancements in robotics\, from simple reactive behaviors to complex planning and learning systems. Insights from biomechanics\, neuroscience\, and animal studies are increasingly shaping the design and control of robots\, making them more robust and adaptable. A key focus will be on embodied intelligence\, enabling robots to excel in locomotion\, manipulation\, and interactions with other agents. \nConversely\, robotics research is also contributing to biology. Studies of perception and action in robotic systems are leading to new mathematical models that help integrative biologists understand locomotion\, manipulation\, and collective behavior in animals. \nBy bringing together experts from robotics\, biomechanics\, and neuroscience\, this conference aims to foster cross-disciplinary insights that will push the boundaries of both fields. 
\nJoin us for a dynamic exchange of ideas at the intersection of robotics and biology\, where engineering meets evolution.
URL:https://tilos.ai/event/gordon-research-conference-on-embodied-intelligence/
LOCATION:Four Points Sheraton / Holiday Inn Express\, 1050 Schooner Drive\, Ventura\, CA\, 93001\, United States
CATEGORIES:TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/12/fourpoints.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260128T110000
DTEND;TZID=America/Los_Angeles:20260128T120000
DTSTAMP:20260403T140323
CREATED:20251031T211533Z
LAST-MODIFIED:20260227T213734Z
UID:7725-1769598000-1769601600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Safety\, Representations\, and Generative Learning in Dynamical Systems
DESCRIPTION:Koushil Sreenath\, UC Berkeley \nAbstract: This talk explores the interplay between model-based guarantees and learning-based flexibility in the control of dynamical systems. I begin with safety-critical control using control barrier functions (CBFs)\, highlighting that while CBFs enforce state constraints\, they may induce unstable internal dynamics. I introduce conditions under which CBF-based safety filters ensure boundedness of the full system state. I then transition to learning representations of hybrid dynamical systems. I present a framework that learns continuous neural representations by exploiting the geometric structure induced by guards and resets\, enabling accurate flow prediction without explicit mode switching. Finally\, I discuss generative learning approaches for control\, emphasizing guided diffusion models that jointly represent states and actions. Through applications to agile humanoid locomotion\, motion synthesis\, and dynamic manipulation\, I demonstrate how generative models can produce versatile\, long-horizon behaviors while respecting physical constraints. Together\, these results highlight how structure\, geometry\, and learning can bridge safety guarantees and expressive control in complex dynamical systems. \n\nKoushil Sreenath is an Associate Professor of Mechanical Engineering at UC Berkeley. He received a Ph.D. degree in Electrical Engineering and Computer Science and an M.S. degree in Applied Mathematics from the University of Michigan at Ann Arbor\, MI\, in 2011. He was a Postdoctoral Scholar at the GRASP Lab at University of Pennsylvania from 2011 to 2013 and an Assistant Professor at Carnegie Mellon University from 2013 to 2017. His research interest lies at the intersection of highly dynamic robotics and applied nonlinear control. His work on dynamic legged locomotion was featured on The Discovery Channel\, CNN\, ESPN\, FOX\, and CBS. 
His work on dynamic aerial manipulation was featured on the IEEE Spectrum\, New Scientist\, and Huffington Post. His work on adaptive sampling with mobile sensor networks was published as a book. He received the NSF CAREER\, Hellman Fellow\, Google Faculty Research Award in Robotics\, and Best Paper Awards at Learning for Dynamics and Control (L4DC) and Robotics: Science and Systems (RSS).
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-koushil-sreenath-uc-berkeley/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/sreenath-koushil-1-e1769450413875.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260130T110000
DTEND;TZID=America/Los_Angeles:20260130T120000
DTSTAMP:20260403T140323
CREATED:20251014T200143Z
LAST-MODIFIED:20260304T210210Z
UID:7663-1769770800-1769774400@tilos.ai
SUMMARY:[CANCELED] Optimization for ML and AI Seminar: Fantastic Pretraining Optimizers and Where to Find Them
DESCRIPTION:Tengyu Ma\, Stanford \nAbstract: AdamW has long been the dominant optimizer in language model pretraining\, despite numerous claims that alternative optimizers offer 1.4 to 2x speedup. We posit that two methodological shortcomings have obscured fair comparisons and hindered practical adoption: (i) unequal hyperparameter tuning and (ii) limited or misleading evaluation setups. To address these two issues\, we conduct a systematic study of ten deep learning optimizers across four model scales (0.1B-1.2B parameters) and data-to-model ratios (1-8x the Chinchilla optimum). We find that fair and informative comparisons require rigorous hyperparameter tuning and evaluations across a range of model scales and data-to-model ratios\, performed at the end of training. First\, optimal hyperparameters for one optimizer may be suboptimal for another\, making blind hyperparameter transfer unfair. Second\, the actual speedup of many proposed optimizers over well-tuned baselines is lower than claimed and decreases with model size to only 1.1x for 1.2B parameter models. Third\, comparing intermediate checkpoints before reaching the target training budgets can be misleading\, as rankings between two optimizers can flip during training due to learning rate decay. Through our thorough investigation\, we find that all the fastest optimizers\, such as Muon and Soap\, use matrices as preconditioners—multiplying gradients with matrices rather than entry-wise scalars. However\, the speedup of matrix-based optimizers is inversely proportional to model scale\, decreasing from 1.4x over AdamW for 0.1B parameter models to merely 1.1x for 1.2B parameter models. \n\nTengyu Ma is an assistant professor of computer science at Stanford University. 
His research interests broadly include topics in machine learning\, algorithms and their theory\, such as deep learning\, (deep) reinforcement learning\, pre-training / foundation models\, robustness\, non-convex optimization\, distributed optimization\, and high-dimensional statistics. \nZoom: https://bit.ly/Opt-AI-ML
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-tengyu-ma-stanford/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/ma-tengyu-e1760473083457.jpg
END:VEVENT
END:VCALENDAR