  • TILOS-SDSU Seminar: 95 Percent: Bridging the Gap Between Prototype and Product

    Lamden Hall 341 (SDSU) and Virtual, San Diego, CA, United States

    Jeremy Schwartz, Zoox. Abstract: When transitioning from the academic world to the professional world of engineering, one of the most common pitfalls is failing to understand the difference between a compelling prototype and a successful product. This talk will focus on that distinction: we will discuss the differences between the two, and the work required to […]

  • Optimization for ML and AI Seminar: Training Neural Networks at Any Scale

    HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    Volkan Cevher, École Polytechnique Fédérale de Lausanne. Abstract: At the heart of deep learning's transformative impact lies the concept of scale, encompassing both data and computational resources, as well as their interaction with neural network architectures. Scale, however, presents critical challenges, such as increased instability during training and prohibitively expensive model-specific tuning. Given the substantial resources […]
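
    As general background on the training instability the abstract mentions (not a method from the talk itself), one standard stabilization tool is clipping gradients by their global norm. A minimal NumPy sketch:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint L2 norm is at most max_norm."""
    total = float(np.sqrt(sum(np.sum(g * g) for g in grads)))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads], total

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = sqrt(9+16+144) = 13
clipped, norm = clip_by_global_norm(grads, max_norm=1.0)
print(norm)  # 13.0
```

    Clipping the joint norm (rather than each tensor separately) preserves the direction of the overall update while bounding its size.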

  • Networking Lunch Reception at NeurIPS 2025

    Mezé Greek Fusion, San Diego, CA, United States

    TILOS will host a networking lunch reception during NeurIPS 2025 at Mezé Greek Fusion from 12:00-2:00pm on Thursday, December 4, 2025. This event is open to all NeurIPS attendees affiliated with any of the NSF AI Research Institutes, as well as invited industry partners. Join us to connect with colleagues across the network of NSF […]

  • Optimization for ML and AI Seminar: Stochastic-Gradient and Diagonal-Scaling Algorithms for Constrained Optimization and Learning

    HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    Frank E. Curtis, Lehigh University. Abstract: I will motivate and provide an overview of recent efforts in my research group on the design and analysis of stochastic-gradient-based algorithms for solving constrained optimization problems. I will focus in particular on our motivation for informed supervised learning, where constraints in the training problem can be used to […]
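
    For background, the simplest member of this algorithm family is projected stochastic gradient descent; a minimal sketch on a box-constrained least-squares problem (illustrative only, and far simpler than the algorithms the talk analyzes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic problem: min_x (1/n)||A x - b||^2  subject to  0 <= x <= 1.
n, d = 500, 10
A = rng.normal(size=(n, d))
x_true = rng.uniform(0.0, 1.0, size=d)           # feasible ground truth
b = A @ x_true + 0.01 * rng.normal(size=n)

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box constraint set."""
    return np.clip(x, lo, hi)

x = np.zeros(d)
step = 0.05
for _ in range(2000):
    batch = rng.integers(0, n, size=32)          # sample a minibatch
    g = 2.0 * A[batch].T @ (A[batch] @ x - b[batch]) / len(batch)
    x = project_box(x - step * g)                # stochastic gradient step + projection

err = np.linalg.norm(x - x_true)
print(err)
```

    Each iteration takes a gradient step using only a minibatch of the data, then projects back onto the feasible set, so every iterate satisfies the constraints.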

  • Optimization for ML and AI Seminar: Randomized linear algebra with subspace injections

    HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    Joel Tropp, Caltech. Abstract: To achieve the greatest possible speed, practitioners regularly implement randomized algorithms for low-rank approximation and least-squares regression with structured dimension reduction maps. This talk outlines a new perspective on structured dimension reduction, based on the injectivity properties of the dimension reduction map. This approach provides sharper bounds for sparse dimension reduction […]
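
    As illustrative background (not the talk's injectivity-based analysis), the basic sketch-and-solve recipe for overdetermined least squares, here with a dense Gaussian sketch standing in for the fast structured or sparse maps used in practice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tall least-squares problem: min_x ||A x - b||_2 with n >> d.
n, d = 5000, 50
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Sketch-and-solve: compress the rows with a random map S, then solve
# the much smaller problem min_x ||S A x - S b||_2.
m = 500                                          # sketch size, m >> d
S = rng.normal(size=(m, n)) / np.sqrt(m)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(rel_err)
```

    The sketched problem has m rows instead of n, and with m a modest multiple of d its solution is close to the exact one with high probability; structured maps reduce the cost of forming S A from the dense-Gaussian cost shown here.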

  • Gordon Research Conference on Embodied Intelligence

    Four Points Sheraton / Holiday Inn Express, 1050 Schooner Drive, Ventura, CA, United States

    The Robotics GRC is a premier, international scientific conference focused on advancing the frontiers of science through the presentation of cutting-edge and unpublished research, prioritizing time for discussion after each talk and fostering informal interactions among scientists of all career stages. The conference program includes an array of speakers and discussion leaders from institutions and […]

  • [CANCELED] Optimization for ML and AI Seminar: Fantastic Pretraining Optimizers and Where to Find Them

    HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    Tengyu Ma, Stanford. Abstract: AdamW has long been the dominant optimizer in language model pretraining, despite numerous claims that alternative optimizers offer 1.4 to 2x speedups. We posit that two methodological shortcomings have obscured fair comparisons and hindered practical adoption: (i) unequal hyperparameter tuning and (ii) limited or misleading evaluation setups. To address these two […]
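
    For reference, the AdamW update the abstract refers to (Adam's moment estimates plus decoupled weight decay) can be sketched in a few lines; the hyperparameters below are illustrative defaults, not values from the talk:

```python
import numpy as np

def adamw_step(w, g, m, v, t, lr=0.005, beta1=0.9, beta2=0.999,
               eps=1e-8, wd=0.01):
    """One AdamW update: Adam moment estimates plus *decoupled* weight decay."""
    m = beta1 * m + (1 - beta1) * g              # first-moment EMA
    v = beta2 * v + (1 - beta2) * g * g          # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Weight decay is applied directly to w, not folded into the gradient.
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)
    return w, m, v

# Demo: minimize f(w) = ||w - 1||^2 with AdamW.
w = np.zeros(3)
m, v = np.zeros(3), np.zeros(3)
for t in range(1, 2001):
    g = 2.0 * (w - 1.0)
    w, m, v = adamw_step(w, g, m, v, t)
print(w)
```

    The decoupling (shrinking w directly rather than adding wd * w to the gradient before the adaptive scaling) is what distinguishes AdamW from Adam with L2 regularization.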

  • Optimization for ML and AI Seminar: Extended Convex Lifting for Policy Optimization in Control

    HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    Yang Zheng, UC San Diego. Abstract: Direct policy search has achieved great empirical success in reinforcement learning. Many recent studies have revisited its theoretical foundation for continuous control, which reveals elegant nonconvex geometry in various benchmark problems. In this talk, we introduce an Extended Convex Lifting (ECL) framework, which reveals hidden convexity in classical optimal […]

  • Optimization for ML and AI Seminar: (De)regularized Wasserstein Gradient Flows via Reproducing Kernels

    HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    Bharath Sriperumbudur, Pennsylvania State University. Abstract: Wasserstein gradient flows have become a popular tool in machine learning, with applications in sampling, variational inference, generative modeling, and reinforcement learning, among others. The Wasserstein gradient flow (WGF) involves minimizing a probability functional over the Wasserstein space, taking into account the intrinsic geometry of that space. […]
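
    As background on kernelized approximations to Wasserstein gradient flows, one well-known example is Stein variational gradient descent (SVGD), which uses a reproducing kernel to turn the flow into an interacting-particle update. A minimal 1-D sketch targeting N(0, 1) (illustrative only, and not the (de)regularized flows of the talk):

```python
import numpy as np

rng = np.random.default_rng(2)

def svgd_step(x, step=0.1, h=1.0):
    """One SVGD update for the target N(0, 1), using an RBF kernel with bandwidth h."""
    diff = x[:, None] - x[None, :]               # diff[i, j] = x_i - x_j
    K = np.exp(-diff**2 / (2 * h**2))            # kernel matrix
    score = -x                                   # grad log density of N(0, 1)
    attract = K @ score                          # kernel-smoothed score term
    repulse = (diff * K).sum(axis=1) / h**2      # repulsion keeps particles spread out
    return x + step * (attract + repulse) / len(x)

x = rng.uniform(-4.0, 4.0, size=50)              # initial particle cloud
for _ in range(1000):
    x = svgd_step(x)
print(x.mean(), x.var())
```

    The attractive term drives particles toward high-density regions of the target, while the kernel-derivative term repels them from each other; at convergence the empirical distribution of the particles approximates the target.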

  • Optimization for ML and AI Seminar: Transformers Learn Generalizable Chain-of-Thought Reasoning via Gradient Descent

    HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    Yuejie Chi, Yale. Abstract: Transformers have demonstrated remarkable chain-of-thought reasoning capabilities, yet our understanding of the mechanisms by which they acquire and extrapolate these capabilities remains limited. This talk presents a theoretical analysis of transformers trained via gradient descent on symbolic reasoning and state-tracking tasks of increasing problem complexity. Our analysis reveals the coordination of multi-head […]