• TILOS-HDSI Seminar: Kinetic Theory Perspective of Foundation Models for Physics

HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

Maarten de Hoop, Rice University Abstract: We present a kinetic theory perspective of foundation models for physics. We begin by providing a mathematical framework for analyzing transformers. To uniformly address their expressivity, we consider the case in which the mappings are conditioned on a context represented by a probability distribution of tokens. That is, transformers become […]

  • TILOS-HDSI Seminar: Neuromorphic LLMs

HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

Jason Eshraghian, UC Santa Cruz Abstract: This talk will show you what neuromorphic computing can do when an academic lab accidentally pulls $2 million of GPU-hours. We will showcase a series of frontier reasoning LLMs developed out of an academic lab, from data curation and pre-training to post-training and alignment. These models surpass leading LLMs from […]

  • Optimization for ML and AI Seminar: (De)regularized Wasserstein Gradient Flows via Reproducing Kernels

HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    Bharath Sriperumbudur, Pennsylvania State University Abstract: Wasserstein gradient flows have become a popular tool in machine learning with applications in sampling, variational inference, generative modeling, and reinforcement learning, among others. The Wasserstein gradient flow (WGF) involves minimizing a probability functional over the Wasserstein space (by taking into account the intrinsic geometry of the Wasserstein space). […]
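For readers unfamiliar with the central object in this abstract: the Wasserstein gradient flow of a functional F over probability densities is standardly written as a continuity equation driven by the first variation of F (general background only; the speaker's reproducing-kernel (de)regularization is not reproduced here):

```latex
% Wasserstein gradient flow of a functional F: the density rho_t evolves
% by a continuity equation driven by the first variation of F.
\[
  \partial_t \rho_t \;=\; \nabla \cdot \Big( \rho_t \, \nabla \frac{\delta F}{\delta \rho}(\rho_t) \Big).
\]
% For the common choice F(\rho) = \mathrm{KL}(\rho \,\|\, \pi) with target \pi,
% this recovers the Fokker--Planck equation of the Langevin dynamics:
\[
  \partial_t \rho_t \;=\; \nabla \cdot \big( \rho_t \, \nabla \log (\rho_t / \pi) \big).
\]
```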

  • Optimization for ML and AI Seminar: Transformers Learn Generalizable Chain-of-Thought Reasoning via Gradient Descent

HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

Yuejie Chi, Yale Abstract: Transformers have demonstrated remarkable chain-of-thought reasoning capabilities, yet our understanding of the mechanisms by which they acquire and extrapolate these capabilities remains limited. This talk presents a theoretical analysis of transformers trained via gradient descent for symbolic reasoning and state tracking tasks with increasing problem complexity. Our analysis reveals the coordination of multi-head […]

  • TILOS-SDSU ExpandAI Workshop

Qualcomm Conference Center (Jacobs Hall, first floor), 9736 Engineers Ln, La Jolla, CA, United States

Agenda 1:00 – 1:10 pm: Welcome and opening remarks 1:10 – 1:30 pm: Invited talk by Dr. Lily Weng, Assistant Professor, Halıcıoğlu Data Science Institute and Department of Computer Science and Engineering, UC San Diego 1:30 – 1:50 pm: Invited talk by Dr. Reza Akhavian, Associate Professor and Jim Ryan Endowed Chair in Construction Engineering […]

  • TILOS-SDSU Seminar: Autopilots Need Parachutes: Reliability Lessons from LLM-Automated Embedded AI Systems

Lamden Hall 341 (SDSU) and Virtual, San Diego, CA, United States

    Roberto Morabito, EURECOM Abstract: Embedded AI systems are becoming increasingly complex to develop and maintain, requiring specialized workflows that span data processing, model conversion, optimization, and deployment across heterogeneous hardware platforms. Recently, large language models have emerged as a promising tool to automate parts of this lifecycle. In this talk, I present recent work investigating […]

  • TILOS-Optimization for ML and AI Seminar: Implicit bias results for Muon, Adam, and Friends

HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

Matus Telgarsky, New York University Abstract: This talk will give both an empirical overview and a few simple bounds controlling the optimization path, or implicit bias, of modern optimization methods such as Adam and Muon (and Friends). The talk will begin with empirical results demonstrating the implicit bias phenomenon with shallow networks and also transformers […]
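The implicit-bias phenomenon referenced above can be seen in its simplest classical instance: gradient descent on the logistic loss over linearly separable data, where the iterate norm diverges while its direction approaches the max-margin separator. A minimal sketch of that background example (illustrative only, not the speaker's Muon/Adam analysis; the toy data and step size are made up):

```python
import numpy as np

# Linearly separable toy data: labels y_i in {+1, -1}, features x_i.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.5, -1.0], [-1.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
lr = 0.1
for _ in range(20000):
    margins = y * (X @ w)
    # Gradient of the logistic loss (1/n) * sum_i log(1 + exp(-margin_i)).
    p = 1.0 / (1.0 + np.exp(margins))            # sigmoid(-margin)
    grad = -(X * (y * p)[:, None]).mean(axis=0)
    w -= lr * grad

# Implicit bias: ||w|| keeps growing, but the *direction* w/||w||
# stabilizes near the max-margin separator of the data.
direction = w / np.linalg.norm(w)
print("normalized iterate:", direction)
print("min normalized margin:", (y * (X @ direction)).min())
```

Running longer does not change the printed direction much; only the norm grows, which is exactly the "optimization path" behavior the abstract alludes to.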

  • TILOS-HDSI Seminar: Engineering Interpretable and Faithful AI Systems

HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    René Vidal, University of Pennsylvania Abstract: Large Language Models (LLMs) and Vision Language Models (VLMs) have achieved remarkable performance across a wide range of tasks. However, their growing deployment has exposed fundamental limitations in faithfulness, safety, and transparency. In this talk, I will present a unified perspective on addressing these challenges through principled model interventions […]

  • Optimization for ML and AI Seminar: A survey of the mixing times of the Proximal Sampler algorithm

HDSI 123 and Virtual, 3234 Matthews Ln, La Jolla, CA, United States

    Andre Wibisono, Yale University Abstract: Sampling is a fundamental algorithmic task with many connections to optimization. In this talk, we survey a recent algorithm for sampling known as the Proximal Sampler, which can be seen as a proximal discretization of the continuous-time Langevin dynamics, and achieves the current state-of-the-art iteration complexity for sampling in discrete […]
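As background for this abstract: the Proximal Sampler alternates two Gibbs steps on an augmented target proportional to exp(-f(x) - ||x - y||^2 / (2*eta)), namely a Gaussian forward step y ~ N(x, eta*I) and a backward step drawing x from the "restricted Gaussian oracle". A minimal sketch for a 1-D standard-Gaussian target, where the oracle happens to be available in closed form (an illustrative simplification; for general f the oracle itself must be implemented or approximated):

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 1.0        # step size of the proximal discretization
# Target: pi(x) ∝ exp(-f(x)) with f(x) = x^2 / 2, i.e. N(0, 1).

x = 5.0          # deliberately poor initialization
samples = []
for t in range(60000):
    # Forward step: y | x ~ N(x, eta).
    y = x + np.sqrt(eta) * rng.standard_normal()
    # Backward step (restricted Gaussian oracle):
    #   x | y ∝ exp(-x^2/2 - (x - y)^2 / (2*eta)),
    # which for this quadratic f is Gaussian with
    # mean y/(1+eta) and variance eta/(1+eta).
    x = y / (1 + eta) + np.sqrt(eta / (1 + eta)) * rng.standard_normal()
    if t >= 1000:  # discard burn-in
        samples.append(x)

samples = np.array(samples)
print("sample mean:", np.mean(samples), "sample var:", np.var(samples))
```

The sample mean and variance settle near 0 and 1, the moments of the target; the interesting theory the talk surveys is how fast this happens as a function of eta and of the regularity of f.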