BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260227T110000
DTEND;TZID=America/Los_Angeles:20260227T120000
DTSTAMP:20260404T005317Z
CREATED:20251003T192706Z
LAST-MODIFIED:20260304T205819Z
UID:7637-1772190000-1772193600@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: (De)regularized Wasserstein Gradient Flows via Reproducing Kernels
DESCRIPTION:Bharath Sriperumbudur\, Pennsylvania State University \nAbstract: Wasserstein gradient flows have become a popular tool in machine learning with applications in sampling\, variational inference\, generative modeling\, and reinforcement learning\, among others. The Wasserstein gradient flow (WGF) involves minimizing a probability functional over the Wasserstein space (by taking into account the intrinsic geometry of the Wasserstein space). In this work\, we introduce approximate/regularized Wasserstein gradient flows in two different settings: (a) approximate the probability functional and (b) approximate the Wasserstein geometry. In (a)\, we consider the probability functional to be the chi^2-divergence\, whose WGF is difficult to implement. To this end\, we propose a (de)-regularization of the Maximum Mean Discrepancy (DrMMD) as an approximation of the chi^2-divergence and develop an approximate WGF\, which is easy to implement and has applications in generative modeling. On the other hand\, in the setting of (b)\, we use the Kullback-Leibler divergence as the probability functional and develop an approximation to the Wasserstein geometry\, which allows for a more efficient implementation than the exact WGF\, with applications in sampling. In both settings\, we present a variety of theoretical results that relate the approximate flow to the exact flow and demonstrate the superiority of the approximate flows via numerical simulations. \n\nBharath Sriperumbudur is a professor in the Department of Statistics (with a courtesy appointment in the Department of Mathematics) at the Pennsylvania State University. His research interests include non-parametric statistics\, machine learning\, statistical learning theory\, optimal transport and gradient flows\, regularization and inverse problems\, reproducing kernel spaces in probability and statistics\, and functional and topological data analysis.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-bharath-sriperumbudur-penn-state/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/sriperumbudur-bharath-e1759519613665.jpg
END:VEVENT
END:VCALENDAR