BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for TILOS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251001T110000
DTEND;TZID=America/Los_Angeles:20251001T120000
DTSTAMP:20260404T021214Z
CREATED:20250828T192015Z
LAST-MODIFIED:20260304T210603Z
UID:7259-1759316400-1759320000@tilos.ai
SUMMARY:TILOS-HDSI Seminar: A New Paradigm for Learning with Distribution Shift
DESCRIPTION:Adam Klivans\, The University of Texas at Austin \nAbstract: We revisit the fundamental problem of learning with distribution shift\, where a learner is given labeled samples from a training distribution D\, unlabeled samples from a test distribution D′\, and is asked to output a classifier with low test error. The standard approach in this setting is to prove a generalization bound in terms of some notion of distance between D and D′. These distances\, however\, are difficult to compute\, and this has been the main stumbling block for efficient algorithm design over the last two decades. \nWe sidestep this issue and define a new model called TDS learning\, where a learner runs a test on the training set and is allowed to reject if this test detects distribution shift relative to a fixed output classifier. This approach leads to the first set of efficient algorithms for learning with distribution shift that make no assumptions about the test distribution. Finally\, we discuss how our techniques have recently been used to solve longstanding problems in supervised learning with contamination. \n\nAdam Klivans is a Professor of Computer Science at the University of Texas at Austin and Director of the NSF AI Institute for Foundations of Machine Learning (IFML). His research interests lie in machine learning and theoretical computer science\, in particular\, Learning Theory\, Computational Complexity\, Pseudorandomness\, Limit Theorems\, and Gaussian Space. Dr. Klivans is a recipient of the NSF CAREER Award and serves on the editorial boards of Theory of Computing and the journal Machine Learning.
URL:https://tilos.ai/event/tilos-seminar-with-adam-klivans/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/klivans-adam-e1756405638325.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251024T110000
DTEND;TZID=America/Los_Angeles:20251024T120000
DTSTAMP:20260404T021214Z
CREATED:20250925T175700Z
LAST-MODIFIED:20260304T210610Z
UID:7611-1761303600-1761307200@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: High-dimensional Optimization with Applications to Compute-Optimal Neural Scaling Laws
DESCRIPTION:Courtney Paquette\, McGill University \nAbstract: Given the massive scale of modern ML models\, we now get only a single shot to train them effectively. This restricts our ability to test multiple architectures and hyper-parameter configurations. Instead\, we need to understand how these models scale\, allowing us to experiment with smaller problems and then apply those insights to larger-scale models. In this talk\, I will present a framework for analyzing scaling laws in stochastic learning algorithms using a power-law random features model (PLRF)\, leveraging high-dimensional probability and random matrix theory. I will then use this scaling law to address the compute-optimal question: How should we choose model size and hyper-parameters to achieve the best possible performance in the most compute-efficient manner? Then\, using this PLRF model\, I will devise a new momentum-based algorithm that (provably) improves the scaling law exponent. Finally\, I will present some numerical experiments on LSTMs that show how this new stochastic algorithm can be applied to real data to improve the compute-optimal exponent. \n\nCourtney Paquette is an assistant professor in the Department of Mathematics and Statistics at McGill University\, a CIFAR AI Chair (MILA)\, and an active member of the Montreal Machine Learning Optimization Group (MTL MLOpt) at MILA. Her research broadly focuses on designing and analyzing algorithms for large-scale optimization problems\, motivated by applications in data science\, using techniques that draw from a variety of fields\, including probability\, complexity theory\, and convex and nonsmooth analysis. Dr. Paquette has been a lead organizer of the OPT-ML Workshop at NeurIPS since 2020 and is a lead organizer (and original creator) of the High-dimensional Learning Dynamics (HiLD) Workshop at ICML.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-courtney-paquette-mcgill-university/
LOCATION:CSE 1242 and Virtual\, 3235 Voigt Dr\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/paquette-courtney-scaled-e1758822988381.jpg
END:VEVENT
END:VCALENDAR