TILOS-OPTML++ Seminar: Constant Regret in Online Decision-Making

Virtual

Siddhartha Banerjee, Cornell University Abstract: I will present a class of finite-horizon control problems, where we see a random stream of arrivals, need to select actions at each step, and where the final objective depends only on the aggregate type-action counts; this captures many widely studied control problems, including online resource allocation, dynamic pricing, generalized assignment, online […]
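
The abstract is truncated, but the setup it describes can be made concrete with a toy experiment. The sketch below is purely illustrative and not from the talk: it simulates a simple budgeted accept/reject problem (my own choice of instance) in which a certainty-equivalent threshold policy is compared against the offline optimum, and regret is measured in hindsight.

```python
import numpy as np

rng = np.random.default_rng(0)
T, B = 1000, 200          # horizon and budget (assumed toy instance)
rewards = rng.uniform(size=T)

# Certainty-equivalent policy: accept an arrival if its reward exceeds the
# quantile implied by the remaining budget-to-time ratio.
budget, online_value = B, 0.0
for t, r in enumerate(rewards):
    remaining = T - t
    if budget > 0 and r >= 1.0 - budget / remaining:
        online_value += r
        budget -= 1

# Offline optimum: accept the B largest rewards in hindsight.
offline_value = np.sort(rewards)[-B:].sum()
print(f"regret = {offline_value - online_value:.2f}")
```

Averaging the printed regret over many random seeds gives a feel for how such adaptive threshold policies behave as the horizon grows, which is the kind of question the talk addresses.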

TILOS-OPTML++ Seminar: Sums of Squares: from Algebra to Analysis

Virtual

Francis Bach, Inria, ENS, and PSL Paris Abstract: The representation of non-negative functions as sums of squares has become an important tool in many modeling and optimization tasks. Traditionally applied to polynomial functions, it requires rich tools from algebraic geometry that have led to many developments in the last twenty years. In this talk, I will […]
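
The core computational idea behind sum-of-squares representations, certifying non-negativity of a polynomial by finding a positive semidefinite Gram matrix, can be illustrated on a tiny example. The polynomial and the cvxpy formulation below are illustrative choices of mine, not material from the talk.

```python
import numpy as np
import cvxpy as cp

# p(x) = 1 + 2x + 3x^2 + 2x^3 + x^4  (example polynomial chosen for illustration)
coeffs = [1, 2, 3, 2, 1]

# Look for a PSD Gram matrix Q with p(x) = z(x)^T Q z(x), where z(x) = [1, x, x^2].
Q = cp.Variable((3, 3), PSD=True)
constraints = [
    Q[0, 0] == coeffs[0],
    2 * Q[0, 1] == coeffs[1],
    2 * Q[0, 2] + Q[1, 1] == coeffs[2],
    2 * Q[1, 2] == coeffs[3],
    Q[2, 2] == coeffs[4],
]
cp.Problem(cp.Minimize(0), constraints).solve()

# An eigendecomposition Q = sum_j lam_j q_j q_j^T recovers the squares:
# p(x) = sum_j lam_j * (q_j . z(x))^2, with all lam_j >= 0.
lam, V = np.linalg.eigh(Q.value)
print(np.round(lam, 4))
print(np.round(V, 4))
```

Here the decomposition certifies p(x) = (x^2 + x + 1)^2 ≥ 0; the talk concerns extending this kind of certificate beyond polynomials.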

TILOS Seminar: The Hidden Convex Optimization Landscape of Deep Neural Networks

Virtual

Mert Pilanci, Stanford University Abstract: Since deep neural network training problems are inherently non-convex, the recent dramatic success of these models largely relies on non-convex optimization heuristics and experimental findings. Despite significant advancements, the non-convex nature of neural network training poses two central challenges: first, understanding the underlying mechanisms that contribute to model performance, and second, achieving efficient […]
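
To give a flavor of what a "hidden convex landscape" can look like, the sketch below sets up one convex reformulation from this line of work for a two-layer ReLU network with squared loss, restricted to a randomly sampled subset of activation patterns (the sampling, the instance sizes, and the regularization strength are my assumptions for illustration; the formulation in the talk's underlying results may differ in its details).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, m = 50, 3, 20          # samples, features, sampled activation patterns
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
beta = 0.1                   # group-sparsity regularization strength (assumed)

# Sample a subset of ReLU activation patterns D_i = diag(1[X u_i >= 0]).
U = rng.standard_normal((d, m))
D = (X @ U >= 0).astype(float)   # n x m, each column is one pattern

V = cp.Variable((d, m))
W = cp.Variable((d, m))
pred = 0
constraints = []
for i in range(m):
    Di = D[:, i]
    pred = pred + cp.multiply(Di, X @ (V[:, i] - W[:, i]))
    # Keep each weight block consistent with its fixed activation pattern.
    constraints.append(cp.multiply(2 * Di - 1, X @ V[:, i]) >= 0)
    constraints.append(cp.multiply(2 * Di - 1, X @ W[:, i]) >= 0)

reg = sum(cp.norm(V[:, i]) + cp.norm(W[:, i]) for i in range(m))
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(pred - y) + beta * reg),
                  constraints)
prob.solve()
print("optimal objective:", prob.value)
```

The point of the reformulation is that, once activation patterns are fixed, the fitting problem becomes a convex group-regularized program that standard solvers handle globally.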

TILOS Seminar: Machine Learning from Weak, Noisy, and Biased Supervision

Virtual

Masashi Sugiyama, The University of Tokyo and RIKEN Abstract: In statistical inference and machine learning, we face a variety of uncertainties, such as training data with insufficient information, label noise, and bias. In this talk, I will give an overview of our research on reliable machine learning, including weakly supervised classification (positive-unlabeled classification, positive-confidence classification, […]
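
As a small taste of the weakly supervised setting mentioned in the abstract, the sketch below evaluates a non-negative positive-unlabeled risk estimator from this literature on synthetic data; the toy data, the sigmoid surrogate loss, and the assumed known class prior are my own illustrative choices, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid_loss(margin):
    """Sigmoid surrogate loss as a function of the margin y * g(x)."""
    return 1.0 / (1.0 + np.exp(margin))

# Toy data: positives from N(+1, 1); unlabeled data is a pi / (1 - pi) mixture.
pi = 0.4                                   # class prior, assumed known here
x_p = rng.normal(+1.0, 1.0, size=200)      # labeled positive examples
x_u = np.where(rng.uniform(size=1000) < pi,
               rng.normal(+1.0, 1.0, size=1000),
               rng.normal(-1.0, 1.0, size=1000))

def nn_pu_risk(score, x_p, x_u, pi):
    """Non-negative PU risk estimate for a scoring function `score`."""
    r_p_pos = sigmoid_loss(+score(x_p)).mean()   # positives labeled positive
    r_p_neg = sigmoid_loss(-score(x_p)).mean()   # positives labeled negative
    r_u_neg = sigmoid_loss(-score(x_u)).mean()   # unlabeled treated as negative
    return pi * r_p_pos + max(0.0, r_u_neg - pi * r_p_neg)

print(nn_pu_risk(lambda x: 2.0 * x, x_p, x_u, pi))
```

The clipping at zero prevents the estimated negative-class risk from going negative when the model overfits the labeled positives, which is the practical issue such estimators are designed to handle.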

TILOS-OPTML++ Seminar: Optimization, Robustness and Privacy in Deep Neural Networks: Insights from the Neural Tangent Kernel

Virtual

Marco Mondelli, Institute of Science and Technology Austria Abstract: A recent line of work has analyzed the properties of deep over-parameterized neural networks through the lens of the Neural Tangent Kernel (NTK). In this talk, I will show how concentration bounds on the NTK (and, specifically, on its smallest eigenvalue) provide insights on (i) the […]
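
The abstract mentions concentration bounds on the NTK and, in particular, its smallest eigenvalue. The sketch below is a toy illustration (architecture, sizes, and initialization are my own assumptions, not from the talk): it computes the empirical NTK of a two-layer ReLU network at initialization, with respect to the first-layer weights, and reports its minimum eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 40, 5, 2000        # samples, input dim, hidden width (toy sizes)

X = rng.standard_normal((n, d)) / np.sqrt(d)
W = rng.standard_normal((m, d))              # first-layer weights at init
a = rng.choice([-1.0, 1.0], size=m)          # fixed second-layer signs

# Two-layer net f(x) = (1/sqrt(m)) * sum_j a_j * relu(w_j . x).
# Jacobian of f with respect to the first-layer weights, one row per sample.
pre = X @ W.T                                                  # n x m preactivations
J = (a * (pre > 0))[:, :, None] * X[:, None, :] / np.sqrt(m)   # n x m x d
J = J.reshape(n, m * d)

# Empirical NTK (restricted to first-layer weights) and its smallest eigenvalue.
K = J @ J.T
print("lambda_min(NTK) =", np.linalg.eigvalsh(K)[0])
```

Rerunning with larger widths m shows the smallest eigenvalue stabilizing, which is the empirical counterpart of the concentration phenomena the talk builds on.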