BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231108T110000
DTEND;TZID=America/Los_Angeles:20231108T120000
DTSTAMP:20260405T135759Z
CREATED:20250828T203219Z
LAST-MODIFIED:20250828T203236Z
UID:7324-1699441200-1699444800@tilos.ai
SUMMARY:TILOS-OPTML++ Seminar: Optimization\, Robustness and Privacy in Deep Neural Networks: Insights from the Neural Tangent Kernel
DESCRIPTION:Marco Mondelli\, Institute of Science and Technology Austria \nAbstract: A recent line of work has analyzed the properties of deep over-parameterized neural networks through the lens of the Neural Tangent Kernel (NTK). In this talk\, I will show how concentration bounds on the NTK (and\, specifically\, on its smallest eigenvalue) provide insights into (i) the optimization of the network via gradient descent\, (ii) its adversarial robustness\, and (iii) its privacy guarantees. I will start by proving tight bounds on the smallest eigenvalue of the NTK for deep neural networks with minimum over-parameterization. This implies that the network optimized by gradient descent interpolates the training dataset (i.e.\, reaches 0 training loss)\, as soon as the number of parameters is information-theoretically optimal. Next\, I will focus on two properties of the interpolating solution: robustness and privacy. A thought-provoking paper by Bubeck and Sellke has proposed a “universal law of robustness”: smoothly interpolating the data necessarily requires many more parameters than simple memorization. By providing sharp bounds on random features (RF) and NTK models\, I will show that\, while the RF model is never robust (regardless of the over-parameterization)\, the NTK model saturates the universal law of robustness\, addressing a conjecture by Bubeck\, Li and Nagaraj. Finally\, I will study the safety of RF and NTK models against a family of powerful black-box information retrieval attacks: the proposed analysis shows that safety provably strengthens with an increase in generalization capability\, unveiling the role of the model and of its activation function. \n\nMarco Mondelli received the B.S. and M.S. degrees in Telecommunications Engineering from the University of Pisa\, Italy\, in 2010 and 2012\, respectively. In 2016\, he obtained his Ph.D. degree in Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL)\, Switzerland. He is currently an Assistant Professor at the Institute of Science and Technology Austria (ISTA). Prior to that\, he was a Postdoctoral Scholar in the Department of Electrical Engineering at Stanford University\, USA\, from February 2017 to August 2019. He was also a Research Fellow with the Simons Institute for the Theory of Computing\, UC Berkeley\, USA\, for the program on Foundations of Data Science from August to December 2018. His research interests include data science\, machine learning\, information theory\, and modern coding theory. He was the recipient of a number of fellowships and awards\, including the Jack K. Wolf ISIT Student Paper Award in 2015\, the STOC Best Paper Award in 2016\, the EPFL Doctorate Award in 2018\, the Simons-Berkeley Research Fellowship in 2018\, the Lopez-Loreta Prize in 2019\, and the Information Theory Society Best Paper Award in 2021.
URL:https://tilos.ai/event/optimization-robustness-and-privacy-in-deep-neural-networks-insights-from-the-neural-tangent-kernel/
LOCATION:Virtual
CATEGORIES:TILOS - OPTML++ Seminar Series,TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/08/mondelli-marco-scaled-e1711659727954-z3UC0d.jpg
END:VEVENT
END:VCALENDAR