BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251203T110000
DTEND;TZID=America/Los_Angeles:20251203T120000
DTSTAMP:20260404T044956Z
CREATED:20250924T154049Z
LAST-MODIFIED:20260227T215023Z
UID:7606-1764759600-1764763200@tilos.ai
SUMMARY:TILOS-SDSU Seminar: 95 Percent: Bridging the Gap Between Prototype and Product
DESCRIPTION:Jeremy Schwartz\, Zoox \nAbstract: When transitioning from the academic world to the professional world of engineering\, one of the most common pitfalls is failing to understand the difference between a compelling prototype and a successful product. This talk will focus on that distinction. We will discuss the differences between them\, and the work required to evolve a good prototype into a real product. We will also discuss some common pitfalls encountered in product development\, and some of the practical software design considerations to keep in mind for development of robust\, mature code. The talk will include examples from my background developing robotic systems for air\, space\, and ground. \n\nJeremy Schwartz is a robotics engineer at Zoox with expertise in a wide variety of areas of mechanical and electrical engineering and computer science. His primary professional expertise is in autonomy and behavioral algorithms\, and he has worked in the aerospace industry as well as ground robotics\, specializing in autonomous systems of all kinds.
URL:https://tilos.ai/event/tilos-sdsu-seminar-with-jeremy-schwartz-of-zoox/
LOCATION:Lamden Hall 341 (SDSU) and Virtual\, San Diego\, CA\, 92182\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/schwartz-jeremy-e1758728403382.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251203T130000
DTEND;TZID=America/Los_Angeles:20251203T140000
DTSTAMP:20260404T044956Z
CREATED:20250930T163903Z
LAST-MODIFIED:20260304T210653Z
UID:7627-1764766800-1764770400@tilos.ai
SUMMARY:Optimization for AI and ML Seminar: Training Neural Networks at Any Scale
DESCRIPTION:Volkan Cevher\, École Polytechnique Fédérale de Lausanne \nAbstract: At the heart of deep learning’s transformative impact lies the concept of scale–encompassing both data and computational resources\, as well as their interaction with neural network architectures. Scale\, however\, presents critical challenges\, such as increased instability during training and prohibitively expensive model-specific tuning. Given the substantial resources required to train such models\, formulating high-confidence scaling hypotheses backed by rigorous theoretical research has become paramount. \nTo bridge theory and practice\, the talk explores a key mathematical ingredient of scaling in tandem with scaling theory: the numerical solution algorithms commonly employed in deep learning\, spanning domains from vision to language models. We unify these algorithms under a common master template\, making their foundational principles transparent. In doing so\, we reveal the interplay between adaptation to smoothness structures via online learning and the exploitation of optimization geometry through non-Euclidean norms. Our exposition moves beyond simply building larger models–it emphasizes strategic scaling\, offering insights that promise to advance the field while economizing on resources. \n\nVolkan Cevher received the B.Sc. (valedictorian) in electrical engineering from Bilkent University in Ankara\, Turkey\, in 1999 and the Ph.D. in electrical and computer engineering from the Georgia Institute of Technology in Atlanta\, GA in 2005. He was a Research Scientist with the University of Maryland\, College Park from 2006-2007 and also with Rice University in Houston\, TX\, from 2008-2009. Currently\, he is an Associate Professor at the Swiss Federal Institute of Technology Lausanne and a Faculty Fellow in the Electrical and Computer Engineering Department at Rice University. His research interests include machine learning\, signal processing theory\, optimization theory and methods\, and information theory. Dr. Cevher is an ELLIS fellow and was the recipient of the Google Faculty Research award in 2018\, the IEEE Signal Processing Society Best Paper Award in 2016\, a Best Paper Award at CAMSAP in 2015\, a Best Paper Award at SPARS in 2009\, and an ERC CG in 2016 as well as an ERC StG in 2011.
URL:https://tilos.ai/event/optimization-for-ai-and-ml-seminar-with-volkan-cevher-epfl/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/09/cevher-volkan-e1759250260485.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251205T110000
DTEND;TZID=America/Los_Angeles:20251205T120000
DTSTAMP:20260404T044956Z
CREATED:20251014T194842Z
LAST-MODIFIED:20260304T210702Z
UID:7652-1764932400-1764936000@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Stochastic-Gradient and Diagonal-Scaling Algorithms for Constrained Optimization and Learning
DESCRIPTION:Frank E. Curtis\, Lehigh University \nAbstract: I will motivate and provide an overview of recent efforts in my research group on the design and analysis of stochastic-gradient-based algorithms for solving constrained optimization problems. I will focus in particular on our motivation for informed supervised learning\, where constraints in the training problem can be used to impose prior knowledge on the properties that should be possessed by a trained prediction model. In addition\, I will provide a detailed look at our newest extensions of heavy-ball and Adam schemes from the unconstrained to the equality-constrained setting\, for which we have shown state-of-the-art convergence guarantees. I will demonstrate the impressive practical performance of our methods using a few informed supervised learning problems. \n\nFrank E. Curtis is a Professor in the Department of Industrial and Systems Engineering at Lehigh University\, where he has been employed since 2009. He received a bachelor’s degree from the College of William and Mary in 2003 with a double major in Computer Science and Mathematics\, received a master’s degree in 2004 and Ph.D. degree in 2007 from the Department of Industrial Engineering and Management Science at Northwestern University\, and spent two years as a Postdoctoral Researcher in the Courant Institute of Mathematical Sciences at New York University from 2007 until 2009. His research focuses on the design\, analysis\, and implementation of numerical methods for solving large-scale nonlinear optimization problems. He received an Early Career Award from the Advanced Scientific Computing Research (ASCR) program of the U.S. Department of Energy (DoE)\, and has received funding from various programs of the U.S. National Science Foundation (NSF)\, including through a TRIPODS Phase I grant awarded to him and his collaborators at Lehigh\, Northwestern\, and Boston University. He has also received funding from the U.S. Office of Naval Research (ONR) and DoE’s Advanced Research Projects Agency-Energy (ARPA-E). He received\, along with Leon Bottou (Meta AI) and Jorge Nocedal (Northwestern)\, the 2021 SIAM/MOS Lagrange Prize in Continuous Optimization. He was awarded\, with James V. Burke (U. of Washington)\, Adrian Lewis (Cornell)\, and Michael Overton (NYU)\, the 2018 INFORMS Computing Society Prize. He and team members Daniel Molzahn (Georgia Tech)\, Andreas Waechter (Northwestern)\, Ermin Wei (Northwestern)\, and Elizabeth Wong (UC San Diego) were awarded second place in the ARPA-E Grid Optimization Competition in 2020. He currently serves as Area Editor for Continuous Optimization for Mathematics of Operations Research and serves as an Associate Editor for Mathematical Programming\, SIAM Journal on Optimization\, Operations Research\, IMA Journal of Numerical Analysis\, and Mathematical Programming Computation. He previously served as the Vice Chair for Nonlinear Programming for the INFORMS Optimization Society\, and is currently very active in professional societies and groups related to mathematical optimization\, including INFORMS\, the Mathematics Optimization Society\, and the SIAM Activity Group on Optimization.
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-frank-e-curtis-lehigh-university/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/curtis-frank-e1760471303881.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20251208T100000
DTEND;TZID=America/Los_Angeles:20251208T110000
DTSTAMP:20260404T044956Z
CREATED:20251021T125343Z
LAST-MODIFIED:20260227T214449Z
UID:7677-1765188000-1765191600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Incentivizing Emergent Behaviors for LLMs via Reinforcement Learning
DESCRIPTION:Yi Wu\, Tsinghua University \nAbstract: Reinforcement Learning (RL) has become a powerful post-training method for eliciting advanced behaviors in large language models (LLMs). This talk presents recent results showing how RL can incentivize the emergence of LLM capabilities across three domains: (1) multi-player deduction game\, Werewolf\, where RL-trained LLM agents develop strategic behaviors and outperform strong human players; (2) agentic search\, where large-scale RL enables a 32B model to run multi-step search to answer non-trivial questions beyond commercial baselines; and (3) efficient reasoning\, where RL mitigates over-thinking and improves both reliability and compute efficiency. \nThe papers can be found at \n\nWerewolf: https://arxiv.org/abs/2310.18940 (ICML24)\, https://arxiv.org/abs/2502.04686 (ICML25)\nASearcher: https://arxiv.org/abs/2508.07976\nThinking Efficiency: https://www.arxiv.org/abs/2506.07104 (NeurIPS25)\n\nAll the projects are trained using our large-scale agentic RL system\, AReaL\, which is open-source at https://github.com/inclusionAI/AReaL with its paper at https://arxiv.org/abs/2505.24298 (NeurIPS25). \n\nYi Wu is an assistant professor at the Institute for Interdisciplinary Information Sciences (IIIS)\, Tsinghua University. He obtained his Ph.D. from UC Berkeley and was a researcher at OpenAI from 2019 to 2020. His research focuses on reinforcement learning\, multi-agent learning\, and LLM agents. His representative works include the value iteration network\, the MADDPG/MAPPO algorithm\, OpenAI’s hide-and-seek project\, and the AReaL project. He received the best paper award at NIPS 2016\, the best demo award finalist at ICRA 2024\, and MIT TR35 Asia Pacific 2025 award.
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-yi-wu-tsinghua-university/
LOCATION:Qualcomm Conference Center Room B (Jacobs Hall first floor) and Virtual\, 9736 Engineers Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/wu-yi.jpg
END:VEVENT
END:VCALENDAR