BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260427
DTEND;VALUE=DATE:20260428
DTSTAMP:20260420T225054Z
CREATED:20260121T215144Z
LAST-MODIFIED:20260402T165134Z
UID:8026-1777248000-1777334399@tilos.ai
SUMMARY:ICLR 2026 Workshop: Principled Design for Trustworthy AI - Interpretability\, Robustness\, and Safety across Modalities
DESCRIPTION:Modern AI systems\, particularly large language models\, vision-language models\, and deep vision networks\, are increasingly deployed in high-stakes settings such as healthcare\, autonomous driving\, and legal decisions. Yet\, their lack of transparency\, fragility to distributional shifts between train/test environments\, and representation misalignment in emerging tasks and data/feature modalities raise serious concerns about their trustworthiness. \nThis workshop focuses on developing trustworthy AI systems by principled design: models that are interpretable\, robust\, and aligned across the full lifecycle – from training and evaluation to inference-time behavior and deployment. We aim to unify efforts across modalities (language\, vision\, audio\, and time series) and across technical areas of trustworthiness spanning interpretability\, robustness\, uncertainty\, and safety.
URL:https://tilos.ai/event/iclr-2026-workshop-principled-design-for-trustworthy-ai-interpretability-robustness-and-safety-across-modalities/
LOCATION:ICLR 2026\, Riocentro Convention and Event Center\, Rio de Janeiro\, Brazil
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/01/rio.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260429T110000
DTEND;TZID=America/Los_Angeles:20260429T120000
DTSTAMP:20260420T225054Z
CREATED:20260408T184918Z
LAST-MODIFIED:20260420T160320Z
UID:8260-1777460400-1777464000@tilos.ai
SUMMARY:TILOS-SDSU Seminar: A Modular Agentic AI Architecture for Commercially Scalable and Compliant Robotics
DESCRIPTION:Sahil Rajesh Dhayalkar\, Brain Corporation \nAbstract: Autonomous navigation in dynamic environments faces immense challenges. Traditional rigid\, rules-based systems often fail due to a lack of semantic understanding needed to adapt to continuous environmental shifts. Conversely\, emerging end-to-end Vision-Language-Action (VLA) models introduce a critical “black box” dilemma; they inherently lack the explicit application context\, deterministic guardrails\, and data efficiency required for rigorous enterprise safety and compliance (e.g.\, SOC2). To address this\, Brain Corp\, in collaboration with UCSD\, proposes a robust hybrid architecture underpinning the BrainOS platform. In this framework\, visual inputs (via VLMs) and task commands (via LLMs) feed directly into a distinct Perception block anchored by a Contextual Grounding Layer with Semantic Mapping. This rich\, grounded perception then informs a hybrid Action block\, where the reasoning capabilities of VLA models operate safely alongside proven deterministic controls such as deep learning\, reinforcement learning\, model predictive control\, etc. Crucially\, an underlying Directed Safety Layer and strict Enterprise Infrastructure wrap this entire process. By isolating adaptable AI reasoning from hard-coded physical controls\, this architecture provides a framework designed to securely manage the unpredictable realities of varied environments. Ultimately\, this approach addresses the compliance bottleneck\, laying the foundation to scale safely across diverse commercial applications and power the continuous\, real-world data engine necessary to fuel next-generation physical AI. \n\nSahil Rajesh Dhayalkar is a Staff Autonomy Engineer and Perception Team Lead at Brain Corporation. He specializes in architecting real-time perception pipelines across LiDAR\, RGB\, and depth sensors\, with his work currently deployed on production robots in dynamic commercial environments. 
 During his tenure\, he has pioneered the real-time computer vision pipeline for on-robot object detection at the edge\, spearheaded “Localize From Anywhere\,” a global localization system utilizing Vision-Language Models and RGB images\, and auto-calibration\, a targetless calibration of ranging sensors on robots. He holds a Master’s degree in Computer Science from Arizona State University. His research interests include robotic perception\, large language models\, deep learning\, neuro-symbolic reasoning\, and optimizations. \nZoom: https://SDSU.zoom.us/j/85839493408
URL:https://tilos.ai/event/tilos-sdsu-seminar-a-modular-agenticai-architecture-for-commercially-scalable-and-compliant-robotics/
LOCATION:TBA
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/04/dhayalkar-sahil-e1775674061221.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260506T110000
DTEND;TZID=America/Los_Angeles:20260506T120000
DTSTAMP:20260420T225054Z
CREATED:20251013T161935Z
LAST-MODIFIED:20251014T195232Z
UID:7644-1778065200-1778068800@tilos.ai
SUMMARY:TILOS-HDSI Seminar with Ellen Vitercik (Stanford)
DESCRIPTION:Title and abstract TBA… \n\nEllen Vitercik is an Assistant Professor at Stanford with a joint appointment between the Management Science and Engineering department and the Computer Science department. Her research interests include machine learning\, algorithm design\, discrete and combinatorial optimization\, and the interface between economics and computation. Before joining Stanford\, Dr. Vitercik was a Miller fellow at UC Berkeley\, hosted by Michael Jordan and Jennifer Chayes. She received a PhD in Computer Science from Carnegie Mellon University\, advised by Nina Balcan and Tuomas Sandholm. Dr. Vitercik has been recognized by a Schmidt Sciences AI2050 Early Career Fellowship and an NSF CAREER award. Her thesis won the SIGecom Doctoral Dissertation Award\, the CMU School of Computer Science Distinguished Dissertation Award\, and the Honorable Mention Victor Lesser Distinguished Dissertation Award. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-ellen-vitercik-stanford/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/vitericik-ellen-e1760372346890.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260508T100000
DTEND;TZID=America/Los_Angeles:20260508T110000
DTSTAMP:20260420T225054Z
CREATED:20260408T183052Z
LAST-MODIFIED:20260408T183052Z
UID:8257-1778234400-1778238000@tilos.ai
SUMMARY:Optimization for ML and AI Seminar: Fantastic Pretraining Optimizers and Where to Find Them
DESCRIPTION:Tengyu Ma\, Stanford \nAbstract: AdamW has long been the dominant optimizer in language model pretraining\, despite numerous claims that alternative optimizers offer 1.4 to 2x speedup. We posit that two methodological shortcomings have obscured fair comparisons and hindered practical adoption: (i) unequal hyperparameter tuning and (ii) limited or misleading evaluation setups. To address these two issues\, we conduct a systematic study of ten deep learning optimizers across four model scales (0.1B-1.2B parameters) and data-to-model ratios (1-8x the Chinchilla optimum). We find that fair and informative comparisons require rigorous hyperparameter tuning and evaluations across a range of model scales and data-to-model ratios\, performed at the end of training. First\, optimal hyperparameters for one optimizer may be suboptimal for another\, making blind hyperparameter transfer unfair. Second\, the actual speedup of many proposed optimizers over well-tuned baselines is lower than claimed and decreases with model size to only 1.1x for 1.2B parameter models. Third\, comparing intermediate checkpoints before reaching the target training budgets can be misleading\, as rankings between two optimizers can flip during training due to learning rate decay. Through our thorough investigation\, we find that all the fastest optimizers\, such as Muon and Soap\, use matrices as preconditioners—multiplying gradients with matrices rather than entry-wise scalars. However\, the speedup of matrix-based optimizers is inversely proportional to model scale\, decreasing from 1.4x over AdamW for 0.1B parameter models to merely 1.1x for 1.2B parameter models. \n\nTengyu Ma is an assistant professor of computer science at Stanford University. 
 His research interests broadly include topics in machine learning\, algorithms and their theory\, such as deep learning\, (deep) reinforcement learning\, pre-training / foundation models\, robustness\, non-convex optimization\, distributed optimization\, and high-dimensional statistics. \nZoom: https://bit.ly/Opt-AI-ML
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-fantastic-pretraining-optimizers-and-where-to-find-them/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/ma-tengyu-e1760473083457.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260513T110000
DTEND;TZID=America/Los_Angeles:20260513T120000
DTSTAMP:20260420T225054Z
CREATED:20260223T175317Z
LAST-MODIFIED:20260310T183326Z
UID:8092-1778670000-1778673600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: ComPO: Preference Alignment via Comparison Oracles
DESCRIPTION:Tianyi Lin\, Columbia University \nDirect alignment methods are increasingly used for aligning large language models (LLMs) with human preferences. However\, these methods suffer from likelihood displacement\, which can be driven by noisy preference pairs that induce similar likelihood for preferred and dis-preferred responses. To address this issue\, we consider derivative-free optimization based on comparison oracles. First\, we propose a new preference alignment method via comparison oracles and provide convergence guarantees for its basic mechanism. Second\, we improve our method with several heuristics and conduct experiments to demonstrate the flexibility and compatibility of practical mechanisms in improving the performance of LLMs using noisy preference pairs. Evaluations are conducted across multiple base and instruction-tuned models with different benchmarks. Experimental results show the effectiveness of our method as an alternative that addresses the limitations of existing methods. A highlight of our work is that we demonstrate the importance of designing specialized methods for preference pairs with distinct likelihood margins. \n\nTianyi Lin is an assistant professor in the Department of Industrial Engineering and Operations Research (IEOR) at Columbia University. His research interests lie in generative artificial intelligence\, optimization for machine learning\, game theory\, social and economic networks\, and optimal transport. He obtained his Ph.D. in Electrical Engineering and Computer Science at UC Berkeley\, where he was advised by Professor Michael Jordan and was associated with the Berkeley Artificial Intelligence Research (BAIR) group. From 2023 to 2024\, he was a postdoctoral researcher at the Laboratory for Information & Decision Systems (LIDS) at Massachusetts Institute of Technology\, working with Professor Asuman Ozdaglar. Prior to that\, he received a B.S. in Mathematics from Nanjing University\, an M.S. 
 in Pure Mathematics and Statistics from the University of Cambridge and an M.S. in Operations Research from UC Berkeley. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-compo-preference-alignment-via-comparison-oracles/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/02/lin-tianyi-e1771869179855.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260515T100000
DTEND;TZID=America/Los_Angeles:20260515T110000
DTSTAMP:20260420T225054Z
CREATED:20260413T175443Z
LAST-MODIFIED:20260413T175503Z
UID:8269-1778839200-1778842800@tilos.ai
SUMMARY:Optimization for ML and AI Seminar with Nigel Goldenfeld (UC San Diego)
DESCRIPTION:Nigel Goldenfeld\, UC San Diego \nAbstract: TBA \n\nNigel Goldenfeld holds the Chancellor’s Distinguished Professorship in Physics at UC San Diego\, which he joined in Fall 2021 after 36 years at the University of Illinois at Urbana-Champaign (UIUC). Nigel’s research spans condensed matter theory\, the theory of living systems\, hydrodynamics and non-equilibrium statistical physics.  \nNigel received his PhD in theoretical physics from the University of Cambridge (UK) in 1982\, and from 1982-1985 was a postdoctoral fellow at the Institute for Theoretical Physics at UC Santa Barbara\, where his work on the dynamics of snowflake growth helped launch the modern theory of pattern formation in nature. He joined the condensed matter theory group at the Department of Physics at UIUC in 1985\, where his work was instrumental to the discovery of d-wave pairing in high temperature superconductors. Nigel’s interests in biology include microbial ecology\, evolution and systems biology. He was a founding member of the Institute for Genomic Biology at UIUC\, where he led the Biocomplexity Group and directed the NASA Astrobiology Institute for Universal Biology. During the COVID-19 pandemic\, he pivoted from his experience in mathematical modeling of bacteria and viruses to computational epidemiology\, advising the Governor of Illinois\, and helping devise\, set up and run the COVID saliva testing system at UIUC\, which provided ~12 hour turnaround of PCR tests to the 50\,000 people in the campus community and eventually to over 1700 schools and other institutions in Illinois and beyond. Nigel has served on the editorial boards of several journals\, including The Philosophical Transactions of the Royal Society\, Physical Biology and the International Journal of Theoretical and Applied Finance. Selected honors include: Alfred P. Sloan Foundation Fellow\, University Scholar of the University of Illinois\, the Xerox Award for research\, the A. 
 Nordsieck award for excellence in graduate teaching and the American Physical Society’s Leo P. Kadanoff Prize 2020. Nigel is a Fellow of the American Physical Society\, a Fellow of the American Academy of Arts and Sciences\, a Fellow of the Royal Society (UK) and a Member of the US National Academy of Sciences. \nZoom: https://bit.ly/Opt-AI-ML
URL:https://tilos.ai/event/optimization-for-ml-and-ai-seminar-with-nigel-goldenfeld-uc-san-diego/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series,TILOS Sponsored Event
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/04/goldenfeld-nigel-e1776102861254.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260520T110000
DTEND;TZID=America/Los_Angeles:20260520T120000
DTSTAMP:20260420T225054Z
CREATED:20260227T004426Z
LAST-MODIFIED:20260227T004426Z
UID:8112-1779274800-1779278400@tilos.ai
SUMMARY:TILOS-HDSI Seminar with Andrej Risteski (Carnegie Mellon)
DESCRIPTION:Title and abstract TBA… \n\nAndrej Risteski is an Associate Professor at the Machine Learning Department in Carnegie Mellon University. Prior to that\, he was a Norbert Wiener Research Fellow jointly in the Applied Math department and IDSS at MIT. Dr. Risteski received his PhD in the Computer Science Department at Princeton University under the advisement of Sanjeev Arora. \nDr. Risteski’s research interests lie in the intersection of machine learning\, statistics\, and theoretical computer science\, spanning topics like (probabilistic) generative models\, algorithmic tools for learning and inference\, representation and self-supervised learning\, out-of-distribution generalization and applications of neural approaches to natural language processing and scientific domains. The broad goal of his research is principled and mathematical understanding of statistical and algorithmic problems arising in modern machine learning paradigms. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-with-andrej-risteski-carnegie-mellon/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/png:https://tilos.ai/wp-content/uploads/2026/02/risteski-andrej-e1772152946152.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260603
DTEND;VALUE=DATE:20260604
DTSTAMP:20260420T225054Z
CREATED:20260224T210719Z
LAST-MODIFIED:20260324T200137Z
UID:8102-1780444800-1780531199@tilos.ai
SUMMARY:CVPR 2026 Workshop: Trustworthy\, Robust\, Uncertainty-Aware\, and Explainable Visual Intelligence and Beyond (TRUE-V)
DESCRIPTION:Contemporary vision models and vision–language models are increasingly deployed in high-stakes domains\, yet remain opaque\, fragile\, and difficult to align across tasks and modalities. This workshop aims to foster dialogue on the urgent need for transparent\, reliable\, and safe computer vision systems\, especially in critical domains such as healthcare\, transportation\, and legal decision making. It brings together research on interpretability\, robustness\, uncertainty\, and alignment under a unified design paradigm\, encouraging cross-disciplinary exchange on shared technical and societal challenges. By promoting responsible design and deployment\, the workshop seeks to advance forward-looking solutions for visual intelligence that enhance accountability and public trust.
URL:https://tilos.ai/event/cvpr-2026-workshop/
LOCATION:IEEE/CVF Conference on Computer Vision and Pattern Recognition\, Denver\, CO\, United States
CATEGORIES:TILOS Sponsored Event,Workshop
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2026/02/CVPR_Denver_2026.jpg
END:VEVENT
END:VCALENDAR