BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://tilos.ai
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260408T110000
DTEND;TZID=America/Los_Angeles:20260408T120000
DTSTAMP:20260403T233852Z
CREATED:20251008T180712Z
LAST-MODIFIED:20260330T151101Z
UID:7641-1775646000-1775649600@tilos.ai
SUMMARY:TILOS-HDSI Seminar: Engineering Interpretable and Faithful AI Systems
DESCRIPTION:René Vidal\, University of Pennsylvania \nAbstract: Large Language Models (LLMs) and Vision Language Models (VLMs) have achieved remarkable performance across a wide range of tasks. However\, their growing deployment has exposed fundamental limitations in faithfulness\, safety\, and transparency. In this talk\, I will present a unified perspective on addressing these challenges through principled model interventions and interpretable decision-making frameworks. I first introduce Information Pursuit (IP)\, an interpretable-by-design prediction framework that replaces opaque reasoning with a sequence of informative\, user-interpretable queries\, yielding concise explanations alongside accurate predictions. I then present Parsimonious Concept Engineering (PaCE)\, an approach that improves faithfulness and alignment by selectively removing undesirable internal activations\, mitigating hallucinations and biased language while preserving linguistic competence. Results across text\, vision\, and medical tasks illustrate how these ideas advance transparency without sacrificing performance. Together\, these contributions point toward a broader direction for building AI systems that are powerful\, faithful\, and aligned with human values. \n\nRené Vidal is the Penn Integrates Knowledge and Rachleff University Professor of Electrical and Systems Engineering and Radiology at the University of Pennsylvania\, where he directs the Center for Innovation in Data Engineering and Science (IDEAS) and serves as Co-Chair of Penn AI. He is also an Amazon Scholar\, Affiliated Chief Scientist at NORCE\, and former Associate Editor-in-Chief of IEEE Transactions on Pattern Analysis and Machine Intelligence. Professor Vidal’s research advances the mathematical foundations of deep learning and trustworthy AI\, with broad impact across computer vision and biomedical data science. His contributions have been recognized with major honors\, including the IEEE Edward J. McCluskey Technical Achievement Award\, the D’Alembert Faculty Award\, the J.K. Aggarwal Prize\, the ONR Young Investigator Award\, the NSF CAREER Award\, and best paper awards in machine learning\, computer vision\, signal processing\, control\, and medical robotics. He is a Fellow of ACM\, AIMBE\, IEEE\, and IAPR\, and a Sloan Fellow. \nZoom: https://bit.ly/TILOS-Seminars
URL:https://tilos.ai/event/tilos-hdsi-seminar-engineering-interpretable-and-faithful-ai-systems/
LOCATION:HDSI 123 and Virtual\, 3234 Matthews Ln\, La Jolla\, CA\, 92093\, United States
CATEGORIES:TILOS Seminar Series
ATTACH;FMTTYPE=image/jpeg:https://tilos.ai/wp-content/uploads/2025/10/rene-vidal-e1759946821354.jpg
END:VEVENT
END:VCALENDAR