Recorded Talks: Workshops and Tutorials
Tutorial on AI Alignment (part 2 of 2): Methodologies for AI Alignment
Ahmad Beirami, Google DeepMind
Hamed Hassani, University of Pennsylvania
The second part of the tutorial focuses on AI alignment techniques and is structured in three segments. In the first segment, we examine black-box techniques aimed at aligning models toward various goals (e.g., safety), such as controlled decoding and the best-of-N algorithm. In the second segment, we add efficiency to the picture, examining information-theoretic techniques designed to improve inference latency, such as model compression and speculative decoding. If time permits, the final segment discusses inference-aware alignment, a framework for aligning models so that they work better with inference-time compute algorithms.
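As a concrete illustration of the black-box techniques in the first segment, the sketch below shows best-of-N sampling in Python. The generate and reward callables are hypothetical placeholders (a base-model sampler and a scorer for the alignment goal, e.g. a safety reward model), not part of any specific library covered in the tutorial.

    from typing import Callable

    def best_of_n(
        prompt: str,
        generate: Callable[[str], str],  # samples one response from the base model
        reward: Callable[[str], float],  # scores a response for the alignment goal
        n: int = 8,
    ) -> str:
        """Draw n candidate responses and keep the highest-reward one."""
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=reward)

Larger n buys alignment quality at the cost of extra inference compute, which is exactly the trade-off taken up by the efficiency and inference-aware segments.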
Tutorial on AI Alignment (part 1 of 2): Safety Vulnerabilities of Current Frontier Models
Ahmad Beirami, Google DeepMind
Hamed Hassani, University of Pennsylvania
In recent years, large language models have been used to solve a multitude of natural language tasks. In the first part of the tutorial, we start with a brief overview of the history of language modeling and the fundamental techniques that led to the development of the modern language models behind Claude, Gemini, GPT, and Llama. We then dive into the safety failure modes of current frontier models. Specifically, we explain that, despite efforts to align large language models (LLMs) with human intentions, popular LLMs remain susceptible to jailbreaking attacks, wherein an adversary fools a targeted LLM into generating objectionable content. We review the current state of the jailbreaking literature, including new questions about robust generalization, discussions of open-box and black-box attacks on LLMs, defenses against jailbreaking attacks, and a new leaderboard for evaluating the robust generalization of production LLMs.
The first session focuses mostly on safety vulnerabilities of frontier LLMs. The second session focuses on current methodologies that aim to mitigate these vulnerabilities and, more generally, to align language models with human standards.
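As a rough sketch of the black-box threat model covered in the first session, the Python snippet below measures an attack success rate over a suite of adversarial prompts. The query_model and is_objectionable callables are hypothetical placeholders (black-box access to the target LLM, and a judge of whether a response complies with the harmful request); they are not the API of any particular leaderboard or benchmark.

    from typing import Callable, Iterable

    def attack_success_rate(
        adversarial_prompts: Iterable[str],
        query_model: Callable[[str], str],            # black-box access to the target LLM
        is_objectionable: Callable[[str, str], bool], # judge: did (prompt, response) elicit harm?
    ) -> float:
        """Fraction of adversarial prompts that elicit objectionable output."""
        prompts = list(adversarial_prompts)
        hits = sum(is_objectionable(p, query_model(p)) for p in prompts)
        return hits / len(prompts)

A defense is then judged by how far it drives this rate down without degrading the model's responses to benign prompts.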
IEEE Seasonal School: Manufacturability, Testing, Reliability, and Security
00:00:00 - Introduction
00:01:51 - "Machine Learning for DFM", Bei Yu, Associate Professor, Chinese University of Hong Kong
00:59:54 - "ML for Testing and Yield", Li-C. Wang, Professor, UC Santa Barbara
01:59:00 - "ML for Cross-Layer Reliability and Security", Muhammad Shafique, Professor of Computer Engineering, NYU Abu Dhabi
IEEE Seasonal School: Standard Platforms for ML in EDA and IC Design
00:00:00 - Introduction
00:02:45 - "Exchanging EDA data for AI/ML using Standard API", Kerim Kalafala, Senior Technical Staff Member, IBM (co-chair, AI/ML for EDA Special Interest Group, Si2); Richard Taggart, Senior Software Engineering Manager, IBM; and Akhilesh Kumar, Principal R&D Engineer, Ansys
01:29:30 - "IEEE CEDA DATC RDF and METRICS2.1: Toward a Standard Platform for ML-Enabled EDA and IC Design", Jinwook Jung, Research Staff Member, IBM Research
IEEE Seasonal School: Applications / Future Frontiers
00:00:00 - Introduction
00:01:23 - "Automating Analog Layout: Why This Time is Different", Sachin Sapatnekar, Professor, University of Minnesota
00:57:20 - "Machine Learning-Powered Tools and Methodologies for 3D Integration", Sung Kyu Lim, Professor, Georgia Institute of Technology
01:58:18 - "ML for Verification", Shobha Vasudevan, Researcher at Google and Adjunct Professor at UIUC
IEEE Seasonal School: Deep / Reinforcement Learning
00:00:00 - Introduction
00:02:30 - "Machine Learning for EDA Optimization", Mark Ren, Senior Manager, NVIDIA Research
01:02:30 - "Learning to Optimize", Ismail Bustany, Fellow, AMD
02:01:50 - "Circuit Training: An open-source framework for generating chip floor plans with distributed deep reinforcement learning", Joe Jiang, Staff Software Engineer and Manager, Google Brain