Recorded Talks: TILOS Seminar Series
The Synergy between Machine Learning and the Natural Sciences
Max Welling, Research Chair in Machine Learning, University of Amsterdam
Traditionally, machine learning has been heavily influenced by neuroscience (hence the name "artificial neural networks") and physics (e.g., MCMC, belief propagation, and diffusion-based generative AI). We have recently witnessed that the flow of information has also reversed, with new tools developed in the ML community impacting physics, chemistry, and biology. Examples include faster DFT, force-field-accelerated MD simulations, neural PDE surrogate models, generation of drug-like molecules, and many more. In this talk, I will review the exciting opportunities for further cross-fertilization between these fields, ranging from faster (classical) DFT calculations and enhanced transition path sampling to traveling waves in artificial neural networks.
Prof. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a Distinguished Scientist at MSR. He is a fellow at the Canadian Institute for Advanced Research (CIFAR) and the European Lab for Learning and Intelligent Systems (ELLIS), where he also serves on the founding board. His previous appointments include VP at Qualcomm Technologies, professor at UC Irvine, postdoc at the University of Toronto and UCL under the supervision of Prof. Geoffrey Hinton, and postdoc at Caltech under the supervision of Prof. Pietro Perona. He finished his PhD in theoretical high-energy physics under the supervision of Nobel laureate Prof. Gerard ‘t Hooft.
TILOS Seminar: The Dissimilarity Dimension: Sharper Bounds for Optimistic Algorithms
Aldo Pacchiano, Assistant Professor, Boston University Center for Computing and Data Sciences
The principle of Optimism in the Face of Uncertainty (OFU) is one of the foundational algorithmic design choices in Reinforcement Learning and Bandits. Optimistic algorithms balance exploration and exploitation by deploying data collection strategies that maximize expected rewards in plausible models. This is the basis of celebrated algorithms like the Upper Confidence Bound (UCB) for multi-armed bandits. For nearly a decade, the analysis of optimistic algorithms, including Optimistic Least Squares, in the context of rich reward function classes has relied on the concept of eluder dimension, introduced by Russo and Van Roy in 2013. In this talk, we shed light on the limitations of the eluder dimension in capturing the true behavior of optimistic strategies in the realm of function approximation. We remedy these limitations by introducing a novel statistical measure, the “dissimilarity dimension”. We show it can be used to provide a sharper sample analysis of algorithms like Optimistic Least Squares by establishing a link between regret and the dissimilarity dimension. To illustrate this, we will show that some function classes have arbitrarily large eluder dimension but constant dissimilarity dimension. Our regret analysis draws inspiration from graph theory and may be of interest to the mathematically minded beyond the field of statistical learning theory. This talk sheds new light on the fundamental principle of optimism and its algorithms in the function approximation regime, advancing our understanding of these concepts.
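To make the optimism principle concrete, here is a minimal sketch of the classic UCB1 strategy for stochastic multi-armed bandits; the Gaussian reward model, arm means, and horizon below are illustrative assumptions, not details from the talk.

    import numpy as np

    def ucb1(reward_fn, n_arms, horizon, rng):
        """Minimal UCB1: play each arm once, then pick the arm with the
        highest empirical mean plus an optimism bonus."""
        counts = np.zeros(n_arms)
        means = np.zeros(n_arms)
        for t in range(horizon):
            if t < n_arms:
                arm = t  # initialization: try every arm once
            else:
                bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
                arm = int(np.argmax(means + bonus))  # optimism in the face of uncertainty
            r = reward_fn(arm, rng)
            counts[arm] += 1
            means[arm] += (r - means[arm]) / counts[arm]  # running mean update
        return means, counts

    # Illustrative example: three Gaussian arms with unknown means.
    rng = np.random.default_rng(0)
    true_means = [0.1, 0.5, 0.4]
    reward = lambda a, rng: rng.normal(true_means[a], 1.0)
    means, counts = ucb1(reward, n_arms=3, horizon=5000, rng=rng)
    print(counts)  # the best arm (index 1) should dominate the pull counts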
TILOS Seminar: Building Personalized Decision Models with Federated Human Preferences
Aadirupa Saha, Research Scientist at Apple
Customer statistics collected in several real-world systems show that users often prefer expressing their liking for a given pair of items, say (A, B), through relative queries such as “Do you prefer item A over B?” rather than absolute ones such as “How would you score items A and B on a scale of 0-10?”. This observation, and the search for a more effective feedback-collection mechanism, led to the well-known formulation of Dueling Bandits (DB), a widely studied online learning framework for efficient information aggregation from relative/comparative feedback. Despite this appealing objective, however, most existing DB techniques are limited to the simpler settings of finite decision spaces and stochastic environments, which are unrealistic in practice.
In this talk, we will start with the basic problem formulations for DB and familiarize ourselves with some of the breakthrough results. Following this, we will dive deeper into the more practical framework of contextual dueling bandits (C-DB), where the goal of the learner is to make personalized predictions based on user contexts. We will see a new algorithmic approach that efficiently achieves the optimal O(√T) regret for this problem, resolving an open problem from Dudík et al. [COLT, 2015]. In the last part of the talk, we will extend these models to a federated framework, which entails developing preference-driven prediction models for distributed environments to create large-scale personalized systems, including recommender systems and chatbot interactions. Beyond exploiting the limited preference-feedback model, the challenge lies in ensuring user privacy and reducing communication complexity in the federated setting. We will conclude the talk with some interesting open problems.
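As background for the DB feedback model, here is a minimal, RUCB-flavored sketch of learning from purely relative feedback in the finite, stochastic setting (exactly the simpler regime the talk moves beyond). The preference matrix and the champion/challenger heuristic are illustrative assumptions, not the talk's algorithm.

    import numpy as np

    # Preference matrix: P[i, j] = probability that item i wins a duel against j.
    # Values are made up for illustration; item 2 is the Condorcet winner.
    P = np.array([[0.5, 0.4, 0.3],
                  [0.6, 0.5, 0.4],
                  [0.7, 0.6, 0.5]])

    rng = np.random.default_rng(1)
    K = P.shape[0]
    wins = np.ones((K, K))       # optimistic prior: one pseudo-win per pair
    plays = 2 * np.ones((K, K))  # ...out of two pseudo-duels

    for t in range(1, 5001):
        ucb = wins / plays + np.sqrt(np.log(t) / plays)  # optimistic estimates
        # Champion: the item that optimistically beats the most rivals.
        champ = int(np.argmax((ucb >= 0.5).sum(axis=1)))
        # Challenger: the rival most likely (optimistically) to beat the champion.
        chal = int(np.argmax(np.where(np.arange(K) == champ, -np.inf, ucb[:, champ])))
        champ_wins = rng.random() < P[champ, chal]       # relative feedback: one duel
        wins[champ, chal] += champ_wins
        wins[chal, champ] += 1 - champ_wins
        plays[champ, chal] += 1
        plays[chal, champ] += 1

    print(np.round(wins / plays, 2))  # empirical preferences concentrate around P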
Towards Foundation Models for Graph Reasoning and AI 4 Science
Michael Galkin, Research Scientist at Intel AI Lab
Foundation models in graph learning are hard to design due to the lack of common invariances that transfer across different structures and domains. In this talk, I will give an overview of the two main tracks of my research at Intel AI: creating foundation models for knowledge graph reasoning that can run zero-shot inference on any multi-relational graphs, and foundation models for materials discovery in the AI4Science domain that capture physical properties of crystal structures and transfer to a variety of predictive and generative tasks. We will also talk about theoretical and practical challenges like scaling behavior, data scarcity, and diverse evaluation of foundation graph models.
Michael Galkin is a Research Scientist at Intel AI Lab in San Diego working on Graph Machine Learning and Geometric Deep Learning. Previously, he was a postdoc at Mila–Quebec AI Institute with Will Hamilton, Reihaneh Rabbany, and Jian Tang, focusing on many graph representation learning problems. Sometimes, Mike writes long blog posts on Medium about graph learning.
TILOS Seminar: Learning from Diverse and Small Data
Ramya Korlakai Vinayak, Assistant Professor, University of Wisconsin at Madison
How can we reliably learn about a diverse population when the data available per individual is small? In this talk, we will address this question in the following settings:
(i) In many applications, we observe count data that can be modeled as Binomial (e.g., polling, surveys, epidemiology) or Poisson (e.g., single-cell RNA data). Since a single parameter, or finitely many, cannot capture the diversity of the population in such datasets, they are often modeled as nonparametric mixtures. In this setting, we will address the question, “How well can we learn the distribution of parameters over the population without learning the individual parameters?” and show that nonparametric maximum likelihood estimators are in fact minimax optimal (see the sketch after this list).
(ii) Learning preferences from human judgments using comparison queries plays a crucial role in cognitive and behavioral psychology, crowdsourced democracy, social-science surveys, and recommendation systems. Models in the literature often focus on learning the average preference over the population, due to limits on the amount of data available per individual. We will discuss some recent results on how we can reliably capture diversity in preferences while pooling together data from individuals.
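To make setting (i) concrete, here is a minimal sketch of nonparametric maximum likelihood for a Binomial mixture, computed by EM over a fixed grid of parameter values. The bimodal population, grid resolution, and iteration count are illustrative assumptions; the talk concerns the statistical optimality of such estimators, not this particular implementation.

    import numpy as np
    from scipy.stats import binom

    # Simulate count data: each of m individuals answers n questions, with
    # ability p_i drawn from a bimodal population distribution (illustrative).
    rng = np.random.default_rng(0)
    m, n = 2000, 10
    true_p = np.where(rng.random(m) < 0.5, 0.2, 0.7)
    x = rng.binomial(n, true_p)

    # NPMLE over a fixed grid: maximize the mixture likelihood in the
    # weights w via EM (convex in w, so EM finds the optimum on this grid).
    grid = np.linspace(0.01, 0.99, 50)
    L = binom.pmf(x[:, None], n, grid[None, :])  # m x grid likelihood matrix
    w = np.full(len(grid), 1.0 / len(grid))
    for _ in range(500):
        post = L * w                        # E-step: responsibilities
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)               # M-step: update mixing weights

    # The estimated mixing distribution should put most mass near 0.2 and 0.7,
    # recovering the population distribution without estimating each p_i.
    for g, wi in zip(grid, w):
        if wi > 0.05:
            print(f"p ~= {g:.2f}  weight {wi:.2f}")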
Ramya Korlakai Vinayak is an assistant professor in the Dept. of ECE and affiliated faculty in the Dept. of Computer Science and the Dept. of Statistics at UW-Madison. Her research interests span machine learning, statistical inference, and crowdsourcing. Her work focuses on addressing theoretical and practical challenges that arise when learning from societal data. Prior to joining UW-Madison, Ramya was a postdoctoral researcher in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. She received her Ph.D. in Electrical Engineering from Caltech, her Master's from Caltech, and her Bachelor's from IIT Madras. She is a recipient of the Schlumberger Foundation Faculty for the Future fellowship (2013-15) and was an invited participant at the Rising Stars in EECS workshop in 2019. She is the recipient of an NSF CAREER Award (2023-2028).
TILOS Seminar: Machine Learning Training Strategies Inspired by Humans' Learning Skills
Pengtao Xie, Assistant Professor, UC San Diego
Humans, as the most powerful learners on the planet, have accumulated many learning skills, such as learning through tests, interleaved learning, self-explanation, and active recall, to name a few. These skills and methodologies enable humans to learn new topics more effectively and efficiently. We are interested in investigating whether humans' learning skills can be borrowed to help machines learn better. Specifically, we aim to formalize these skills and leverage them to train better machine learning (ML) models. To achieve this goal, we developed a general framework, Skillearn, which provides a principled way to represent humans' learning skills mathematically and to use the formally represented skills to improve the training of ML models. In two case studies, we apply Skillearn to formalize two human learning skills, learning by passing tests and interleaved learning, and use the formalized skills to improve neural architecture search.
Pengtao Xie is an assistant professor at UC San Diego. He received his PhD from the Machine Learning Department at Carnegie Mellon University in 2018. His research interests lie in machine learning inspired by human learning and its applications in healthcare. His research outcomes have been adopted by medical device companies, medical imaging centers, and hospitals, and have been published at top-tier artificial intelligence conferences and journals including ICML, NeurIPS, ACL, ICCV, and TACL. He is the recipient of the Tencent AI Lab Faculty Award, the Tencent WeChat Faculty Award, the Innovator Award presented by the Pittsburgh Business Times, the Siebel Scholars award, and the Goldman Sachs Global Leader Scholarship.
TILOS Seminar: Engineering the Future of Software with AI
Dr. Ruchir Puri, Chief Scientist, IBM Research, IBM Fellow, Vice-President IBM Corporate Technology
Software has become woven into every aspect of our society, and it is fair to say that “software has eaten the world.” More recently, advances in AI have started to transform every aspect of our society as well. These two tectonic forces of transformation, “software” and “AI,” are colliding, resulting in a seismic shift: a future where software itself will be built, maintained, and operated by AI, pushing us toward a world where “computers can program themselves!” In this talk, we will discuss these forces of “AI for Code” and how the future of software engineering is being redefined by AI.
Dr. Ruchir Puri is the Chief Scientist of IBM Research, an IBM Fellow, and Vice President of IBM Corporate Technology. He led IBM Watson as its CTO and Chief Architect from 2016 to 2019 and has held various technical, research, and engineering leadership roles across IBM's AI and Research businesses. Dr. Puri is a Fellow of the IEEE, has been an ACM Distinguished Speaker and an IEEE Distinguished Lecturer, and was named the 2014 Asian American Engineer of the Year. Ruchir has been an adjunct professor at Columbia University, NY, and a visiting scientist at Stanford University, CA. He was honored with the John von Neumann Chair at the Institute of Discrete Mathematics at Bonn University, Germany. Dr. Puri is an inventor on over 70 United States patents and has authored over 100 scientific publications on software-hardware automation methods, microprocessor design, optimization, and AI algorithms. He is the chair of the AAAI/IAAI conference, which focuses on industrial applications of AI. Ruchir is the recipient of the prestigious Distinguished Alumnus Award from the Indian Institute of Technology (IIT) Kanpur in 2022.
TILOS Seminar: Causal Discovery for Root Cause Analysis
Professor Murat Kocaoglu, Assistant Professor, Purdue University
Cause-effect relations are crucial for several fields, from medicine to policy design, as they inform us of the outcomes of our actions a priori. However, causal knowledge is hard to curate for complex systems that may change frequently. Causal discovery algorithms allow us to extract causal knowledge from the available data. In this talk, we first provide a short introduction to algorithmic causal discovery. Next, we propose a novel causal discovery algorithm that works from a collection of observational and interventional datasets in the presence of unobserved confounders, with unknown intervention targets. Finally, we demonstrate the effectiveness of our algorithm for root-cause analysis in microservice architectures.
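For readers new to the area, here is a minimal sketch of the classical constraint-based recipe: the skeleton phase of the PC algorithm with Fisher-z partial-correlation tests, on purely observational, linear-Gaussian data. This is a much simpler setting than the talk's, which handles unobserved confounders and interventional data with unknown targets; the example system and significance threshold are illustrative assumptions.

    import numpy as np
    from itertools import combinations
    from scipy.stats import norm

    def ci_test(data, i, j, cond, alpha=0.01):
        """Fisher-z test of partial correlation: is X_i independent of X_j given X_cond?"""
        idx = [i, j] + list(cond)
        prec = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
        r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
        z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(len(data) - len(cond) - 3)
        return 2 * (1 - norm.cdf(abs(z))) > alpha  # True => independent

    def pc_skeleton(data, alpha=0.01):
        d = data.shape[1]
        adj = {v: set(range(d)) - {v} for v in range(d)}
        size = 0  # grow the conditioning-set size, removing edges as we go
        while any(len(adj[v]) > size for v in range(d)):
            for i, j in combinations(range(d), 2):
                if j in adj[i]:
                    for cond in combinations(sorted(adj[i] - {j}), size):
                        if ci_test(data, i, j, cond, alpha):
                            adj[i].discard(j); adj[j].discard(i)
                            break
            size += 1
        return adj

    # Illustrative linear-Gaussian chain X0 -> X1 -> X2: the skeleton should
    # recover edges (0,1) and (1,2) but not (0,2).
    rng = np.random.default_rng(0)
    x0 = rng.normal(size=5000)
    x1 = 0.8 * x0 + rng.normal(size=5000)
    x2 = 0.8 * x1 + rng.normal(size=5000)
    print(pc_skeleton(np.column_stack([x0, x1, x2])))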
Dr. Kocaoglu received his B.S. degree in Electrical-Electronics Engineering with a minor in Physics from Middle East Technical University in 2010, his M.S. degree from Koç University, Turkey, in 2012, and his Ph.D. degree from The University of Texas at Austin in 2018 under the supervision of Prof. Alex Dimakis and Prof. Sriram Vishwanath. He was a Research Staff Member in the MIT-IBM Watson AI Lab at IBM Research, Cambridge, Massachusetts, from 2018 to 2020. Since 2021, he has been an assistant professor in the School of ECE at Purdue University. His current research interests include causal inference and discovery, causal machine learning, deep generative models, and information theory.
TILOS Seminar: Rare Gems: Finding Lottery Tickets at Initialization
Dimitris Papailiopoulos, Associate Professor, University of Wisconsin-Madison
Large neural networks can be pruned to a small fraction of their original size, with little loss in accuracy, by following a time-consuming “train, prune, re-train” approach. Frankle & Carbin (2019) conjectured that we can avoid this by training lottery tickets, i.e., special sparse subnetworks found at initialization that can be trained to high accuracy. However, a subsequent line of work presented concrete evidence that current algorithms for finding trainable networks at initialization fail simple baseline comparisons, e.g., against training random sparse subnetworks. Finding lottery tickets that train to better accuracy than simple baselines remained an open problem. In this work, we resolve it by discovering Rare Gems: sparse, trainable networks at initialization that achieve high accuracy even before training. When Rare Gems are trained with SGD, they achieve accuracy competitive with or better than Iterative Magnitude Pruning (IMP) with warmup.
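As a concrete reference point, here is a minimal single-round sketch of the magnitude-pruning baseline in PyTorch: train dense, prune the smallest weights globally, rewind to the original initialization, and retrain under the fixed mask. The toy regression task, architecture, sparsity level, and hyperparameters are illustrative assumptions; the talk's Rare Gems procedure is different and is not shown here.

    import torch
    import torch.nn as nn

    def magnitude_mask(model, sparsity):
        """Global magnitude pruning: keep the largest-|w| fraction of parameters
        (biases included here, for simplicity of the sketch)."""
        flat = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
        threshold = flat.kthvalue(int(sparsity * flat.numel())).values
        return [(p.detach().abs() > threshold).float() for p in model.parameters()]

    def train(model, data, target, masks, steps=300, lr=0.1):
        with torch.no_grad():  # zero out pruned weights before training
            for p, m in zip(model.parameters(), masks):
                p.mul_(m)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(data), target)
            loss.backward()
            opt.step()
            with torch.no_grad():  # re-apply masks so pruned weights stay zero
                for p, m in zip(model.parameters(), masks):
                    p.mul_(m)
        return loss.item()

    torch.manual_seed(0)
    data, target = torch.randn(256, 20), torch.randn(256, 1)
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
    init_state = {k: v.clone() for k, v in model.state_dict().items()}

    # One round: train dense, prune by magnitude, rewind to init, retrain sparse.
    dense_masks = [torch.ones_like(p) for p in model.parameters()]
    train(model, data, target, dense_masks)
    masks = magnitude_mask(model, sparsity=0.8)  # prune 80% of parameters
    model.load_state_dict(init_state)            # rewind to initialization
    print("sparse loss:", train(model, data, target, masks))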
Dimitris Papailiopoulos is the Jay & Cynthia Ihlenfeld Associate Professor of Electrical and Computer Engineering at the University of Wisconsin-Madison, a faculty fellow of the Grainger Institute for Engineering, and a faculty affiliate at the Wisconsin Institute for Discovery. His research interests span machine learning, information theory, and distributed systems, with a current focus on efficient large-scale training algorithms. Before coming to Madison, Dimitris was a postdoctoral researcher at UC Berkeley and a member of the AMPLab. He earned his Ph.D. in ECE from UT Austin under the supervision of Alex Dimakis, and received his ECE Diploma and M.Sc. degree from the Technical University of Crete, in Greece. Dimitris is a recipient of the NSF CAREER Award (2019), three Sony Faculty Innovation Awards (2018, 2019, and 2020), a joint IEEE ComSoc/ITSoc Best Paper Award (2020), an IEEE Signal Processing Society Young Author Best Paper Award (2015), the Vilas Associate Award (2021), the Emil Steiger Distinguished Teaching Award (2021), and the Benjamin Smith Reynolds Award for Excellence in Teaching (2019). In 2018, he co-founded MLSys, a conference that targets research at the intersection of machine learning and systems.
TILOS Seminar: Robust and Equitable Uncertainty Estimation
Aaron Roth, Professor, University of Pennsylvania
Machine learning provides us with an amazing set of tools for making predictions, but how much should we trust particular predictions? To answer this, we need a way of estimating the confidence we should have in particular predictions of black-box models. Standard tools for doing this give guarantees that are averages over predictions. For instance, in a medical application, such tools might paper over poor performance on one medically relevant demographic group if it is made up for by higher performance on another group. Standard methods also depend on the data distribution being static: in other words, the future should look like the past.
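To ground the discussion, here is a minimal sketch of split conformal prediction, a standard tool of exactly this kind: its coverage guarantee is marginal (an average over predictions) and hinges on the future resembling the past. The linear model, noise level, and miscoverage rate are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def black_box_predict(x):        # stand-in for any fixed black-box model
        return 2.0 * x

    x_cal = rng.uniform(0, 1, 1000)  # held-out calibration data
    y_cal = 2.0 * x_cal + rng.normal(0, 0.3, 1000)

    alpha = 0.1                      # target 90% coverage
    scores = np.abs(y_cal - black_box_predict(x_cal))       # residual scores
    q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

    def prediction_interval(x):      # valid on average over (x, y) pairs,
        yhat = black_box_predict(x)  # assuming future data looks like the past
        return yhat - q, yhat + q

    x_test = rng.uniform(0, 1, 5000)
    y_test = 2.0 * x_test + rng.normal(0, 0.3, 5000)
    lo, hi = prediction_interval(x_test)
    print("coverage ~", np.mean((y_test >= lo) & (y_test <= hi)))  # about 0.9, marginally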
In this lecture, I will describe new techniques that address both of these problems: methods for producing prediction sets around the predictions of arbitrary black-box models that have correct empirical coverage even when the data distribution changes in arbitrary, unanticipated ways, and that retain correct coverage even when we zoom in on demographic groups that may be arbitrary and intersecting. When we want only correct group-wise coverage and are willing to assume that the future will look like the past, our algorithms are especially simple.
This talk is based on two papers that are joint work with Osbert Bastani, Varun Gupta, Chris Jung, Georgy Noarov, and Ramya Ramalingam.
Aaron Roth is the Henry Salvatori Professor of Computer and Cognitive Science in the Computer and Information Science department at the University of Pennsylvania, with a secondary appointment in the Wharton Statistics Department. He is affiliated with the Warren Center for Network and Data Science, and is co-director of the Networked and Social Systems Engineering (NETS) program. He is also an Amazon Scholar at Amazon AWS. He is the recipient of a Presidential Early Career Award for Scientists and Engineers (PECASE), awarded by President Obama in 2016, an Alfred P. Sloan Research Fellowship, an NSF CAREER award, and research awards from Yahoo, Amazon, and Google. His research focuses on the algorithmic foundations of data privacy, algorithmic fairness, game theory, learning theory, and machine learning. Together with Cynthia Dwork, he is the author of the book “The Algorithmic Foundations of Differential Privacy.” Together with Michael Kearns, he is the author of “The Ethical Algorithm.”
TILOS Seminar: On Policy Optimization Methods for Control
Maryam Fazel, Professor, University of Washington
Policy Optimization methods enjoy wide practical use in reinforcement learning (RL) for applications ranging from robotic manipulation to game-playing, partly because they are easy to implement and allow for richly parameterized policies. Yet their theoretical properties, from optimality to statistical complexity, are still not fully understood. To help develop a theoretical basis for these methods, and to bridge the gap between RL and control theoretic approaches, recent work has studied whether gradient-based policy optimization can succeed in designing feedback control policies.
In this talk, we start by showing the convergence and optimality of these methods for linear dynamical systems with quadratic costs, where, despite nonconvexity, convergence to the optimal policy occurs under mild assumptions. Next, we make a connection between convex parameterizations in control theory, on the one hand, and the Polyak-Łojasiewicz property of the nonconvex cost function, on the other. Such a connection between the nonconvex and convex landscapes provides a unified view toward extending the results to more complex control problems.
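To make the linear-quadratic case concrete, here is a minimal model-based sketch: gradient descent directly on the policy gain K of a discrete-time LQR problem, using the exact gradient obtained from two Lyapunov solves, compared against the classical Riccati solution. The system matrices, horizon, and step size are illustrative assumptions.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

    # Policy gradient for discrete-time LQR with u_t = -K x_t. The cost C(K)
    # is non-convex in K, yet gradient descent from a stabilizing K converges
    # to the global optimum.
    A = np.array([[0.9, 0.2], [0.0, 0.9]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.eye(1)
    Sigma0 = np.eye(2)                  # covariance of the random initial state

    def cost_and_grad(K):
        Acl = A - B @ K
        P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)  # cost-to-go matrix
        Sigma = solve_discrete_lyapunov(Acl, Sigma0)         # state covariance
        grad = 2.0 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ Sigma
        return np.trace(P @ Sigma0), grad

    K = np.zeros((1, 2))                # stabilizing start (A itself is stable)
    for _ in range(5000):
        _, g = cost_and_grad(K)
        K -= 2e-3 * g                   # small constant step keeps K stabilizing here

    P_star = solve_discrete_are(A, B, Q, R)                  # Riccati baseline
    K_star = np.linalg.inv(R + B.T @ P_star @ B) @ (B.T @ P_star @ A)
    print(np.round(K, 4))
    print(np.round(K_star, 4))          # the two gains should agree closely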
Maryam Fazel is the Moorthy Family Professor of Electrical and Computer Engineering at the University of Washington, with adjunct appointments in Computer Science and Engineering, Mathematics, and Statistics. Maryam received her MS and PhD from Stanford University and her BS from Sharif University of Technology in Iran, and was a postdoctoral scholar at Caltech before joining UW. She is a recipient of the NSF CAREER Award, the UWEE Outstanding Teaching Award, and a UAI conference Best Student Paper Award with her student. She directs the Institute for Foundations of Data Science (IFDS), a multi-site NSF TRIPODS institute. Her current research interests are in optimization in machine learning and control.
TILOS Seminar: Non-convex Optimization for Linear Quadratic Gaussian (LQG) Control
Yang Zheng, Assistant Professor, UC San Diego
Recent studies have begun to apply machine learning techniques to the control of unknown dynamical systems, achieving impressive empirical results. However, the convergence behavior, statistical properties, and robustness of these approaches are often poorly understood due to the non-convex nature of the underlying control problems. In this talk, we revisit Linear Quadratic Gaussian (LQG) control and present recent progress toward its landscape analysis from a non-convex optimization perspective. We view the LQG cost as a function of the controller parameters and study its analytical and geometrical properties. Due to the inherent symmetry induced by similarity transformations, the LQG landscape is rich yet complicated. We show that 1) the set of stabilizing controllers has at most two path-connected components, and 2) despite the nonconvexity, all minimal stationary points (i.e., controllable and observable controllers) are globally optimal. Based on this special non-convex optimization landscape, we further introduce a novel perturbed policy gradient (PGD) method that escapes a large class of suboptimal stationary points (including high-order saddles). These results shed some light on the performance analysis of direct policy gradient methods for solving the LQG problem. The talk is based on our recent papers: https://arxiv.org/abs/2102.04393 and https://arxiv.org/abs/2204.00912.
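The similarity-transformation symmetry is easy to verify numerically. Below is a small sketch that evaluates the LQG (H2-type) cost of a dynamic output-feedback controller (A_K, B_K, C_K) and checks that the cost is unchanged after an arbitrary change of controller coordinates; the plant, weights, and controller matrices are all made up for illustration.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    rng = np.random.default_rng(0)
    n = 2
    A = np.array([[0.8, 0.3], [0.0, 0.8]])  # plant dynamics
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Q, R = np.eye(n), np.eye(1)             # state / input weights
    W, V = np.eye(n), np.eye(1)             # process / measurement noise covariances

    def lqg_cost(Ak, Bk, Ck):
        Acl = np.block([[A, B @ Ck], [Bk @ C, Ak]])
        if max(abs(np.linalg.eigvals(Acl))) >= 1:
            return np.inf                   # unstable closed loop: infinite cost
        noise = np.block([[W, np.zeros((n, n))],
                          [np.zeros((n, n)), Bk @ V @ Bk.T]])
        X = solve_discrete_lyapunov(Acl, noise)  # steady-state covariance
        weight = np.block([[Q, np.zeros((n, n))],
                           [np.zeros((n, n)), Ck.T @ R @ Ck]])
        return float(np.trace(weight @ X))

    Ak = 0.1 * rng.standard_normal((n, n))  # some stabilizing controller
    Bk = 0.1 * rng.standard_normal((n, 1))
    Ck = 0.1 * rng.standard_normal((1, n))
    T = rng.standard_normal((n, n))         # any invertible coordinate change
    Ti = np.linalg.inv(T)
    print(lqg_cost(Ak, Bk, Ck))                     # the two printed costs match,
    print(lqg_cost(T @ Ak @ Ti, T @ Bk, Ck @ Ti))   # confirming the symmetry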
Yang Zheng is an assistant professor in the ECE department at UC San Diego. He received his DPhil (Ph.D.) degree in Engineering Science from the University of Oxford in 2019, and his B.E. and M.S. degrees from Tsinghua University in 2013 and 2015, respectively. From February 2019 to August 2020, he was a postdoctoral researcher at Harvard University. He was a research associate at Imperial College London in 2021.
Dr. Zheng’s research interests include learning, optimization, and control of network systems, and their applications to cyber-physical systems, autonomous vehicles, and traffic systems. His work has been recognized by several awards, including the 2019 European Ph.D. Award on Control for Complex and Heterogeneous Systems, a Best Student Paper Award finalist position at the 2019 European Control Conference, the Best Student Paper Award at the 17th IEEE International Conference on Intelligent Transportation Systems, and the Best Paper Award at the 14th Intelligent Transportation Systems Asia-Pacific Forum. He also received the National Scholarship and Outstanding Graduate award at Tsinghua University, the Clarendon Scholarship at the University of Oxford, and the Chinese Government Award for Outstanding Self-financed Students Abroad.
TILOS Seminar: Machine Learning for Design Methodology and EDA Optimization
Haoxing Ren, NVIDIA
In this talk, I will first illustrate how ML helps improve design quality as well as design productivity from a design-methodology perspective, with examples in digital and analog design. Then I will discuss the potential of applying ML to solve challenging EDA optimization problems, focusing on three promising ML techniques: reinforcement learning (RL), physics-based modeling, and self-supervised learning (SSL). RL learns to optimize by converting the EDA problem objectives into environment rewards; it can be applied either to solve an EDA problem directly or as a component of a conventional EDA algorithm. Physics-based modeling enables more accurate and transferable learning for EDA problems. SSL learns the manifold of optimized EDA solution data and, conditioned on the problem input, can directly produce a solution. I will illustrate the applications of these techniques to standard-cell layout, computational lithography, and gate-sizing problems. Finally, I will outline three main approaches to integrating ML with conventional EDA algorithms, and the importance of adopting GPU computing in EDA.
Haoxing Ren (Mark) leads the Design Automation research group at NVIDIA Research. His research interests are machine learning applications in design automation and GPU-accelerated EDA. Before joining NVIDIA in 2016, he spent 15 years at IBM Microelectronics and IBM Research working on physical design and logic synthesis tools and methodologies for IBM microprocessor and ASIC designs. He received several IBM technical achievement awards, including the IBM Corporate Award for his work on improving microprocessor design productivity. He has published many papers in the field of design automation, including several book chapters on logic synthesis and physical design, and received best paper awards at ISPD 2013, DAC 2019, and TCAD 2021. He earned his PhD in Computer Engineering from the University of Texas at Austin in 2006.
TILOS Seminar: How to use Machine Learning for Combinatorial Optimization
Sherief Reda, Professor, Brown University and Principal Research Scientist at Amazon
Combinatorial optimization methods are routinely used in many scientific fields to identify optimal solutions among a large but finite set of possible solutions for problems of interest. Given the recent success of machine learning techniques in classifying natural signals (e.g., voice, images, text), it is natural to ask: how can machine learning methods improve the solution quality or the runtime of combinatorial optimization algorithms? In this talk, I will provide a general taxonomy and research directions for the use of machine learning techniques in combinatorial optimization. I will illustrate these directions using a number of case studies from my group's research, including (1) improving the quality of results of integer linear programming (ILP) solvers using deep metric learning, and (2) using reinforcement learning techniques to optimize the size of graphs arising in digital circuit design.
Sherief Reda is a full professor in the School of Engineering and the Computer Science Department at Brown University and a Principal Research Scientist at Amazon. He joined Brown University in 2006 after receiving his Ph.D. in computer science and engineering from the University of California, San Diego. He has published over 135 research articles in the areas of energy-efficient computing, electronic design automation, and combinatorial optimization, and holds several patents. Professor Reda has received a number of research acknowledgments and awards, including eight best paper nominations, three best paper awards, and a National Science Foundation CAREER award. He has been a PI or co-PI on more than $21.1M of funded projects from federal agencies and industry corporations. He is a senior member of the IEEE.
The FPGA Physical Design Flow Through the Eyes of Machine Learning
Dr. Ismail Bustany, Fellow, AMD
The FPGA physical design (PD) flow has innate features that differentiate it from its sibling, the ASIC PD flow. FPGA device families service a wide range of applications, have much longer lifespans in production use, and bring templatized logic layouts and routing interconnect fabrics whose characteristics are captured by detailed device models and simpler timing and routing models (e.g., buffered interconnect and abstracted routing resources). Furthermore, the FPGA PD flow is a “one-stop shop” from synthesis to bitstream generation, which affords complete access to annotate, instrument, and harvest netlist and design features. These key differences provide rich opportunities to exploit both device data and design-application-specific context in optimizing various components of the PD flow. In this talk, I will present examples of the application of ML in device modeling and parameter optimization, draw attention to exciting research opportunities for applying the “learning to optimize” paradigm to the placement and routing problems, and share some practical lessons.
Dr. Ismail Bustany is a Fellow at AMD, where he works on physical design implementation and MLCAD. He has served on the technical program committees of ISPD, ISQED, and DAC, and was the 2019 ISPD general chair. He currently serves on the organizing committees of ICCAD and SLIP. He organized the 2014 and 2015 ISPD detailed-routing-driven placement contests and co-organized the 2017 ICCAD detailed placement contest. His research interests include physical design, computationally efficient optimization algorithms, MLCAD, sparse matrix computation/acceleration, and partitioning algorithms. He earned his B.S. in CSE from UC San Diego and his M.S./Ph.D. in EECS from UC Berkeley.
TILOS Seminar: Reasoning Numerically
Sicun Gao, Assistant Professor, UC San Diego
Highly nonlinear continuous functions have become a pervasive model of computation. Despite newsworthy progress, the practical success of “intelligent” computing is still restricted by our ability to answer questions about quality and dependability: how do we rigorously know that a system will do exactly what we want it to do, and nothing else? For traditional software and hardware systems, which primarily use digital and rule-based designs, automated reasoning has provided the fundamental principles and widely used tools for ensuring quality in all stages of design and engineering. However, the rigid symbolic formulations of typical automated reasoning methods often make them unsuitable for computation units driven by numerical and data-driven approaches. I will overview some of our attempts at bridging this gap. I will highlight how the core challenge of NP-hardness is shared across discrete and continuous domains, and how it motivates us to seek the unification of symbolic, numerical, and statistical perspectives toward better understanding and handling of the curse of dimensionality.
Sicun Gao is an Assistant Professor in Computer Science and Engineering at the University of California, San Diego. He works on search and optimization algorithms for improving the quality of automation and autonomous systems. He is a recipient of the Air Force Young Investigator Award, an Amazon Research Award, an NSF CAREER Award, and a Silver Medal for the Kurt Gödel Research Prize. He received his PhD from Carnegie Mellon University and was a postdoctoral researcher at CMU and MIT.