UAI 2022 - Keynote Speakers


UAI 2022 is pleased to announce the following invited speakers:

Danilo J. Rezende,  DeepMind

Eric P. Xing,  Carnegie Mellon University & Mohamed bin Zayed University of Artificial Intelligence

Finale Doshi-Velez,  Harvard University

Mihaela van der Schaar,  University of Cambridge

Peter Spirtes,  Carnegie Mellon University

Zeynep Akata,  University of Tübingen


Danilo J. Rezende

DeepMind

Title

Inference and sampling with symmetries and manifold constraints

Abstract and slides

The study of symmetries in physics has revolutionized our understanding of the world. Inspired by this, the development of methods to incorporate internal (Gauge) and external (space-time) symmetries into machine learning models is a very active field of research. We will discuss general methods for incorporating symmetries in ML and our work on invariant generative models. We will then present applications of this work to quantum field theory on the lattice (LQFT) and molecular dynamics (MD) simulations. On the MD front, we'll talk about how we constructed permutation- and translation-invariant normalizing flows on a torus for free-energy estimation. On the LQFT front, we'll present our work that introduced the first U(N) and SU(N) Gauge-equivariant normalizing flows for pure Gauge simulations and its extension to incorporate "pseudo-fermions", leading to the first proof of principle of a full QCD simulation with normalizing flows in 2D.
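
The key construction can be illustrated with a toy example: if the base density is invariant under a symmetry group and the bijection is equivariant to it, the resulting flow density is itself invariant. Below is a minimal Python/NumPy sketch of that principle for permutation symmetry, using an i.i.d. Gaussian base and a shared elementwise affine bijection; it is only an illustration of the general idea, not the speaker's models (which operate on tori and gauge fields).

import numpy as np

# Base density: i.i.d. standard normal over all "particles" -> permutation invariant.
def base_log_prob(z):
    return -0.5 * np.sum(z ** 2) - 0.5 * z.size * np.log(2.0 * np.pi)

# Bijection x_i = z_i * exp(log_scale) + shift, with the same parameters for every
# particle, hence permutation equivariant.  Returns z and log|det dx/dz|.
def inverse_and_logdet(x, log_scale=0.3, shift=0.7):
    z = (x - shift) * np.exp(-log_scale)
    return z, x.size * log_scale

# Change of variables: invariant base + equivariant bijection = invariant flow density.
def flow_log_prob(x):
    z, log_det = inverse_and_logdet(x)
    return base_log_prob(z) - log_det

rng = np.random.default_rng(0)
x = rng.normal(size=8)            # a toy configuration of 8 particles
x_perm = rng.permutation(x)       # relabel the particles
print(flow_log_prob(x), flow_log_prob(x_perm))   # identical up to float precision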

Slides can be found here.

Bio

Danilo J. Rezende is a Senior Staff Researcher and lead of the Generative Models and Inference group at DeepMind, London. For the last 12 years, his research has focused on scalable inference and generative models applied to reinforcement learning and to the modelling of complex data such as medical images, videos, 3D scene geometry, and complex physical systems. He has co-authored more than 90 papers and patents, among which are a few highly cited papers on approximate inference and modelling with neural networks (such as Deep Latent Gaussian models, Normalizing Flows, and Interaction Networks). Highlights of his recent work at the intersection of AI and physics include equivariant normalizing flows for lattice QCD and molecular dynamics. Danilo is engaged in promoting the alliance between ML/AI, physics, and geometry. He holds a BA in Physics and an MSc in Theoretical Physics from Ecole Polytechnique (Palaiseau, France) and the Institute of Theoretical Physics (SP, Brazil). Once an aspiring PhD student in theoretical physics at the Centre de Physique Théorique in Marseille, France, he switched to a PhD in Computational Neuroscience at the Ecole Polytechnique Fédérale de Lausanne (Lausanne, Switzerland), where he studied computational/statistical models of learning and sensory fusion.


Eric P. Xing

Carnegie Mellon University & Mohamed bin Zayed University of Artificial Intelligence

Title

Machine Learning at All Levels -- a pathway to “Autonomous” AI

Abstract

An integrative AI system is not a monolithic black box, but a modular, standardizable, and certifiable assembly of building blocks at all levels: data, model, algorithm, computing, and infrastructure. In this talk, I summarize our work on developing principled and "white-box" approaches, including formal representations, optimization formalisms, intra- and inter-level mapping strategies, theoretical analysis, and production platforms, for the optimal and potentially automatic creation and configuration of AI solutions at ALL LEVELS, namely data harmonization, model composition, learning to learn, scalable computing, and infrastructure orchestration.

We argue that the traditional benchmark- and leaderboard-driven bespoke approaches, and the massive end-to-end "AGI" models, of the Machine Learning community are not suited to meet the demanding industrial standards beyond algorithmic performance, such as cost-effectiveness, safety, scalability, and automatability, that are typically expected of production systems. There is therefore a need to work on ML-at-All-Levels as a necessary step toward industrializing AI that can be considered transparent, trustworthy, cost-effective, and potentially autonomous.

Bio

Eric P. Xing is the President of the Mohamed bin Zayed University of Artificial Intelligence, a Professor of Computer Science at Carnegie Mellon University, and the Founder and Chairman of Petuum Inc., a 2018 World Economic Forum Technology Pioneer company that builds a standardized artificial intelligence development platform and operating system for broad and general industrial AI applications. He completed his PhD in Computer Science at UC Berkeley. His main research interests are the development of machine learning and statistical methodology, and of composable, automatic, and scalable computational systems, for solving problems involving automated learning, reasoning, and decision-making in artificial, biological, and social systems. Prof. Xing currently serves or has served in the following roles: associate editor of the Journal of the American Statistical Association (JASA), the Annals of Applied Statistics (AOAS), and IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI); action editor of the Machine Learning Journal (MLJ) and the Journal of Machine Learning Research (JMLR); and board member of the International Machine Learning Society.


Finale Doshi-Velez

Harvard University

Title

Towards Increasing Generalization in Interpretable Machine Learning

Abstract and slides

The field of interpretable machine learning has continued to see rapid expansion and progress. However, many of the results are quite specific: a popular technique fails under a certain adversarial attack; a small user study on a certain task suggests that one form of explanation is better than another. What can we do to get more generalizable insights?

In this talk, I'll start by sharing some of those more "specific" results from our work, in particular, situations in which the outcomes were unexpected. Next, I'll lay out a hypothesis for how we might generalize from these and other insights. Currently, we usually think about how explanations affect people. But what if we introduced the following abstraction: explanations have certain properties, and those properties govern how people respond to the explanation? I'll describe initial work that starts to map some of these connections, and invite others to do the same.
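
One concrete way to read this abstraction is to treat an explanation as an object with measurable properties. The sketch below is a hypothetical illustration, not the speaker's framework: it fits a LIME-style local linear surrogate to a toy black-box model and computes two candidate properties of the resulting attribution vector, sparsity and local fidelity; the property definitions, model, and thresholds are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(1)

# Toy model to be explained: a nonlinear function of 5 features.
def black_box(X):
    return np.tanh(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

# Fit a local linear surrogate around x0 by least squares on perturbed samples.
def linear_explanation(x0, eps=0.1, n=500):
    X = x0 + eps * rng.normal(size=(n, x0.size))
    y = black_box(X)
    coefs, *_ = np.linalg.lstsq(np.c_[X - x0, np.ones(n)], y, rcond=None)
    return coefs[:-1], coefs[-1], X, y          # weights, intercept, samples

# Property 1: how many features the explanation actually uses.
def sparsity(w, tol=1e-2):
    return int(np.sum(np.abs(w) > tol))

# Property 2: how well the explanation reproduces the model locally (R^2).
def fidelity(w, b, x0, X, y):
    pred = (X - x0) @ w + b
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

x0 = np.zeros(5)
w, b, X, y = linear_explanation(x0)
print("sparsity:", sparsity(w), "fidelity:", round(fidelity(w, b, x0, X, y), 3))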

Slides can be found here.

Bio

Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc from the University of Cambridge as a Marshall Scholar, her PhD from MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability.

Selected Additional Shinies: BECA recipient, AFOSR YIP and NSF CAREER recipient; Sloan Fellow; IEEE AI Top 10 to Watch


Mihaela van der Schaar

University of Cambridge

Title

New frontiers in machine learning interpretability

Abstract and slides

Machine learning (ML) has the potential to transform medicine by addressing core challenges such as time-series forecasting, clustering (phenotyping), and heterogeneous treatment effect estimation. However, to be embraced by clinicians and patients, ML approaches need to be interpretable. So far, though, ML interpretability has been largely confined to explaining static predictions.

In this keynote, I describe an extensive new framework for ML interpretability. This framework allows us to 1) interpret ML methods for time-series forecasting, clustering (phenotyping), and heterogeneous treatment effect estimation using feature and example-based explanations, 2) provide personalized explanations of ML methods with reference to a set of examples freely selected by the user, and 3) unravel the underlying governing equations of medicine from data, enabling scientists to make new discoveries.
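
Point 2) can be pictured with a small sketch: weight each user-selected reference example by its proximity to the test point in some representation space and report the weights as the explanation. The representation, kernel, and toy data below are illustrative assumptions only, not the actual method developed in this line of work.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a learned representation (here simply the raw features).
def representation(x):
    return x

# Softmax weights over the user's reference examples: a higher weight means the
# reference contributes more to explaining the prediction for x_test.
def example_based_explanation(x_test, references, temperature=1.0):
    d = np.linalg.norm(representation(references) - representation(x_test), axis=1)
    w = np.exp(-d / temperature)
    return w / w.sum()

references = rng.normal(size=(5, 3))               # 5 reference "patients", 3 features
x_test = references[2] + 0.1 * rng.normal(size=3)  # a new patient close to reference 2
print(np.round(example_based_explanation(x_test, references), 3))   # reference 2 dominates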

Slides can be found here.

Bio

Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Fellow at The Alan Turing Institute in London. In addition to leading the van der Schaar Lab, Mihaela is founder and director of the Cambridge Centre for AI in Medicine (CCAIM).

Mihaela was elected IEEE Fellow in 2009. She has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), 3 IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award and several best paper awards, including the IEEE Darlington Award.

Mihaela is personally credited as inventor on 35 US patents (the majority of which are listed here), many of which are still frequently cited and adopted in standards. She has made over 45 contributions to international standards, for which she received 3 ISO Awards. In 2019, a Nesta report determined that Mihaela was the most-cited female AI researcher in the UK.


Peter Spirtes

Carnegie Mellon University

Title

Obstacles and Opportunities in Learning Causal Structures and Causal Representations

Abstract and slides

Over the last 30 years, there has been an explosion of research in learning causal structures and causal representations from data. There is a host of difficulties in applying many of these algorithms to domains of interest, due to the oversimplifying assumptions that structure or representation learning algorithms make, such as the absence of latent confounders, distributional assumptions, stationarity, etc. However, there have been significant advances in devising algorithms that do not make these very strong assumptions. I will present a brief history of the major advances in research on causal structure and causal representation learning over the last 30 years, and the remaining obstacles to successfully applying algorithms that learn causal structure or causal representations in many domains of interest.
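
As a concrete anchor for the kind of assumptions at issue, here is a minimal sketch of the skeleton phase of the PC algorithm (mentioned in the bio below), written under exactly the strong assumptions the talk discusses relaxing: no latent confounders, i.i.d. Gaussian data, and faithfulness, with a Fisher-z partial-correlation test for conditional independence. It is a toy illustration, not a production implementation.

import itertools
import numpy as np
from scipy import stats

# Partial correlation of variables i and j given the conditioning set, via the
# precision matrix of the relevant columns.
def partial_corr(data, i, j, cond):
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(data[:, idx], rowvar=False))
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

# Fisher-z test: returns True if i and j are judged independent given cond.
def independent(data, i, j, cond, alpha=0.05):
    r = np.clip(partial_corr(data, i, j, cond), -0.999999, 0.999999)
    z = 0.5 * np.log((1.0 + r) / (1.0 - r))
    stat = np.sqrt(data.shape[0] - len(cond) - 3) * abs(z)
    return 2.0 * (1.0 - stats.norm.cdf(stat)) > alpha

# Skeleton phase: start from a complete graph and remove the edge i - j whenever
# some subset of i's neighbours renders i and j conditionally independent.
def pc_skeleton(data, alpha=0.05):
    d = data.shape[1]
    adj = {i: set(range(d)) - {i} for i in range(d)}
    level = 0
    while any(len(adj[i]) - 1 >= level for i in adj):
        for i, j in itertools.combinations(range(d), 2):
            if j not in adj[i]:
                continue
            for cond in itertools.combinations(sorted(adj[i] - {j}), level):
                if independent(data, i, j, cond, alpha):
                    adj[i].discard(j)
                    adj[j].discard(i)
                    break
        level += 1
    return adj

# Toy chain X0 -> X1 -> X2: the X0 - X2 edge should (typically) be removed.
rng = np.random.default_rng(0)
x0 = rng.normal(size=2000)
x1 = x0 + 0.5 * rng.normal(size=2000)
x2 = x1 + 0.5 * rng.normal(size=2000)
print(pc_skeleton(np.c_[x0, x1, x2]))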


Slides can be found here.

Bio

Peter Spirtes is the Marianna Brown Dietrich Professor and Head of the Department of Philosophy at Carnegie Mellon University. His research interests are interdisciplinary in nature, involving philosophy, statistics, graph theory, and computer science. His research has implications for the practices of a number of disciplines in which causal inferences from statistical data are made. Together with Prof. Clark Glymour, he published one of the first algorithms for causal learning from observational data, called PC, as well as one of the widely used reference books in the field (Spirtes, Glymour, and Scheines, 2000). His work has shown that there are computer programs that can, in some circumstances, reliably draw useful causal conclusions under a reasonable set of assumptions from experimental or non-experimental data, or combinations of both. His current research centers on the extent to which these limiting assumptions can be relaxed, thereby extending the application of the results to a much wider class of phenomena, and on the extent to which these search procedures can be scaled up to work with larger numbers of variables. This research program has important theoretical and practical implications for a number of different disciplines, including biology. Theoretically, it has helped us understand the relationship between probability and causality, and the precise limits of reliable causal inference from various kinds of data under a variety of different assumptions. Practically, it has provided a useful tool that helps scientists build causal models.


Zeynep Akata

University of Tübingen

Title

Explainability in Deep Learning Through Communication

Abstract

Clearly explaining the rationale for a classification decision to an end user can be as important as the decision itself. Such explanations are best communicated to the user via natural language. In a conversation, communication is most effective if the speaker understands the purpose of the listener. In this talk, I will present my past and current work on Explainable Machine Learning combining vision and language. Focusing on learning simple and compositional representations of images that discriminate properties of the visible object while jointly predicting a class label, I will demonstrate how our models explain why the predicted label is or is not chosen for the image, as well as how we improve the explainability of deep models via conversations. Finally, I will discuss the important role of uncertainty in communication.

Bio

Zeynep Akata is a professor of Computer Science (W3) within the Cluster of Excellence Machine Learning at the University of Tübingen. After completing her PhD at INRIA Rhône-Alpes with Prof. Cordelia Schmid (2014), she worked as a post-doctoral researcher at the Max Planck Institute for Informatics with Prof. Bernt Schiele (2014-17) and at the University of California, Berkeley with Prof. Trevor Darrell (2016-17). Before moving to Tübingen in October 2019, she was an assistant professor at the University of Amsterdam with Prof. Max Welling (2017-19). She received a Lise-Meitner Award for Excellent Women in Computer Science from the Max Planck Society in 2014, a young scientist honour from the Werner-von-Siemens-Ring Foundation in 2019, and an ERC-2019 Starting Grant from the European Commission. Her research interests include multimodal learning and explainable AI.




