UAI 2019 - Tutorials


The tutorials will be held on July 22nd, 2019. We will have four two-hour tutorials this year:
  1. Tractable Probabilistic Models: Representations, Algorithms, Learning, and Applications
    Guy Van den Broeck, Nicola Di Mauro, Antonio Vergari
  2. Mixing Graphical Models and Neural Nets Like Chocolate and Peanut Butter
    Matt Johnson
  3. Causal Reinforcement Learning
    Elias Bareinboim
  4. Mathematics of Deep Learning
    Raja Giryes

Tutorial 1: Tractable Probabilistic Models: Representations, Algorithms, Learning, and Applications

Guy Van den Broeck (UCLA), Nicola Di Mauro (Università degli Studi di Bari "Aldo Moro"), Antonio Vergari (UCLA)

Slides available here.

Abstract

Probabilistic generative models like Bayesian Networks, Markov Random Fields and Variational Autoencoders enjoy considerable attention due to their expressiveness. However, their ability to perform exact probabilistic inference is limited to a small set of queries; anything else requires resorting to approximation routines. Moreover, learning such models from data is generally even harder, since inference is a sub-routine of learning and calls for further approximations.

In contrast, Tractable Probabilistic Models (TPMs) guarantee that exact inference is efficient for a large set of probabilistic queries, e.g., arbitrary marginals, conditionals and MAP queries, thus enabling efficient learning schemes. In addition to these benefits, TPMs are surprisingly competitive with their intractable alternatives when learning from data, as numerous recent successes in real-world applications show.
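
As a taste of what tractability buys, consider a minimal sketch (ours, not material from the tutorial) of one of the simplest TPMs, a mixture of independent Bernoullis, where any marginal or conditional query is answered exactly in time linear in the model size; all numbers below are made up:

```python
# Minimal illustrative sketch: exact marginal and conditional queries in a
# mixture of independent Bernoullis. Every number here is made up.
import numpy as np

weights = np.array([0.3, 0.7])           # mixture weights, one per component
# P(X_d = 1 | component k) for K=2 components over D=3 binary variables
probs = np.array([[0.9, 0.2, 0.5],
                  [0.1, 0.8, 0.4]])

def marginal(evidence):
    """P(evidence) for a partial assignment {var_index: 0 or 1}.
    Each component factorizes, so unobserved variables are summed out
    for free: a component's marginal is a product over observed vars."""
    comp = np.ones(len(weights))
    for d, val in evidence.items():
        comp = comp * (probs[:, d] if val == 1 else 1.0 - probs[:, d])
    return float(weights @ comp)

print(marginal({0: 1}))                            # arbitrary marginal P(X0=1)
print(marginal({0: 1, 2: 0}) / marginal({2: 0}))   # conditional P(X0=1 | X2=0)
```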

In this tutorial, we will present an excursus through the rich TPM literature, starting from the seminal work on mixtures and tree models and arriving at the latest representations such as probabilistic circuits. Along the way, we will highlight the sources of intractability in probabilistic inference and learning, review the solutions that different tractable representations adopt to overcome them, and discuss what each trades off to guarantee tractability.

Furthermore, we will zoom in on the current state-of-the-art for TPMs, disentangling and making sense of the “alphabet soup” of models (ACs, CNs, DNNFs, d-DNNFs, OBDDs, PSDDs, SDDs, SPNs, etc.) that populate this landscape. We will show how these models can be represented as probabilistic circuits under a unifying framework, discussing which structural properties delineate each model class and enable different kinds of tractability. We will touch upon the main algorithmic paradigms to automatically learn both the structure and parameters of TPMs from data.
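
To make the circuit view concrete, here is a hedged toy sketch (ours, not one of the model classes above) of a probabilistic circuit built from smooth sum nodes and decomposable product nodes; these two structural properties are what let a single bottom-up pass answer marginal queries, simply by evaluating marginalized leaves as 1:

```python
# Toy probabilistic circuit: smooth sums + decomposable products.
import math

def leaf(var, value_probs):
    """Leaf over one binary variable; an unobserved variable evaluates
    to 1.0, which is exactly how the circuit marginalizes it out."""
    return lambda ev: value_probs[ev[var]] if var in ev else 1.0

def product(*children):
    """Product node; assumed decomposable (children over disjoint vars)."""
    return lambda ev: math.prod(c(ev) for c in children)

def sum_node(weights, *children):
    """Sum node; assumed smooth (children over the same variables)."""
    return lambda ev: sum(w * c(ev) for w, c in zip(weights, children))

# A tiny circuit over two binary variables X0 and X1:
pc = sum_node([0.4, 0.6],
              product(leaf(0, [0.8, 0.2]), leaf(1, [0.3, 0.7])),
              product(leaf(0, [0.1, 0.9]), leaf(1, [0.6, 0.4])))

print(pc({0: 1, 1: 0}))   # full joint P(X0=1, X1=0) in one bottom-up pass
print(pc({0: 1}))         # marginal P(X0=1): X1 is summed out for free
```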

Lastly, we will showcase several successful application scenarios where TPMs have been employed as an alternative to or in conjunction with intractable models, including image classification, completion and generation, scene understanding, activity recognition, language and speech modeling, bioinformatics, collaborative filtering, verification and diagnosis.

Biographical details

Guy Van den Broeck is an Assistant Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning (Statistical Relational Learning, Tractable Learning, Probabilistic Programming), Knowledge Representation and Reasoning (Probabilistic Graphical Models, Lifted Probabilistic Inference, Knowledge Compilation, Probabilistic Databases), and Artificial Intelligence in general. Guy’s work received best paper awards from key artificial intelligence venues such as UAI, ILP, and KR, and an outstanding paper honorable mention at AAAI. His doctoral thesis was awarded the ECCAI Dissertation Award for the best European dissertation in AI. Guy serves as Associate Editor for the Journal of Artificial Intelligence Research (JAIR). Website: http://web.cs.ucla.edu/~guyvdb/

Nicola Di Mauro has been an Assistant Professor at the Department of Computer Science of the University of Bari Aldo Moro since 2005, where he is a member of the Machine Learning group at the LACAM laboratory. He received his Ph.D. from the University of Bari Aldo Moro in 2005. His main research interests are statistical relational learning, probabilistic deep learning and machine learning, as well as their applications. He has participated in various European projects on these topics. He has published over 100 peer-reviewed technical papers in international journals and conference proceedings. He regularly serves on the program committees (often at senior level) of several top conferences in artificial intelligence and machine learning, and is on the editorial boards of several international journals.

Antonio Vergari is currently a postdoctoral researcher at the StarAI Lab at the University of California, Los Angeles (UCLA) working on integrating tractable probabilistic reasoning and deep representations. Previously, he was a postdoc at the Max-Planck-Institute for Intelligent Systems, Tuebingen, Germany where he worked on automating machine learning and data science through tractable probabilistic models. He obtained his Ph.D. in 2017 from the University of Bari, Italy, on learning deep probabilistic models and exploiting them for representation learning.


Tutorial 2: Mixing Graphical Models and Neural Nets Like Chocolate and Peanut Butter

Matt Johnson (Google Brain)

Slides available here.

Abstract

Deep neural networks (DNNs) have retaken center stage in machine learning due to their tremendous capabilities for flexible function approximation in the presence of large data sets and computational resources. Complementary to these developments, exponential family probabilistic graphical models (PGMs) let us reason about constrained, organized representations that admit efficient, specialized inference algorithms based on ideas like dynamic programming and cheap natural gradients. These complementary strengths demand to be combined, like chocolate and peanut butter.

This tutorial will provide a unified view of how to think about graphical models and neural networks together, focusing on fundamentals rather than a complete survey of the recent literature. There will be a deep dive into graphical models, exponential families, and associated approximate inference algorithms, all framed so that they fit with ideas and techniques in deep learning like amortized inference and automatic differentiation. For example, we’ll cover how you might embed iterative graphical model inference algorithms in the prediction step of a neural network, and efficiently differentiate through the whole procedure for learning. Moreover, our approach will emphasize mechanizable ideas, so that we can develop software to do the laborious parts for us.
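
To preview that example concretely, here is a hedged JAX sketch (ours, not the tutorial's code) that unrolls a few mean-field updates for a small pairwise binary model inside a forward pass, then differentiates the whole procedure end to end:

```python
# Unrolled mean-field inference as a differentiable layer (illustrative).
import jax
import jax.numpy as jnp

def mean_field(theta, W, num_steps=10):
    """Mean-field for q(x_i = 1) = mu_i in a pairwise binary model with
    unary potentials theta and symmetric couplings W (zero diagonal)."""
    mu = jnp.full(theta.shape, 0.5)
    for _ in range(num_steps):                # unrolled, so autodiff sees it
        mu = jax.nn.sigmoid(theta + W @ mu)   # parallel mean-field update
    return mu

def loss(theta, W, target):
    mu = mean_field(theta, W)                 # inference inside the forward pass
    return jnp.sum((mu - target) ** 2)

theta = jnp.zeros(3)
W = jnp.array([[0., 1., 0.],
               [1., 0., 1.],
               [0., 1., 0.]])
target = jnp.array([1., 0., 1.])
# Gradients flow through every inference iteration:
grad_theta, grad_W = jax.grad(loss, argnums=(0, 1))(theta, W, target)
```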

After attending this tutorial, you will be better equipped to design models and algorithms that draw ideas from DNNs and PGMs in any proportion, and to implement those designs in software.

Biographical details

Matt Johnson is a research scientist at Google Brain interested in probabilistic models, approximate inference algorithms, and software systems to support them. He works on JAX, a system for composable function transformations in Python. His other recent work includes composing graphical models with neural networks to leverage specialized inference algorithms, automatically recognizing and exploiting conjugacy structure for approximate integration without a domain-specific language, and model-based reinforcement learning from pixels with structured latent variable models. Matt is also a coauthor of the original Autograd, which automatically differentiates native Python and NumPy programs.

Matt was a postdoc with Ryan Adams at the Harvard Intelligent Probabilistic Systems Group and with Bob Datta in the Datta Lab at Harvard Medical School. He received his Ph.D. in EECS from MIT, where he worked with Alan Willsky on Bayesian time series models and scalable inference. He was an undergrad at UC Berkeley (Go Bears!).


Tutorial 3: Causal Reinforcement Learning

Elias Bareinboim (Columbia University)

Abstract

Causal inference provides a set of tools and principles that allows one to combine data and substantive knowledge about the environment to reason about questions of a counterfactual nature, i.e., what would have happened had reality been different, even in settings where no data about this unrealized reality is available. Reinforcement Learning is concerned with finding a policy that optimizes a specific function (e.g., reward, regret) in interactive and uncertain environments. These two disciplines have evolved independently and with virtually no interaction between them, yet they operate over different aspects of the same building block, counterfactual relations, which makes them umbilically tied. In this tutorial, we introduce a unified treatment that puts these two disciplines under the same theoretical umbrella. We then show that a number of natural and pervasive classes of learning problems emerge when this connection is fully established, problems that cannot be seen from either discipline individually. This new understanding leads to a broader view of what counterfactual learning is and suggests great potential for studying causality and reinforcement learning side by side, which we name causal reinforcement learning (CRL).
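
As a small illustration of the counterfactual building block (our toy sketch, not material from the tutorial), the standard abduction-action-prediction recipe can be run explicitly in a two-variable structural causal model:

```python
# Toy SCM: X = U_x, Y = X xor U_y. Given one observed unit, ask
# "what would Y have been had X been different?"
def counterfactual_y(x_obs, y_obs, x_new):
    u_y = x_obs ^ y_obs    # abduction: recover noise consistent with the data
    # action: intervene do(X = x_new), replacing X's own mechanism
    # prediction: re-evaluate Y's mechanism under the recovered noise
    return x_new ^ u_y

# Observed X=1, Y=0; had X been 0 instead, Y would have been 1:
print(counterfactual_y(1, 0, 0))
```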

Biographical details

Elias Bareinboim is the director of the Causal Artificial Intelligence (CausalAI) Laboratory and an associate professor in the Department of Computer Science at Columbia University. His research focuses on causal and counterfactual inference and their applications to data-driven fields in the health and social sciences as well as artificial intelligence and machine learning. His work was the first to propose a general solution to the problem of "data-fusion," providing practical methods for combining datasets generated under different experimental conditions and plagued with various biases. More recently, Bareinboim has been exploring the intersection of causal inference with decision-making (including reinforcement learning) and explainability (including fairness analysis). Before joining Columbia, he was an assistant professor at Purdue University and received his Ph.D. in Computer Science from the University of California, Los Angeles. Bareinboim was named one of "AI's 10 to Watch" by IEEE, and is a recipient of an NSF CAREER Award, the Dan David Prize Scholarship, the 2014 AAAI Outstanding Paper Award, and the 2018 UAI Best Student Paper Award.


Tutorial 4: Mathematics of Deep Learning

Raja Giryes (Tel Aviv University)

Slides available here.

Abstract

The past five years have seen a dramatic increase in the performance of recognition systems due to the introduction of deep neural networks for feature learning and classification. However, the theoretical foundations for this success remain elusive. This tutorial will present some of the theoretical results developed for deep neural networks that aim to provide a mathematical justification for properties such as approximation capabilities, convergence, global optimality, invariance, stability of the learned representations, generalization error, etc. In addition, it will discuss the implications of the developed theory for the practical training of neural networks.

The tutorial will start with the theory of neural networks from the early '90s (including the well-known results of Hornik et al. and Cybenko). It will then move to the recent theoretical findings established for deep learning in the past five years. The practical considerations that follow from the theory will also be discussed.
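
For reference, the classical result of Cybenko mentioned above can be stated as follows (our paraphrase of the 1989 theorem):

```latex
% Universal approximation (Cybenko, 1989): for any continuous sigmoidal
% activation \sigma, one-hidden-layer networks are dense in C([0,1]^n).
\forall f \in C([0,1]^n),\ \forall \varepsilon > 0,\ \exists N \in \mathbb{N},\
\alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n \ (i = 1, \dots, N)
\ \text{such that}\
\sup_{x \in [0,1]^n} \Bigl|\, f(x) - \sum_{i=1}^{N} \alpha_i\,
\sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| < \varepsilon .
```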

Biographical details

Raja Giryes is an assistant professor in the School of Electrical Engineering at Tel Aviv University. He received his B.Sc. (2007), M.Sc. (2009, supervised by Prof. M. Elad and Prof. Y. C. Eldar), and Ph.D. (2014, supervised by Prof. M. Elad) degrees from the Department of Computer Science, The Technion - Israel Institute of Technology, Haifa. Raja was a postdoc at the computer science department at the Technion (Nov. 2013 to July 2014) and in the lab of Prof. G. Sapiro at Duke University, Durham, USA (July 2014 to Aug. 2015). His research interests lie at the intersection of signal and image processing and machine learning, and in particular in deep learning, inverse problems, sparse representations, and signal and image modeling. Raja received the EURASIP best Ph.D. award, the ERC-StG grant, the Maof prize for excellent young faculty (2016-2019), the VATAT scholarship for excellent postdoctoral fellows (2014-2015), the Intel Research and Excellence Award (2005, 2013), and the Excellence in Signal Processing Award (ESPA) from Texas Instruments (2008), and was part of the Azrieli Fellows program (2010-2013).





