UAI 2014 - Tutorials

The tutorials will be held on Wednesday, July 23rd. Here is the list of this year's tutorials:


1. Random Perturbations for Inference
Tamir Hazan, University of Haifa
2. Learning Tractable Probabilistic Models
Pedro Domingos, University of Washington
Daniel Lowd, University of Oregon
3. Probabilistic Programming
Avi Pfeffer, Charles River Analytics
4. Lifted Probabilistic Inference in Relational Models
Dan Suciu, University of Washington
Guy Van den Broeck, University of California, Los Angeles

Tamir Hazan

University of Haifa

Random Perturbations for Inference

Abstract

Predictions in modern statistical inference problems can increasingly be understood in terms of discrete structures, such as arrangements of objects in computer vision, parses in natural language processing, or molecular structures in computational biology. A fully probabilistic treatment considers all possible alternative assignments, which requires estimating exponentially many structures together with their respective weights. However, sampling from traditional structured probabilistic models such as the Gibbs distribution is computationally expensive for many artificial intelligence applications.

In this tutorial I will present a new approach that relaxes the exponential complexity of probabilistic reasoning in structured models by relying on efficient predictions under random perturbations. This approach leads to a new inference framework based on probability models that measure the stability of the prediction algorithm under random changes to the structures' scores.
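
To make this concrete, here is a Python sketch (an illustration of ours, not material from the tutorial) of the Gumbel-max trick, the canonical instance of prediction under random perturbations: adding i.i.d. Gumbel noise to each structure's score and taking the argmax yields an exact sample from the Gibbs distribution. The brute-force enumeration of structures is only for illustration; in structured models the perturbed maximization would be delegated to an efficient MAP solver.

    import math
    import random
    from collections import Counter

    def gumbel():
        # Sample standard Gumbel noise via the inverse CDF.
        return -math.log(-math.log(random.random()))

    def perturb_and_max(scores):
        # Perturb every structure's score with i.i.d. Gumbel noise and predict
        # the argmax; the result is an exact sample from p(i) ~ exp(scores[i]).
        return max(range(len(scores)), key=lambda i: scores[i] + gumbel())

    # Toy example: three "structures" with scores (log-potentials) 0, 1, 2.
    scores = [0.0, 1.0, 2.0]
    counts = Counter(perturb_and_max(scores) for _ in range(100000))
    Z = sum(math.exp(s) for s in scores)
    for i, s in enumerate(scores):
        print(i, counts[i] / 100000, math.exp(s) / Z)  # empirical vs. Gibbs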

The goal of this tutorial is to introduce random perturbations as efficient and principled tools to reason about the neighborhood of possible outcomes when making the best decision. The tutorial will explore both theoretical and practical aspects of random perturbations.

BIO

Tamir Hazan received his PhD from the Hebrew University of Jerusalem (2010) and is currently an assistant professor at the University of Haifa, Israel. His research develops efficient methods for reasoning about structured models. His work on random perturbations was presented in the machine learning best papers track at AAAI 2012 and was shortlisted for the best paper award at ICML 2012. His research also includes the primal-dual norm-product belief propagation algorithm, which received a best paper award at UAI 2008. For the last couple of years, he has been organizing the "Perturbation, Optimization, and Statistics" workshop at NIPS.


Pedro Domingos

University of Washington

Daniel Lowd

University of Oregon

Learning Tractable Probabilistic Models

Abstract

Inference is the hardest part of learning. Learning the most powerful models requires repeated intractable inference, and approximate inference often interacts badly with parameter optimization. At inference time, an accurate but intractable model can effectively become an inaccurate one due to approximate inference. All these problems would be avoided if we learned only tractable models, but the standard tractable model classes - like thin junction trees and mixture models - are insufficiently expressive for most applications. In recent years, however, a series of surprisingly expressive tractable model classes has been developed. Many of these are based on the sum-product theorem, including sum-product networks, bounded-inference graphical models, feature trees, and tractable Markov logic. Others exploit symmetry, submodularity, global interaction, and other ideas. We will give an overview of these representations, algorithms for learning them, and their startling successes in challenging applications in vision, natural language, and other areas.
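
As a toy illustration of why such models are tractable (our own Python sketch, not code from the tutorial), here is a miniature sum-product network: sum nodes are mixtures, product nodes factor over disjoint variables, so any marginal is computed by a single bottom-up pass, with marginalized leaves evaluating to 1.

    # Each node is a function from partial evidence to a probability value.

    def leaf(var, prob):
        # p(var = value); if var is absent from the evidence, it is
        # marginalized out and the leaf evaluates to 1.
        def f(evidence):
            if var not in evidence:
                return 1.0
            return prob if evidence[var] == 1 else 1.0 - prob
        return f

    def product_node(*children):
        # Children must cover disjoint sets of variables (decomposability).
        def f(evidence):
            result = 1.0
            for c in children:
                result *= c(evidence)
            return result
        return f

    def sum_node(weights, *children):
        # A mixture of its children, with weights summing to 1.
        def f(evidence):
            return sum(w * c(evidence) for w, c in zip(weights, children))
        return f

    # Mixture of two product distributions over binary variables X1, X2.
    spn = sum_node([0.6, 0.4],
                   product_node(leaf("X1", 0.9), leaf("X2", 0.2)),
                   product_node(leaf("X1", 0.1), leaf("X2", 0.7)))

    print(spn({"X1": 1, "X2": 0}))  # joint p(X1=1, X2=0) = 0.444
    print(spn({"X1": 1}))           # marginal p(X1=1) = 0.58, same single pass
    print(spn({}))                  # normalization check: 1.0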

BIO

Pedro Domingos is Professor of Computer Science and Engineering at the University of Washington. His research interests are in artificial intelligence, machine learning, and data mining. He received a PhD in Information and Computer Science from the University of California, Irvine, and is the author or co-author of over 200 technical publications. He is a member of the editorial board of the Machine Learning journal, co-founder of the International Machine Learning Society, and past associate editor of JAIR. He was program co-chair of KDD-2003 and SRL-2009, and has served on numerous program committees. He is an AAAI Fellow and has received several awards, including a Sloan Fellowship, an NSF CAREER Award, a Fulbright Scholarship, an IBM Faculty Award, and best paper awards at several leading conferences.

Daniel Lowd is an Assistant Professor in the Department of Computer and Information Science at the University of Oregon. His research interests include learning and inference with probabilistic graphical models, adversarial machine learning, and statistical relational machine learning. He maintains Libra, an open-source toolkit for Learning and Inference in Bayesian networks, Random fields, and Arithmetic circuits, and recently received a Google Faculty Award for related work on learning tractable models.


Avi Pfeffer

Charles River Analytics

Probabilistic Programming

Abstract

Probabilistic reasoning is a principled and widely used approach to machine learning. Probabilistic reasoning lets you use a model to predict future events, infer events that may have led to observations, and use past experience to improve future predictions. Probabilistic programming uses programming languages to represent probabilistic models, providing rich data structures and control flow, including user-defined functions and recursion. This provides the power to represent an extremely wide range of models. Probabilistic programming systems provide inference and learning algorithms that apply automatically to models written in the language. The combination of high expressivity and general-purpose inference algorithms makes representing and reasoning about new kinds of models much easier.

In this tutorial, I will explain the principles of probabilistic programming using the functional programming paradigm, including the way a program in a functional language can be used to represent a probabilistic model. I will present examples using our Figaro probabilistic programming system, showing the diverse representational capabilities of probabilistic programming. I will also discuss some of the inference and learning algorithms used in probabilistic programming, including both factor-based and sampling algorithms.
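
The tutorial's examples use Figaro, which is embedded in Scala; purely to illustrate the underlying idea, here is a hypothetical Python sketch in which the model is an ordinary program that makes random choices, and a generic inference routine (rejection sampling) conditions it on observations.

    import random

    def flip(p):
        return random.random() < p

    def burglary_model():
        # A generative program: running it forward samples a possible world
        # from the joint distribution.
        burglary = flip(0.01)
        earthquake = flip(0.02)
        alarm = flip(0.95) if (burglary or earthquake) else flip(0.01)
        return {"burglary": burglary, "alarm": alarm}

    def rejection_query(model, observe, query, n=200000):
        # Generic inference: applies to any model written as a Python function.
        accepted = kept = 0
        for _ in range(n):
            world = model()
            if observe(world):
                accepted += 1
                kept += query(world)
        return kept / accepted

    # P(burglary | alarm): condition the program on the observation (about 0.25).
    print(rejection_query(burglary_model,
                          observe=lambda w: w["alarm"],
                          query=lambda w: w["burglary"]))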

BIO

Avi Pfeffer is currently Principal Scientist and Division Director of Research at Charles River Analytics. Avi is a leading researcher on a variety of computational intelligence techniques including probabilistic reasoning, machine learning, and computational game theory. Avi has developed numerous innovative probabilistic representation and reasoning frameworks. He is the lead designer of Charles River Analytics’ Figaro probabilistic programming language. As an Associate Professor at Harvard, he developed IBAL, the first general-purpose probabilistic programming language. While at Harvard, he also produced systems for representing, reasoning about, and learning the beliefs, preferences, and decision making strategies of people in strategic situations. Prior to joining Harvard, he co-invented object-oriented Bayesian networks and probabilistic relational models, which are part of the foundation of the field of statistical relational learning. Avi serves as Action Editor of the Journal of Machine Learning Research and served as Associate Editor of Artificial Intelligence Journal and as Program Chair of the Conference on Uncertainty in Artificial Intelligence. Avi received his Ph.D. in computer science from Stanford University and his B.A. in computer science from the University of California, Berkeley.


Guy Van den Broeck

University of California, Los Angeles

Dan Suciu

University of Washington

Lifted Probabilistic Inference in Relational Models

Abstract

This tutorial explains the core ideas behind lifted probabilistic inference in statistical relational learning (SRL) and extensional query evaluation in probabilistic databases (PDBs). Both fields deal with relational representations of uncertainty and have realized that efficient inference is an enormous challenge. Both fields have also achieved remarkable results developing efficient algorithms for tasks previously thought to be intractable.

SRL and PDBs have very recently started to connect through the common language of relational logic. We now understand their commonalities and differences. Typical inference tasks are different in nature, yet can be captured in the same weighted model counting framework. Theoretical complexity bounds from one field translate to the other. This tutorial covers several of these developments.

Within SRL, we cover weighted model counting encodings of graphical models and Markov logic, the realization that symmetries enable tractable, lifted inference, as well as a deeper understanding of the theoretical strengths and limitations of lifting.
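
A minimal Python sketch of the symmetry argument (our own toy example, not tutorial material): all groundings of an exchangeable atom behave identically, so a lifted computation sums over the number of true atoms, weighted by binomial coefficients, instead of enumerating the 2^n possible worlds.

    from itertools import product
    from math import comb

    # n exchangeable ground atoms R(c1), ..., R(cn), with weight w_true per
    # true atom and w_false per false atom. Task: weighted model count of
    # the sentence "at least one R(c) is true".
    n, w_true, w_false = 15, 1.5, 1.0

    def brute_force_wmc():
        # Propositional approach: enumerate all 2^n worlds.
        total = 0.0
        for world in product([0, 1], repeat=n):
            if any(world):
                weight = 1.0
                for atom in world:
                    weight *= w_true if atom else w_false
                total += weight
        return total

    def lifted_wmc():
        # Lifted approach: every world with k true atoms has the same weight,
        # and there are C(n, k) such worlds. Polynomial (here linear) in n.
        return sum(comb(n, k) * w_true**k * w_false**(n - k)
                   for k in range(1, n + 1))

    print(lifted_wmc())       # instant, even for very large n
    print(brute_force_wmc())  # same value, but enumerates 2^15 worlds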

Within PDBs, we study the data complexity of probabilistic inference, where the FO formula is fixed and one asks for the complexity of the problem as a function of the size of the database. In particular, we discuss a dichotomy theorem for the data complexity stating that, for a large class of queries, weighted model counting is either efficient or provably #P-hard.
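
To illustrate the efficient side of the dichotomy (again a toy Python sketch of ours): in a tuple-independent probabilistic database, a safe query such as the Boolean query "does R contain at least one tuple?" is evaluated extensionally, in time linear in the database, rather than over its exponentially many possible worlds. For unsafe queries, the dichotomy theorem says no such extensional plan exists and evaluation is #P-hard.

    # Tuple-independent probabilistic relation R: each tuple is present
    # independently with its own marginal probability.
    R = {"alice": 0.9, "bob": 0.5, "carol": 0.2}

    def prob_exists(relation):
        # Safe plan for Q = "exists x. R(x)":
        # P(Q) = 1 - prod over tuples of (1 - p_t), by tuple independence.
        p_empty = 1.0
        for p in relation.values():
            p_empty *= 1.0 - p
        return 1.0 - p_empty

    print(prob_exists(R))  # 1 - 0.1 * 0.5 * 0.8 = 0.96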

The tutorial will focus on the big ideas that set probabilistic inference with relations apart from more classical inference problems: why lifted algorithms work, which properties they exploit, and how they can reduce complexity, but also why inference is still hard in the worst case.

BIO

Dan Suciu is a Professor in Computer Science at the University of Washington. He received his Ph.D. from the University of Pennsylvania in 1995, was a principal member of the technical staff at AT&T Labs, and joined the University of Washington in 2000. Suciu conducts research in data management, with an emphasis on topics related to Big Data and data sharing, such as probabilistic data, data pricing, parallel data processing, and data security. He is a co-author of two books, Data on the Web: From Relations to Semistructured Data and XML (1999) and Probabilistic Databases (2011). He is a Fellow of the ACM, holds twelve US patents, received best paper awards at SIGMOD 2000 and ICDT 2013, the ACM PODS Alberto Mendelzon Test of Time Award in 2010 and 2012, and the 10 Year Most Influential Paper Award at ICDE 2013, and is a recipient of the NSF CAREER Award and of an Alfred P. Sloan Fellowship. Suciu serves on the VLDB Board of Trustees, is an associate editor for the VLDB Journal, ACM TOIS, ACM TWEB, and Information Systems, and is a past associate editor for ACM TODS. Suciu's PhD students Gerome Miklau and Christopher Re received the ACM SIGMOD Best Dissertation Award in 2006 and 2010, respectively, and Nilesh Dalvi was a runner-up in 2008.

Guy Van den Broeck obtained his Ph.D. in Computer Science from KU Leuven, Belgium, in 2013. He was a postdoctoral researcher in the automated reasoning lab at the University of California, Los Angeles in 2014. He currently works as a postdoctoral researcher at KU Leuven and is supported by the Research Foundation-Flanders. His research interests are in artificial intelligence, machine learning, logical and probabilistic automated reasoning and statistical relational learning. His work was awarded the Alcatel-Lucent Innovation Award in 2009 and the ILP Turing Theory Prize (best student paper) in 2011. He is co-organizing the 4th International Workshop on Statistical Relational AI (StarAI) at AAAI 2014.

Please visit our sponsors: Microsoft Research, the Artificial Intelligence Journal, Facebook, Google, Charles River Analytics, and IBM Research.