Workshops will run 9am - 5pm in two different hotels (details below).


Workshops:
Causal Structure Learning (at Catalina Canyon Resort)
www.stat.washington.edu/tsr/uai-causal-structure-learning-workshop/
StarAI — Statistical Relational AI (at Catalina Canyon Resort)
http://tsi.wfubmc.edu/labs/strait/StaRAI/starai.html
The Uncertainty in Natural Intelligence (at Catalina Country Club)
cocosci.berkeley.edu/uai2012/
9th Bayesian Modeling Applications (at Catalina Country Club)
www.abnms.org/uai2012-apps-workshop

Workshop 1: Causal Structure Learning

www.stat.washington.edu/tsr/uai-causal-structure-learning-workshop/

Determining causal relationships from observations and experiments is fundamental to human reasoning, decision making and the advancement of science. The aim of this workshop is to bring together researchers interested in the challenges of causal structure learning from observational and experimental data especially when latent or confounding variables may be present.

Topics related to causal structure learning will be explored through a set of tutorials, invited talks, spotlight presentations and a poster session.

Example Topics:

  • Causal structure learning via regularization/priors.
  • Using non-parametric/generalized independence constraints for structure learning in the presence of confounding.
  • Methods exploiting linearity and additivity.
  • Approaches for learning structure from deterministic data.
  • Algorithms for learning from overlapping datasets.
  • Methods for combining experimental and observational data.
  • Procedures for selecting experiments.
  • Methods for predicting the effects of interventions in an observed system.
  • Efficient data structures and algorithms for structure search.
  • Methods for analyzing causal pathways.
  • Applications of causal structure learning to real-world datasets.
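
To make one of the topics above concrete, here is a minimal Python sketch of independence-constraint-based structure learning in the spirit of the PC algorithm. It recovers the skeleton of a causal graph from synthetic linear-Gaussian data using a Fisher-z partial-correlation test; the sample size, significance level and maximum conditioning-set size are illustrative choices, and a full implementation would also orient edges and handle latent confounders.

  # PC-style skeleton search: start from a complete undirected graph and delete
  # an edge whenever a conditional-independence test accepts independence given
  # some subset of the current neighbours (linear-Gaussian assumption).
  import itertools
  import math
  import numpy as np

  def fisher_z_independent(data, i, j, cond):
      """True if X_i and X_j test as independent given X_cond (5% level)."""
      sub = np.corrcoef(data[:, [i, j] + list(cond)], rowvar=False)
      prec = np.linalg.inv(sub)
      r = -prec[0, 1] / math.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
      r = min(max(r, -0.999999), 0.999999)
      z = 0.5 * math.log((1 + r) / (1 - r))
      return math.sqrt(data.shape[0] - len(cond) - 3) * abs(z) < 1.96

  def pc_skeleton(data, max_cond=2):
      n = data.shape[1]
      adj = {v: set(range(n)) - {v} for v in range(n)}
      for size in range(max_cond + 1):
          for i, j in itertools.combinations(range(n), 2):
              if j not in adj[i]:
                  continue
              for cond in itertools.combinations(adj[i] - {j}, size):
                  if fisher_z_independent(data, i, j, cond):
                      adj[i].discard(j)
                      adj[j].discard(i)
                      break
      return adj

  # Synthetic chain X0 -> X1 -> X2: the X0 - X2 edge should be removed,
  # because X0 and X2 are independent given X1.
  rng = np.random.default_rng(0)
  x0 = rng.normal(size=5000)
  x1 = 0.8 * x0 + rng.normal(size=5000)
  x2 = 0.8 * x1 + rng.normal(size=5000)
  print(pc_skeleton(np.column_stack([x0, x1, x2])))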

Organizers:

Dominik Janzing - MPI for Intelligent Systems, Tuebingen
Marloes Maathuis - ETH Zurich
Chris Meek - Microsoft Research
Thomas Richardson - University of Washington
Peter Spirtes - Carnegie Mellon University
Jin Tian - Iowa State

Workshop 2: StarAI — Statistical Relational AI

http://tsi.wfubmc.edu/labs/strait/StaRAI/starai.html

Much has been achieved in the field of AI, "the science and engineering of making intelligent machines" as McCarthy defines it, yet much remains to be done if we are to reach the goals we all imagine. One of the key challenges in moving ahead is closing the gap between logical and statistical AI. Logical AI has mainly focused on complex representations, and statistical AI on uncertainty. Clearly, however, intelligent machines must be able to handle both the complexity and the uncertainty of the real world. It is difficult, if not impossible, to describe the world naturally in terms of features or a fixed number of objects and relations among them. Instead, we should assume that the world is made up of a variable number of objects that interrelate in a noisy way. This is also witnessed by the increasing presence of relational data: information mega-networks, linked open data and triple stores, heterogeneous bibliographic, organizational, and social networks, drug-disease-gene interactions, molecules, and human behaviour, among others. For these cases, graphs are not enough to encode probabilistic and decision-theoretic models: we need relational or logical models.
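
As a toy illustration of what such a relational model looks like, the Python sketch below grounds a single weighted rule, friends(X, Y) and smokes(X) implies smokes(Y), in the style of Markov logic; the three-person domain, the weight and the example world are invented for illustration. Even this one rule yields a set of propositional factors that grows with the size of the domain, which is precisely why lifted, relational representations and inference are attractive.

  import itertools
  import math

  people = ["anna", "bob", "chris"]        # variable-sized domain of objects
  w = 1.5                                  # weight attached to the rule below
  # Rule: friends(X, Y) and smokes(X)  =>  smokes(Y)

  def groundings():
      """All propositional instances of the rule over the current domain."""
      return [(x, y) for x, y in itertools.product(people, repeat=2) if x != y]

  def world_weight(smokes, friends):
      """Unnormalized weight exp(w * #satisfied groundings) of one world."""
      satisfied = sum(
          1 for x, y in groundings()
          if not (friends[(x, y)] and smokes[x]) or smokes[y]   # implication
      )
      return math.exp(w * satisfied)

  # One concrete world: anna and bob are friends, and only anna smokes.
  smokes = {"anna": True, "bob": False, "chris": False}
  friends = {(x, y): {x, y} == {"anna", "bob"} for x, y in groundings()}
  print(len(groundings()), "ground clauses; world weight", world_weight(smokes, friends))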

The goal of the StarAI workshop is to reach out to UAI and to explore what might be called Statistical Relational AI. How do we deal with continuous variables, or even hybrid models? How do we handle deterministic knowledge such as integrity constraints? How do we perform inference efficiently within relational models: by message passing, cutting planes or dual decomposition? By knowledge compilation? By lifting? And what is lifting? Are there lifted junction trees? Are there lifted MCMC inference approaches? Should we consider sum-product networks, or stay with factor-graph-like structures or weighted CNFs? How is lifting related to symmetry-breaking approaches for solving (PO)MDPs, CSPs, and ILPs? How do we lift learning? Can we actually learn to lift models? Is lifting beneficial for models traditionally considered propositional, such as Ising and Potts models? What about distributed StarAI and probabilistic programming? How do we deal with relational environments evolving over time? Which AI tasks are likely to benefit from statistical relational methods? What are the killer applications? Is it computer vision or bio-medicine? Is it the world-wide mind and "Machines reading X"? Or even declarative networking? Generally speaking, which lessons can StarAI learn from other, related communities, and for which problems faced by other communities is StarAI most promising?

In the tradition of the highly successful StarAI workshop at AAAI-2010 and the many SRL workshops and Dagstuhl seminars of the past, StarAI-2012 will again be the premier venue for bringing together the different sub-disciplines that focus on probabilistic relational methods for AI. We invite researchers from all subfields of UAI to attend the workshop and to explore together how to reach the goals imagined by the early AI pioneers.

Organizers:

Henry Kautz - University of Rochester
Kristian Kersting - Fraunhofer
Sriraam Natarajan - Wake Forest University
David Poole - University of British Columbia

Workshop 3: The Uncertainty in Natural Intelligence

cocosci.berkeley.edu/uai2012/

Some of the hardest problems in artificial intelligence, such as feature and concept learning, are solved seemingly effortlessly by people. These are problems of inductive inference, which are difficult because many solutions are consistent with the information explicitly given in the problem (e.g., solving ab = 2 for the value of a without being given any additional information).

People solve problems of inductive inference by favoring solutions that are consistent with their prior knowledge and penalizing solutions that are inconsistent with those beliefs. Bayesian inference provides a formal calculus for how people should update their prior belief in each solution in light of their observations. Prior beliefs are formulated as a probability distribution over the unobserved solutions. This methodology has provided a successful paradigm for exploring formal accounts of how people solve inductive problems.
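
As a minimal, self-contained illustration of this calculus, the Python sketch below applies Bayes' rule to the under-determined ab = 2 example above: the hypothesis space is an explicit grid of candidate (a, b) pairs, the observation is the constraint itself, and the prior encodes a purely illustrative preference for integer-valued solutions.

  from fractions import Fraction

  # Hypotheses: candidate (a, b) pairs on a small rational grid (0.25 ... 8).
  grid = [Fraction(n, 4) for n in range(1, 33)]
  hypotheses = [(a, b) for a in grid for b in grid]

  def prior(a, b):
      """Prior knowledge: integer-valued solutions are deemed more plausible."""
      return (10.0 if a.denominator == 1 else 1.0) * (10.0 if b.denominator == 1 else 1.0)

  def likelihood(a, b):
      """The observed constraint is a * b = 2; inconsistent pairs get zero."""
      return 1.0 if a * b == 2 else 0.0

  # Bayes' rule: posterior(h) is proportional to prior(h) * likelihood(data | h).
  unnorm = {h: prior(*h) * likelihood(*h) for h in hypotheses}
  total = sum(unnorm.values())
  posterior = {h: p / total for h, p in unnorm.items() if p > 0}

  for (a, b), p in sorted(posterior.items(), key=lambda kv: -kv[1])[:5]:
      print(f"a = {a}, b = {b}: posterior {p:.3f}")

In this toy setup the integer pairs (1, 2) and (2, 1) receive most of the posterior mass, mirroring the idea that people favor solutions consistent with their prior knowledge.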

Using Bayesian inference to formally represent human solutions to inductive problems not only provides a computational explanation of human behavior, but also offers novel methods for solving difficult problems in artificial intelligence. In this workshop, we present recent computational successes in human learning as a source of new artificial intelligence algorithms by exploiting the common computational language of these two communities, probability theory. This workshop is a forum for researchers in artificial intelligence, machine learning, and human learning, all interested in the same inductive problems, to discuss computational methodologies, insights, and research questions. We hope to foster a dialogue that leads to a greater understanding of human learning and further unites these two areas of research.

Organizers:

Joseph Austerweil - UC Berkeley
Noah D. Goodman - Stanford
Tom Griffiths - UC Berkeley
Joshua Tenenbaum - MIT

Workshop 4: 9th Bayesian Modeling Applications

www.abnms.org/uai2012-apps-workshop

Special theme: Temporal Modeling

The 9th Bayesian Modeling Applications Workshop solicits submissions of real-world applications of graphical models and Bayesian networks, in particular those dealing with temporal modeling. Our desire is to foster discussion and interchange about novel contributions that can speak both to academics and to the larger research community. Accordingly, we seek submissions from practitioners and tool developers as well as researchers.

Bayesian networks are now a powerful, well-established technology for reasoning under uncertainty, supported by a wide range of mature academic and commercial software tools. They are being applied in many domains, including environmental and ecological modeling, bioinformatics, medical decision support, many types of engineering, robotics, military applications, financial and economic modeling, education, forensics, emergency response, and surveillance. We welcome submissions describing such real-world applications, whether as stand-alone BNs or with the BNs embedded in a larger software system. We encourage authors to address the practical issues involved in developing real-world applications, such as knowledge engineering methodologies, elicitation techniques, defining and meeting client needs, validation processes and integration methods, as well as software tools to support these activities.

We particularly encourage the submission of papers that address the workshop theme of temporal modeling. Recently, the communities building dynamic Bayesian networks (DBNs) and partially observable MDPs (POMDPs) have come to realize that they are applying their methods to identical applications. Similarly, POMDPs and other probabilistic methods are now established in the field of Automated Planning. Stochastic process models such as continuous-time Bayesian networks (CTBNs) should also be considered part of this trend, and adaptive and online learning models fit into this focus as well.
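
As a small illustration of the temporal theme, the Python sketch below runs exact filtering in the simplest possible dynamic Bayesian network: one binary state observed through a noisy sensor, updated with the standard predict-update recursion. The transition and observation probabilities and the alarm sequence are invented for illustration.

  states = ("ok", "faulty")

  transition = {            # P(state_t | state_{t-1})
      "ok":     {"ok": 0.95, "faulty": 0.05},
      "faulty": {"ok": 0.10, "faulty": 0.90},
  }
  observation = {           # P(alarm_t | state_t)
      "ok":     {True: 0.05, False: 0.95},
      "faulty": {True: 0.80, False: 0.20},
  }

  def filter_step(belief, alarm):
      """One predict-update step of the forward (filtering) recursion."""
      predicted = {s: sum(belief[p] * transition[p][s] for p in states) for s in states}
      updated = {s: predicted[s] * observation[s][alarm] for s in states}
      total = sum(updated.values())
      return {s: v / total for s, v in updated.items()}

  belief = {"ok": 0.99, "faulty": 0.01}     # prior belief at time 0
  for t, alarm in enumerate([False, True, True], start=1):
      belief = filter_step(belief, alarm)
      print(f"t={t}  alarm={alarm}  P(faulty)={belief['faulty']:.3f}")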

The submission deadline for papers is Saturday 19th May. The format of the workshop will be a combination of oral and poster presentations, with demonstrations encouraged for both, grouped to facilitate discussion.

The contact email address for the chairs is bmaw2012@abnms.org.

Organizers:

John Mark Agosta - Toyota ITC, USA
Ann Nicholson - Monash University & Bayesian Intelligence, Australia
M. Julia Flores - University of Castilla-La Mancha, Spain