UAI 2013 - Invited Speakers
UAI 2013 is pleased to announce the following invited speakers:
Yoky Matsuoka
VP of Technology at Nest Labs
Banquet Talk. 7:00pm - Friday July 12th
Talk Title: Research and Consumer Products
Biographical details
Yoky Matsuoka is VP of Technology at Nest Labs (www.nest.com). She received her Ph.D. from MIT in Electrical Engineering and Computer Science in the fields of Artificial Intelligence and Computational Neuroscience. She received an M.S. from MIT and a B.S. from UC Berkeley, both in EECS. She was also a Postdoctoral Fellow in Brain and Cognitive Sciences at MIT and in Mechanical Engineering at Harvard University. She was the Torode Family Endowed Career Development Professor of Computer Science and Engineering at the University of Washington, Director of the NSF ERC Center for Sensorimotor Neural Engineering, and the Ana Loomis McCandless Professor of Robotics and Mechanical Engineering at Carnegie Mellon University. Her work has been recognized with a MacArthur Fellowship and acclaimed as one of “The Brilliant Ten” in Popular Science Magazine, “Top 10 Women to Watch in 2010” by Barbie, and “Power 25” in Seattle Magazine.
Tom M. Mitchell
Machine Learning Department
Carnegie Mellon University
Keynote Talk 1. 8:40am - Friday July 12th
Never-Ending Learning
We will never really understand learning until we can build machines that learn many different things, over years, and become better learners over time.
This talk describes our research to build a Never-Ending Language Learner (NELL) that runs 24 hours per day, forever, learning to read the web. Each day NELL extracts (reads) more facts from the web and integrates these into its growing knowledge base of beliefs. Each day NELL also learns to read better than yesterday, enabling it to go back to the text it read yesterday and extract more facts, more accurately, today. NELL has been running 24 hours/day for over two years now. The result so far is a collection of 40 million interconnected beliefs (e.g., servedWith(coffee, applePie), isA(applePie, bakedGood)) that NELL is considering at different levels of confidence, along with hundreds of thousands of learned phrasings, morphological features, and web page structures that NELL uses to extract beliefs from the web. Track NELL's progress at rtw.ml.cmu.edu.
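As a toy illustration of the idea of holding candidate facts at different levels of confidence and promoting them as repeated extractions accumulate, here is a minimal Python sketch. The class, threshold, and update rule are invented for illustration; they are not NELL's actual machinery.

```python
# Illustrative sketch only (not NELL's implementation): a knowledge base of
# candidate beliefs, each held at a confidence level, with candidates
# promoted once repeated extractions push confidence past a threshold.

class KnowledgeBase:
    def __init__(self, promote_at=0.9):
        self.promote_at = promote_at
        self.confidence = {}   # belief -> confidence in [0, 1]
        self.beliefs = set()   # promoted (high-confidence) beliefs

    def observe(self, relation, arg1, arg2, evidence=0.3):
        """Record one extraction of a candidate fact from text."""
        belief = (relation, arg1, arg2)
        prior = self.confidence.get(belief, 0.0)
        # Each extraction leaves less room for doubt (noisy-or update).
        self.confidence[belief] = 1.0 - (1.0 - prior) * (1.0 - evidence)
        if self.confidence[belief] >= self.promote_at:
            self.beliefs.add(belief)

kb = KnowledgeBase()
for _ in range(7):  # the same fact read on several different pages
    kb.observe("servedWith", "coffee", "applePie")
kb.observe("isA", "applePie", "bakedGood")  # seen only once so far

print(("servedWith", "coffee", "applePie") in kb.beliefs)  # True
print(("isA", "applePie", "bakedGood") in kb.beliefs)      # False
```

After seven independent readings the confidence 1 - 0.7^7 ≈ 0.92 crosses the promotion threshold, while a fact read once remains a low-confidence candidate.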
Biographical details
Tom M. Mitchell founded and chairs the Machine Learning Department at Carnegie Mellon University, where he is the E. Fredkin University Professor. His research uses machine learning to develop computers that are learning to read the web, and uses brain imaging to study how the human brain understands what it reads. Mitchell is a member of the U.S. National Academy of Engineering, a Fellow of the American Association for the Advancement of Science (AAAS), and a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). He believes the field of machine learning will be the fastest growing branch of computer science during the 21st century. Mitchell's web page is www.cs.cmu.edu/~tom.
Ralf Herbrich
Amazon
Keynote Talk 2. 8:30am - Saturday July 13th
Bayesian Learning in Online Services: Statistics Meets Systems
Over the past few years, we have entered the world of big and structured data - a trend largely driven by the exponential growth of Internet-based online services such as Search, eCommerce and Social Networking as well as the ubiquity of smart devices with sensors in everyday life. This poses new challenges for statistical inference and decision-making as some of the basic assumptions are shifting:
- The ability to optimize both the likelihood and loss functions
- The ability to store the parameters of (data) models
- The level of granularity and 'building blocks' in the data modeling phase
- The importance of priors vs. likelihoods
- The interplay of computation, storage, communication and inference and decision-making techniques
In this talk, I will discuss the implications of big and structured data for Statistics and the convergence of statistical models and distributed systems. I will present one of the most versatile modeling techniques combining systems and statistical properties - factor graphs - and review a series of approximate inference techniques such as distributed message passing and Gibbs sampling. I will conclude with an overview of real-world problems at Amazon.
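As a small, self-contained illustration of the techniques named above, here is a toy factor graph over two binary variables with one pairwise factor, sampled with Gibbs sampling. The model and potentials are invented for illustration; this is not Amazon's infrastructure, and real systems distribute such computations across machines.

```python
# Toy factor graph (illustration only): two binary variables x1, x2 and a
# single pairwise factor favoring agreement, sampled via Gibbs sampling.
import random

random.seed(0)

def factor(x1, x2):
    """Unnormalized potential: states where x1 == x2 are twice as likely."""
    return 2.0 if x1 == x2 else 1.0

def gibbs(n_samples, burn_in=100):
    x = [0, 0]
    samples = []
    for step in range(burn_in + n_samples):
        for i in (0, 1):
            other = x[1 - i]
            # Conditional P(x_i | x_other) is proportional to the factor.
            p1 = factor(1, other) if i == 0 else factor(other, 1)
            p0 = factor(0, other) if i == 0 else factor(other, 0)
            x[i] = 1 if random.random() < p1 / (p0 + p1) else 0
        if step >= burn_in:
            samples.append(tuple(x))
    return samples

samples = gibbs(5000)
agree = sum(1 for s in samples if s[0] == s[1]) / len(samples)
# Exact marginal probability of agreement: (2 + 2) / (2 + 2 + 1 + 1) = 2/3
print(round(agree, 2))
```

The empirical agreement rate should settle near the exact value 2/3, which can be read off by normalizing the four states' potentials.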
Ralf is Director of Machine Learning Science at Amazon in Berlin, Germany. In 2011, he worked at Facebook, leading the Unified Ranking and Allocation team, which focused on building horizontal large-scale machine learning infrastructure for learning user-action-rate predictors, enabling unified value experiences across the company's products. Ralf joined Microsoft Research in 2000 as a Postdoctoral Researcher and Research Fellow of Darwin College, Cambridge. From 2006 to 2010, together with Thore Graepel, he led the Applied Games and Online Services and Advertising group, which engaged in research at the intersection of machine learning and computer games and in the areas of online services, search and online advertising, combining insights from machine learning, information retrieval, game theory, artificial intelligence and social network analysis. From 2009 to 2011, he was Director of Microsoft's Future Social Experiences (FUSE) Lab UK, working on the development of computational intelligence technologies on large online data collections.
Prior to joining Microsoft, Ralf worked at the Technical University of Berlin as a teaching assistant, where he obtained both a diploma degree in Computer Science in 1997 and a Ph.D. in Statistics in 2000. Ralf's research interests include Bayesian inference and decision making, computer games, kernel methods and statistical learning theory. Ralf is one of the inventors of the Drivatars™ system in the Forza Motorsport series as well as the TrueSkill™ ranking and matchmaking system in Xbox 360 Live. He also co-invented the adPredictor click-prediction technology launched in 2009 in Bing's online advertising system.
Joshua B. Tenenbaum
Massachusetts Institute of Technology
Keynote Talk 3. 8:30am - Sunday July 14th
Modeling common-sense scene understanding with probabilistic programs
To see is, famously, to “know what is where by looking”. Yet to see is also to know what will happen, what can be done, and what is being done - to detect not only objects and their locations, but the physical dynamics governing how objects in the scene interact with each other and how agents can act on them, and the psychological dynamics governing how intentional agents in the scene interact with these objects and each other to achieve their goals. I will talk about recent efforts to capture these core aspects of human common-sense scene understanding in computational models, which can also be used for building more human-like machine vision and reasoning systems. These models of intuitive physics and intuitive psychology take the form of *probabilistic programs*: probabilistic generative models defined not over graphs, as in many current machine learning and vision models, but over programs whose execution traces describe the causal processes giving rise to the behavior of physical objects and intentional agents. Common-sense physical and psychological scene understanding can then be characterized as approximate Bayesian inference over these probabilistic programs. We study how this approach can solve a wide range of problems including inferring scene structure from images, predicting physical dynamics and inferring latent physical attributes from static images or short movies, and reasoning about the goals and beliefs of agents from observations of short action traces.
Joshua B. Tenenbaum received his Ph.D. in 1993 from MIT in the Department of Brain and Cognitive Sciences, where he is currently Professor of Computational Cognitive Science as well as a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL). He studies learning, reasoning and perception in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. He and his collaborators have pioneered accounts of human cognition based on sophisticated probabilistic models, with a recent emphasis on probabilistic programming formalisms. They have also developed several novel machine learning algorithms inspired by human learning. Their papers have received awards at several conferences, including the IEEE Computer Vision and Pattern Recognition (CVPR) conference, Neural Information Processing Systems (NIPS), the Annual Meeting of the Cognitive Science Society, Uncertainty in AI (UAI), the International Joint Conference on Artificial Intelligence (IJCAI), and the International Conference on Development and Learning (ICDL). He is the recipient of early career awards from the Society for Mathematical Psychology, the Society of Experimental Psychologists, and the American Psychological Association, along with the Troland Research Award from the National Academy of Sciences.