Planning Under Uncertainty: Challenges and Recent Progress

Planning under uncertainty, typically modeled with Markov Decision Processes (MDPs), has long been considered a core skill of an intelligent agent. It describes scenarios in which an agent must make a sequence of decisions in a dynamic environment, possibly while trying to reach a goal, and optimize some characteristic of the chosen course of action (e.g., the expected time it takes or the amount of money it brings in). The importance of such problems is not merely philosophical: MDPs underlie many highly impactful real-world applications of computer science, from inventory management to military operations planning to landing damaged aircraft. Their list of success stories continues to grow as they are employed in new settings, such as workflow control in crowdsourcing and user-aware scheduling of maintenance jobs on computing devices.

Nonetheless, the experience of applying MDPs to realistic scenarios has highlighted two serious challenges for this model. First, modern MDP solution algorithms are still not scalable enough to handle many practically sized problems. Second, the MDP classes we know how to solve tend to be restrictive, making unnatural assumptions about the modeled settings; violating these assumptions breaks the existing solution techniques. The research I have carried out during my Ph.D. and will present in this talk advances the state of the art in probabilistic planning by addressing both of these issues.

In the first part of the presentation, I will show how to devise highly scalable approximation algorithms for goal-oriented MDPs by combining two major planning paradigms, dimensionality reduction and deterministic relaxation. The resulting approaches automatically extract human-understandable causal structure from the MDP at hand and use it to efficiently compute a good policy. Besides enabling us to handle larger scenarios, these algorithms bring us closer to an ideal of AI: autonomous recognition of the features important for solving a problem. The second part of the talk will be devoted to extensions of existing MDP classes that remove their restrictive assumptions. I will explain the unusual mathematical properties of these more general MDPs and describe efficient optimal algorithms for them. The talk will conclude with a discussion of my ongoing work on solving MDPs with nonstandard but practically very useful optimization criteria.
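
For readers unfamiliar with the formalism, the sketch below shows plain value iteration on a tiny goal-oriented MDP (stochastic shortest-path style). It is purely illustrative background, not one of the scalable algorithms discussed in the talk; the toy domain, state names, and function names are all hypothetical.

```python
# Minimal value iteration sketch for a goal-oriented MDP (illustrative only).
# Each action maps to (cost, distribution over successor states); the goal
# state has cost-to-go 0 by definition.
TRANSITIONS = {
    "s0": {
        "safe":  (1.0, {"s1": 1.0}),               # cost 1, surely reaches s1
        "risky": (0.5, {"goal": 0.6, "s0": 0.4}),  # cheaper, but may loop back
    },
    "s1": {
        "finish": (1.0, {"goal": 1.0}),
    },
}
GOAL = "goal"


def value_iteration(transitions, goal, epsilon=1e-6):
    """Compute expected cost-to-goal V(s) and a greedy policy."""
    values = {s: 0.0 for s in transitions}
    values[goal] = 0.0
    while True:
        residual = 0.0
        for s, actions in transitions.items():
            # Bellman backup: V(s) = min_a [ cost(s,a) + sum_s' P(s'|s,a) V(s') ]
            best = min(
                cost + sum(p * values.get(s2, 0.0) for s2, p in dist.items())
                for cost, dist in actions.values()
            )
            residual = max(residual, abs(best - values[s]))
            values[s] = best
        if residual < epsilon:
            break
    policy = {
        s: min(
            actions,
            key=lambda a: actions[a][0]
            + sum(p * values.get(s2, 0.0) for s2, p in actions[a][1].items()),
        )
        for s, actions in transitions.items()
    }
    return values, policy


if __name__ == "__main__":
    v, pi = value_iteration(TRANSITIONS, GOAL)
    print(v)   # expected cost-to-goal for each state
    print(pi)  # greedy action chosen in each state
```

On this toy domain the "risky" action is preferred in s0 despite the chance of looping, since its expected cost-to-goal (about 0.83) beats the deterministic route (cost 2). The scalability challenge the talk addresses arises because, in realistic problems, the state space is far too large to enumerate in a table like this.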

Speaker Details

Andrey Kolobov is a Ph.D. student in the Department of Computer Science at the University of Washington, Seattle, advised by Professors Mausam and Daniel S. Weld. His research concentrates primarily on designing new mathematical models and scalable domain-independent algorithms for planning under uncertainty, but has also touched on other areas, such as transfer learning, active learning, first-order probabilistic languages, and information retrieval. Andrey has authored multiple works in top AI venues, including a recent book on Markov Decision Processes, an exceptional-paper-track publication at AAAI-2010, the best paper at the ICAPS-2009 Doctoral Consortium, and a runner-up entry at the 2011 International Probabilistic Planning Competition. He is a recipient of the Outstanding Program Committee Member Award at AAAI-2012. Andrey holds a double B.A. degree in Computer Science and Applied Mathematics from the University of California, Berkeley.

Date:
Speakers: Andrey Kolobov
Affiliation: University of Washington