Workshop on Tractability / Programme

Monday, July 5

09:00  registration and coffee
09:50  welcome
10:00  John Hooker, Carnegie Mellon University, USA: Integrating Solution Methods through Duality
10:45  Peter Jeavons, University of Oxford, UK: Presenting Constraints
11:30  Georg Gottlob, University of Oxford, UK: Hypertree Decompositions
12:15  lunch
13:00  Joao Marques-Silva, University College Dublin, Ireland: Boolean Satisfiability Solving: Past, Present & Future
13:45  Vladimir Kolmogorov, University College London, UK: Scalable optimization techniques for certain graphical models
14:30  coffee
15:15  Dániel Marx, Tel Aviv University, Israel: Fixed-Parameter Algorithms
16:00  Lakhdar Saïs, CRIL Lens, France: Structure-based simplification techniques of Boolean formulas
16:45  PhD student session
17:15  reception
19:30  dinner at Queen's College (invited speakers only)

Tuesday, July 6

08:30  coffee
09:00  Andreas Krause, California Institute of Technology, USA: Submodular Optimization in Machine Learning and AI
09:45  Nikolaj Bjørner, Microsoft Research, USA: Engineering Satisfiability Modulo Theories solvers for intractable problems
10:30  coffee
11:00  Jakob Nordström, MIT and KTH, Sweden: Understanding Space in Proof Complexity
11:45  Tony Jebara, Columbia University, USA: Graphical Modeling and Machine Learning with Perfect Graphs
12:30  lunch
13:15  Paul Vitanyi, CWI and Universiteit van Amsterdam, The Netherlands: Introduction to Kolmogorov complexity and applications
14:15  Miki Hermann, Ecole Polytechnique, France: What Makes Minimal Inference Tractable
15:00  Nadia Creignou, LIF Marseille, France: Phase transition for the satisfiability of random (quantified) Boolean formulas
15:45  panel and discussions
17:30  end of the workshop

PhD Session

Thomas Windheuser, U. of Munich, Germany: Interactive Image Segmentation
Valentin Weber, G-SCOP lab, Grenoble, France: Instance hardness and hard instances for NP-hard problems
Dhruv Batra, CMU, USA: MAP Inference in Markov Random Fields via Outer-Planar Decomposition
Caterina Vitadello, U. of Munich, Germany: Human Motion Capture
Robert Woodward, U. of Nebraska-Lincoln, USA: Integrating Higher Levels of Consistency in Solvers to Uncover Tractability of CSPs
Danny Tarlow, U. of Toronto, Canada: Efficient message passing in certain high order models

Nikolaj Bjørner: Engineering Satisfiability Modulo Theories solvers for intractable problems

We give a view from the trenches of writing theorem-proving tools for industrial applications. On the one hand, we often encounter verification problems arising from industrial applications that require straightforward and small resolution proofs using only a very small core of the problem as presented to the theorem prover. The challenge is then to quickly identify the interesting subset that is relevant for verification and to avoid searching the abundant irrelevant space. Modern SMT solvers incorporate several techniques for speeding up search in these scenarios. They also include specialized theory solvers that allow solving problems using efficient procedures, as opposed to general procedures for intractable classes. One example in the context of Z3 is combinatory array logic, which offers an NP-complete satisfiability problem in contrast to existing NEXPTIME-complete settings of array properties. Unfortunately, this turns out not to be the entire story. On the other hand, intractability shows up in critical applications, where even problems that can be formulated quite succinctly are too hard for current techniques (based on modern variants of the DPLL algorithm, which find propositional resolution proofs). We give an overview of some approaches to beefing up DPLL with the ability to introduce new literals to simulate Frege proofs, and of our experiences to date.

Bio: Nikolaj Bjørner is working with Leonardo de Moura on the state-of-the-art SMT constraint solver Z3. Z3 is used for program verification and test-case generation. He also manages the Foundations of Software Engineering group at Microsoft Research. Until 2006, he was in the Core File Systems group, where he designed and implemented the core of DFS-R, which is included as part of Windows Server 2003 R2, Windows Live Messenger (Sharing Folders), and Vista Meeting Space. He also designed some of the chunking utilities used in the remote differential compression protocol RDC.

Nadia Creignou: Phase transition for the satisfiability of random (quantified) Boolean formulas

A significant tool for SAT research has been the study of random instances. In the last decades, numerous experimental studies have provided strong evidence that the difficulty of solving large instances of k-SAT is tightly linked to a phase transition in the probability that a random instance is satisfiable. As the clauses-to-variables ratio increases, the vast majority of formulas abruptly stop being satisfiable at a critical threshold point. The instances that are hard to solve seem to be located around this critical point.

Determining the nature of the phase transition, locating it, and determining a precise scaling window turn out to be challenging tasks, which have aroused a lot of interest. Recently there has been a growth of interest in a powerful generalization of Boolean satisfiability, namely the satisfiability of quantified Boolean formulas (QBFs). A sharp phase transition from satisfiability to unsatisfiability for random QBFs has similarly been observed and might relate to computational hardness.

I will give some introductory material on random instances and discuss the techniques that have been used for the study of random formulas. I will explain why the study of quantified formulas raises new challenges, focusing on specific problems for which first results have been obtained.
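
To make the clauses-to-variables ratio concrete, here is a rough illustrative sketch (not part of the talk): it estimates the probability that a random 3-SAT instance is satisfiable at a given ratio by brute force on small instances. The instance sizes, sample counts, and helper names are arbitrary choices for illustration; empirically, the 3-SAT transition is observed near a ratio of about 4.27.

    import random
    from itertools import product

    def random_3sat(n_vars, n_clauses, rng):
        """A random 3-CNF formula: each clause picks 3 distinct variables and
        negates each one with probability 1/2.  A literal is (variable, is_positive)."""
        formula = []
        for _ in range(n_clauses):
            variables = rng.sample(range(n_vars), 3)
            formula.append([(v, rng.random() < 0.5) for v in variables])
        return formula

    def satisfiable(formula, n_vars):
        """Brute-force satisfiability check -- only feasible for small n_vars."""
        return any(
            all(any(assignment[v] == positive for v, positive in clause) for clause in formula)
            for assignment in product([False, True], repeat=n_vars)
        )

    def sat_probability(n_vars, ratio, samples=100, seed=0):
        """Estimate Pr[a random 3-SAT instance is satisfiable] at a given clause/variable ratio."""
        rng = random.Random(seed)
        n_clauses = round(ratio * n_vars)
        hits = sum(satisfiable(random_3sat(n_vars, n_clauses, rng), n_vars) for _ in range(samples))
        return hits / samples

    if __name__ == "__main__":
        # The estimated probability drops sharply near ratio ~4.27 as n_vars grows.
        for ratio in (3.0, 4.0, 4.27, 5.0, 6.0):
            print(ratio, sat_probability(n_vars=12, ratio=ratio))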

Bio: Nadia Creignou is Professor of Computer Science at the University of Aix-Marseille, France. She received her Habilitation degree from the University of Caen in 1999. She is interested in computational complexity and in random discrete structures. Most of her own research has focused on Boolean constraint satisfaction problems and reasoning.

Georg Gottlob: Hypertree Decompositions

One of the best-known methods for decomposing graphs is the method of tree decompositions introduced by Robertson and Seymour. Many NP-hard problems become polynomially solvable if restricted to instances whose underlying graph structure has bounded treewidth. The notion of treewidth can be straightforwardly extended to hypergraphs by simply considering the treewidth of their primal graphs or, alternatively, of their incidence graphs. However, doing so comes with a loss of information on the structure of a hypergraph, with the effect that many polynomially solvable problems cannot be recognized as such because the treewidth of the underlying hypergraphs is unbounded. In particular, the treewidth of the class of acyclic hypergraphs is unbounded. In this talk, I will describe more appropriate measures of hypergraph acyclicity and, in particular, the method of hypertree decompositions and the associated concept of hypertree width. After giving general results on hypertree decompositions, I will show how they can be applied to optimization problems such as winner determination in combinatorial auctions.

Bio: Georg Gottlob is a Professor of Computing Science at Oxford University. His research interests are in database theory (in particular, query languages), Web information processing, AI, and computational logic. Gottlob received his undergraduate and Ph.D. degrees in Computer Science from TU Vienna, Austria, in 1979 and 1981, respectively. Before he moved to Oxford in 2006, he was a Professor of Computer Science at TU Vienna (since 1988). Before that, he was affiliated with the Italian National Research Council in Genoa, Italy. He was also a Research Associate and lecturer at the Politecnico di Milano and Stanford University, and held visiting positions at Paris VII and at Berkeley.

Gottlob has received the Wittgenstein Award from the Austrian National Science Fund. He is an ACM Fellow and an ECCAI Fellow, and a member of the Austrian Academy of Sciences, the German National Academy of Sciences Leopoldina, and Academia Europaea (the European Academy of Sciences, London). He chaired the Program Committees of IJCAI 2003 and ACM PODS 2000. He co-founded the Lixto software company (www.lixto.com), which offers software and services for Web data extraction. Gottlob recently won an ERC Advanced Investigators Grant in the area of Web data extraction.

Miki Hermann: What Makes Minimal Inference Tractable

We study the complexity of the propositional minimal inference problem. Although the complexity of this problem has already been studied extensively because of its fundamental importance in nonmonotonic logics and commonsense reasoning, no complete classification of its complexity had been found. We classify the complexity of four different and well-studied formalizations of the problem in the version with unbounded queries, proving that the complexity of the minimal inference problem for each of them exhibits a trichotomy (between P, coNP-complete, and Π₂P-complete). One of these results settles, in the affirmative, the trichotomy conjecture of Kirousis and Kolaitis [A dichotomy in the complexity of propositional circumscription, LICS'01]. In the process we also strengthen, and give a much simplified proof of, the main result from [Durand and Hermann, The inference problem for propositional circumscription of affine formulas is coNP-complete, STACS'03].

Bio: Miki Hermann is a Chargé de recherche CNRS. He has been with the Computer Science Laboratory (LIX) at Ecole Polytechnique since 2002. From 1988 until 2002 he was a CNRS researcher at the LORIA laboratory in Nancy. He received his habilitation from the University Henri Poincaré in Nancy in 1994. His research interests encompass complexity - mainly counting complexity, complexity in automated deduction, and the complexity of non-monotonic reasoning - and constraint satisfaction problems.

John Hooker: Integrating Solution Methods through Duality

Nearly all successful solution methods for optimization and constraint solving can be viewed as primal-dual methods. Several kinds of duality have been used in optimization, but all are instances of two general schemes: a duality of search and inference, and a duality of search and relaxation. These two dualities interact in primal-dual methods and provide a conceptual framework for unifying these methods. We provide a series of examples to illustrate this principle and to show how it can lead to new and effective methods that combine solution techniques from mathematical programming, constraint programming, global optimization, and local search heuristics.

Bio: John Hooker is Professor of Operations Research and Holleran Professor of Business Ethics at Carnegie Mellon University. He has published over 130 articles and ten books in these two areas and in cross-cultural management. He is an INFORMS Fellow and a recipient of the INFORMS Computing Society Prize. He is an area editor for the INFORMS Journal on Computing and founding editor-in-chief of the Journal of Business Ethics Education. In 2009-2011 he is a part-time visiting professor at the London School of Economics.

Peter Jeavons: Presenting Constraints

I will describe the constraint satisfaction problem and show that it unifies a very wide variety of computational problems. I will discuss the techniques that have been used to analyse the complexity of different forms of constraint satisfaction problem, focusing on the algebraic approach, explaining the basic ideas and highlighting some of the recent results in this area.

Bio: Peter Jeavons is Professor of Computer Science and Director of Teaching at the Computing Laboratory of the University of Oxford. He is interested in algorithms and computational complexity. Most of his own research has focused on constraint satisfaction problems, and he leads a research group on Constraints jointly with David Cohen at Royal Holloway, University of London.

He is also interested in computational biology, especially bioinformatics.

Tony Jebara: Graphical Modeling and Machine Learning with Perfect Graphs

Many graphical modeling and learning problems (such as maximum a posteriori estimation and marginal inference) are NP-hard in general. Similarly, many combinatorial problems on graphs are NP-hard, including maximum clique, maximum weight independent set, and graph coloring. However, a family of graphs known as perfect graphs (which generalizes trees) admits exact solutions in polynomial time. In particular, solutions to these problems on perfect graphs are possible via linear programming, semidefinite programming, constraint programming (weighted max-SAT solvers), and message passing, and are potentially highly efficient. We discuss how machine learning can exploit the perfect graph family for various practical problems as well.

Bio: Tony Jebara is Associate Professor of Computer Science at Columbia University and co-founder of Sense Networks. He directs the Columbia Machine Learning Laboratory, whose research intersects computer science and statistics to develop new frameworks for learning from data with applications in vision, networks, spatio-temporal data, and text. Jebara has published over 75 peer-reviewed papers in conferences and journals including NIPS, ICML, UAI, COLT, JMLR, CVPR, ICCV, and AISTATS. He is the author of the book Machine Learning: Discriminative and Generative and co-inventor on multiple patents in vision, learning, and spatio-temporal modeling. In 2004, Jebara received the CAREER award from the National Science Foundation. His work was recognized with a best paper award at the 26th International Conference on Machine Learning, a best student paper award at the 20th International Conference on Machine Learning, as well as an honorable mention from the Pattern Recognition Society in 2000. Jebara's research has been featured on television (ABC, BBC, New York One, TechTV, etc.) as well as in the popular press (New York Times, Slashdot, Wired, Businessweek, IEEE Spectrum, etc.). He obtained his PhD in 2002 from MIT. Recently, Esquire magazine named him one of their Best and Brightest of 2008. Jebara's lab is supported in part by the NSF, CIA, NSA, DHS, and ONR.

Vladimir Kolmogorov: Scalable optimization techniques for certain graphical models

In the first part of the talk I will discuss an efficient implementation of Edmonds' perfect matching algorithm. It can be used, in particular, for solving the planar maxcut problem, and on difficult cases achieves a speed-up of several orders of magnitude over the previous state of the art.

In the second part I will consider the problem of minimizing functions represented as a sum of individual terms. In the case of pairwise terms, maxflow-based techniques have proved very successful, allowing one to compute a global minimum for submodular functions and a part of an optimal solution for general functions. I will discuss extensions of these techniques to higher-order terms.
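
To illustrate the maxflow connection mentioned above, here is a minimal sketch (mine, not the speaker's code) of the classical reduction of a pairwise submodular binary energy to an s-t minimum cut, using networkx for the maxflow computation; the function and variable names are assumptions made for this example.

    import networkx as nx

    def minimize_submodular_pairwise(unary, pairwise):
        """Globally minimize E(x) = sum_i unary[i][x_i] + sum_(i,j) pairwise[(i,j)][x_i][x_j]
        over binary labels x_i in {0,1}, assuming every pairwise table t is submodular:
        t[0][0] + t[1][1] <= t[0][1] + t[1][0].  Uses the classical reduction to s-t min-cut."""
        G = nx.DiGraph()
        G.add_nodes_from(["s", "t"])
        const = 0.0

        def add_edge(u, v, cap):
            if cap <= 0:
                return
            if G.has_edge(u, v):
                G[u][v]["capacity"] += cap
            else:
                G.add_edge(u, v, capacity=cap)

        def add_unary(i, w):
            # Contributes w * x_i to the energy (w may be negative).
            nonlocal const
            if w >= 0:
                add_edge("s", i, w)        # this edge is cut iff x_i = 1
            else:
                const += w
                add_edge(i, "t", -w)       # this edge is cut iff x_i = 0

        for i, (u0, u1) in unary.items():
            const += u0
            add_unary(i, u1 - u0)

        for (i, j), t in pairwise.items():
            A, B, C, D = t[0][0], t[0][1], t[1][0], t[1][1]
            const += A
            add_unary(i, C - A)
            add_unary(j, D - C)
            add_edge(i, j, B + C - A - D)  # >= 0 by submodularity; cut iff x_i = 0, x_j = 1

        cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
        labels = {i: (0 if i in source_side else 1) for i in unary}
        return const + cut_value, labels

    if __name__ == "__main__":
        # Two-variable example: unary preferences plus a smoothness term favouring equal labels.
        unary = {0: (0.0, 2.0), 1: (1.5, 0.0)}         # unary[i] = (cost of label 0, cost of label 1)
        pairwise = {(0, 1): [[0.0, 1.0], [1.0, 0.0]]}  # Ising-like, submodular
        print(minimize_submodular_pairwise(unary, pairwise))  # minimum energy 1.0 at x = (0, 1)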

Bio: Vladimir Kolmogorov received an MS degree in Applied Mathematics and Physics from the Moscow Institute of Physics and Technology and a PhD degree in Computer Science from Cornell University. After spending two years as an Associate Researcher at Microsoft Research, Cambridge, he joined University College London as a Lecturer. Vladimir's research is in the area of computer vision. More specifically, he focuses on graphical models, optimization algorithms for Markov Random Fields, and their applications to image segmentation, stereo, and other vision problems. His papers received the best paper award at ECCV 2002, a best paper honourable mention at CVPR 2005, and an outstanding student paper honourable mention (to M. Pawan Kumar) at NIPS 2007. He holds a Royal Academy of Engineering/EPSRC Research Fellowship.

Andreas Krause: Submodular Optimization in Machine Learning and AI

In recent years, a fundamental problem structure has emerged as very useful in a variety of machine learning and AI problems: submodularity. Submodularity is an intuitive diminishing-returns property, stating that adding an element to a smaller set helps more than adding it to a larger set.

Applications where this property has proven useful include active learning and experimental design, informative path planning, multi-agent coordination, inference, structure learning, clustering, influence maximization and ranking. Similarly to convexity, submodularity allows one to efficiently find provably (near-)optimal solutions.

In this tutorial, I will give an introduction to the concept of submodularity, discuss algorithms for optimizing submodular functions and -- as the main focus -- illustrate their usefulness in solving difficult machine learning and AI problems, with a special focus on active information gathering tasks.
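
As a small illustration of the diminishing-returns property and of provable near-optimality (mine, not taken from the tutorial): the classical greedy algorithm, which for monotone submodular functions under a cardinality constraint reaches at least a (1 - 1/e) fraction of the optimum (Nemhauser, Wolsey and Fisher); the coverage function and names below are made up for the example.

    def greedy_submodular_maximize(f, ground_set, k):
        """Greedy selection of (at most) k elements to maximize a set function f.
        For monotone submodular f this is guaranteed to reach at least a
        (1 - 1/e) fraction of the best achievable value."""
        selected = set()
        for _ in range(min(k, len(ground_set))):
            # Pick the element with the largest marginal gain f(S + e) - f(S).
            e = max(ground_set - selected, key=lambda e: f(selected | {e}) - f(selected))
            selected.add(e)
        return selected

    if __name__ == "__main__":
        # Toy sensor-placement example: f(S) = number of areas covered by the chosen sensors.
        coverage = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}, "s4": {1, 6}}
        f = lambda S: len(set().union(*[coverage[s] for s in S])) if S else 0
        print(greedy_submodular_maximize(f, set(coverage), k=2))  # covers all six areas, e.g. {'s1', 's3'}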

Bio: Andreas Krause is an assistant professor of Computer Science at the California Institute of Technology. He received his Ph.D. from Carnegie Mellon University in 2008. Krause is a recipient of the NSF CAREER award and the Okawa Foundation Research Grant recognizing top young researchers in telecommunications. His research on sensing and optimized information gathering received awards at several premier conferences, as well as the best research paper award of the ASCE Journal of Water Resources Planning and Management.

Joao Marques-Silva: Boolean Satisfiability Solving: Past, Present & Future

The development of Boolean Satisfiability (SAT) solvers is a success story in computer science. Despite SAT being NP-complete, modern solvers routinely solve instances with hundreds of thousands of variables and millions of clauses. Moreover, SAT finds an ever-increasing number of practical applications, and has been extended to richer languages that include soft constraints and Satisfiability Modulo Theories. This talk surveys the organization of modern SAT solvers, highlighting the most relevant algorithmic techniques and briefly covering techniques that are less effective in practice. In addition, the talk outlines representative research directions, including the reasons for the practical success of SAT solvers and the integration of tractability techniques.

Bio: Joao Marques-Silva is Stokes Professor of Computer Science and Informatics at University College Dublin (UCD), Ireland. Before moving to UCD, he held appointments at the University of Southampton, UK, and at the Technical University of Lisbon, Portugal. Joao Marques-Silva received a PhD degree in Electrical Engineering and Computer Science from the University of Michigan, Ann Arbor, in 1995, and the Habilitation degree in Computer Science from the Technical University of Lisbon, Portugal, in 2004. His research interests include algorithms for constraint solving and optimization, and applications in formal methods, artificial intelligence, and bioinformatics. He received the 2009 CAV award for fundamental contributions to the development of high-performance Boolean satisfiability solvers.

Dániel Marx: Fixed-Parameter Algorithms

Parameterized complexity is a relatively recent approach to dealing with hard computational problems. The main idea is that instead of expressing the running time as a function of the input size n alone, we analyze the running time as a multivariate function of the input size n and some parameter k of the input. Our goal is to develop algorithms that are efficient for small values of k, even if the input size n is large. The central notion of the area is fixed-parameter tractability (FPT): we say that a parameterized problem is FPT if it can be solved in time f(k)·n^c, where f is an arbitrary (typically exponential) function of the parameter k and c is a constant. In other words, if a problem is FPT, then the combinatorial explosion can be confined to the parameter k: for small fixed values of k, the problem can be solved efficiently, since the running time depends only polynomially on the input size.

It turns out that a wide range of interesting problems are FPT. Moreover, there is a very rich and interesting theory behind fixed-parameter tractability, and a powerful toolkit of techniques has been developed over the past 15 years. In my talk, I will present examples of graph-theoretic and constraint satisfaction problems that can be handled using these techniques.
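
As a concrete example of the f(k)·n^c pattern (my illustration, not part of the talk): the textbook bounded-search-tree algorithm for k-Vertex Cover runs in O(2^k · (n + m)) time, so the exponential growth depends only on the parameter k.

    def has_vertex_cover(edges, k):
        """Bounded search tree for k-Vertex Cover.  Pick any remaining edge (u, v);
        every cover must contain u or v, so branch on the two choices.
        The recursion depth is at most k, giving O(2^k * (n + m)) time overall."""
        if not edges:
            return True                # all edges covered
        if k == 0:
            return False               # edges remain but the budget is spent
        u, v = edges[0]
        remaining_without_u = [e for e in edges if u not in e]
        remaining_without_v = [e for e in edges if v not in e]
        return has_vertex_cover(remaining_without_u, k - 1) or has_vertex_cover(remaining_without_v, k - 1)

    if __name__ == "__main__":
        cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]    # a 5-cycle
        print(has_vertex_cover(cycle5, 2), has_vertex_cover(cycle5, 3))  # False True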

Bio: Dániel Marx received a PhD from the Budapest University of Technology and Economics in 2005. He was then a research fellow at the Hungarian Academy of Sciences and a postdoc at Humboldt-Universität zu Berlin. He is currently a postdoc at Tel Aviv University.

The main focus of his research is parameterized complexity and fixed-parameter tractability. His other interests include constraint satisfaction problems, graph coloring, algorithmic graph theory, combinatorial optimization and computational complexity. He is an author of more than 50 research papers published in journals and conferences.

He has served as a program committee member of several conferences, including ICALP, ESA, LATIN, and STACS. He is a member of the steering committee of the International Symposium on Parameterized and Exact Computation (IPEC).

Jakob Nordström: Understanding Space in Proof Complexity 

In recent years, deciding if a CNF formula is satisfiable has gone from a theoretical question to a practical approach for solving real-world problems. For current state-of-the-art satisfiability algorithms, typically based on resolution and clause learning, the two main bottlenecks are the amounts of time and memory space used. Understanding the time and memory consumption of SAT solvers, and how these resources are related to one another, is therefore a question of great interest.

Roughly a decade ago, it was asked whether proof complexity had anything intelligent to say about this question, which corresponds to the interplay between the size and the space of proofs. In this talk, I will explain how this question can be answered almost completely by combining two tools, namely good old pebble games on graphs, studied extensively in the 70s and 80s, and a new, somewhat surprising, theorem showing how weak space lower bounds can be amplified to strong bounds simply by making variable substitutions in the corresponding CNF formulas.

Apart from giving a deeper theoretical understanding of the relations between time and space in resolution, our work also raises the question of whether the space complexity of CNF formulas might be an adequate measure of their tractability for resolution-based SAT solvers.

Bio: Jakob Nordstrom received his Master of Science in Computer Science and Mathematics at Stockholm University in 2001, and his PhD in Computer Science at the Royal Institute of Technology (KTH) in 2008. He currently is a postdoctoral researcher at the Massachusetts Institute of Technology (MIT).

In 2006 he received the best student paper award at the 38th ACM Symposium on Theory of Computing (STOC'06), and his PhD thesis received the Ackermann Award 2009 for “outstanding dissertations in Logic in Computer Science” from the European Association for Computer Science Logic.

In 1997-1998, Jakob Nordstrom served as a military interpreter at the Swedish Armed Forces Language Institute, graduating as the best student of the 1998 class. In parallel with his studies and later research, he has worked as a Russian interpreter, engaged among others for His Majesty the King of Sweden and the Swedish Prime Minister. He also has a Diploma in Choir Conducting with extended Music Theory from the Tallinn Music Upper Secondary School, Estonia. During the period 1994-1999, he was the artistic director of Collegium Vocale Stockholm, a vocal ensemble performing mainly Renaissance and Baroque music.

Lakhdar Saïs: Structure-based simplification techniques of Boolean formulas

In this talk we present several contributions to the problem of simplifying Boolean formulas, which is one of the key components of SAT solvers. After a brief overview of the most important work in this area, we describe two kinds of approaches that combine the exploitation of the structure of the Boolean formula with restricted forms of inference. The first category of simplification techniques is based on the recognition and exploitation of Boolean functions, while the second exploits the tractable parts of the formula and/or a limited form of deduction based on unit propagation. Finally, we conclude with some issues related to the theme of the tractability workshop.
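
For readers unfamiliar with the rule, here is a minimal sketch (my illustration, not the speaker's code) of unit propagation: repeatedly take a clause that has been reduced to a single literal, assign that literal, and simplify the formula; the representation and names are arbitrary.

    def unit_propagate(clauses):
        """Clauses are lists of non-zero ints (DIMACS-style literals: v or -v).
        Returns (simplified_clauses, forced_assignment), or (None, assignment) on a conflict."""
        assignment = {}
        clauses = [list(c) for c in clauses]
        while True:
            unit = next((c[0] for c in clauses if len(c) == 1), None)
            if unit is None:
                return clauses, assignment
            assignment[abs(unit)] = unit > 0
            new_clauses = []
            for c in clauses:
                if unit in c:
                    continue                              # clause satisfied, drop it
                reduced = [l for l in c if l != -unit]    # remove the falsified literal
                if not reduced:
                    return None, assignment               # empty clause: conflict
                new_clauses.append(reduced)
            clauses = new_clauses

    if __name__ == "__main__":
        # (x1) & (-x1 v x2) & (-x2 v x3 v x4): propagation forces x1 = x2 = True.
        print(unit_propagate([[1], [-1, 2], [-2, 3, 4]]))  # ([[3, 4]], {1: True, 2: True})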

Bio: Lakhdar Saïs is a professor at CRIL, a CNRS research center of the Université Lille Nord de France. He currently leads the inference and decision process group at CRIL. His research focuses on search and representation problems in Artificial Intelligence, including propositional satisfiability, quantified Boolean formulas, constraint satisfaction problems, and knowledge representation and reasoning.

Paul Vitanyi: Introduction to Kolmogorov complexity and applications

We treat the central ideas and applications of Kolmogorov complexity, a modern notion of randomness dealing with the quantity of information in individual objects. Kolmogorov complexity is known variously as 'algorithmic information', 'algorithmic entropy', 'Kolmogorov-Chaitin complexity', 'descriptional complexity', 'shortest program length', 'algorithmic randomness', and by other names.

We give an introductory treatment of the subject with a selection from a wide range of illustrative applications. Such applications include the randomness of individual finite objects or infinite sequences, the compressibility method illustrated by a general lower bound on average-case Shellsort (a 40-year open problem), and a novel and successful notion for determining the similarity of objects (such as genomes of species, leading to the correct phylogeny).

We leave out here: universal probability, general inductive reasoning, inductive inference, prediction, mistake bounds, computational learning theory, inference in statistics, combinatorics, time and space complexity of computations, average case analysis of algorithms such as HEAPSORT, language recognition, string matching, formal language and automata theory, Turing machine complexity, lower bound proof techniques, probability theory, structural complexity theory, oracles, logical depth, universal optimal search, physics and computation, dissipationless reversible computing, information distance and picture similarity, bioinformatics...
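
The compression-based similarity measure alluded to above is the normalized compression distance (NCD) of Cilibrasi and Vitányi, which replaces the uncomputable Kolmogorov complexity with the length achieved by a real compressor. A small sketch, with zlib as the (arbitrary) choice of compressor and made-up example strings:

    import zlib

    def C(data: bytes) -> int:
        """Compressed length -- a computable stand-in for Kolmogorov complexity."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance:
        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
        Values near 0 mean 'very similar'; values near 1 mean 'unrelated'."""
        cx, cy, cxy = C(x), C(y), C(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    if __name__ == "__main__":
        a = b"the quick brown fox jumps over the lazy dog " * 20
        b = b"the quick brown fox leaps over the lazy cat " * 20
        c = bytes(range(256)) * 4
        print(ncd(a, b))   # small: the two texts share most of their structure
        print(ncd(a, c))   # close to 1: little shared structure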

See the papers on http://homepages.cwi.nl/~paulv/kolmcompl.html and the webpage http://homepages.cwi.nl/~paulv .

Bio: Paul M.B. Vitanyi received his Ph.D. from the Free University of Amsterdam (1978). He holds a position at the national research institute CWI in Amsterdam and is Professor of Computer Science at the University of Amsterdam. He serves on the editorial boards of Distributed Computing, Information Processing Letters, Theory of Computing Systems, Parallel Processing Letters, the International Journal of Foundations of Computer Science, the Journal of Computer and System Sciences (guest editor), and elsewhere. He has worked on cellular automata, computational complexity, distributed and parallel computing, machine learning and prediction, the physics of computation, Kolmogorov complexity, and quantum computing. Together with Ming Li he pioneered applications of Kolmogorov complexity and co-authored "An Introduction to Kolmogorov Complexity and its Applications" (Springer-Verlag, New York, 1993; 2nd edition 1997), parts of which have been translated into Chinese, Russian, and Japanese.