Summer School on Programming Languages, Analysis, and Verification 2008

The following are abstracts of talks presented at the Microsoft Research India Summer School on Programming Languages, Analysis, and Verification, which took place June 16–28, 2008, at the Indian Institute of Science, Bangalore, India.

Taming Concurrency: A Program Verification Perspective

Speaker: Shaz Qadeer, Microsoft Research

Concurrency, as a basic primitive for software construction, is more relevant today than ever before, primarily due to the multi-core revolution. General-purpose software applications must find ways to exploit concurrency explicitly in order to take advantage of multiple cores. However, experience has shown that explicitly parallel programs are difficult to get right. To deliver compelling software products in the multi-core era, we must improve our ability to reason about concurrency. Generalizing well-known sequential reasoning techniques to concurrent programs is fundamentally difficult because of the possibility of interference among concurrently executing tasks. In these lectures, I will present reduction and context-bounding, two ideas that alleviate the difficulty of reasoning about interference by creating a simpler view of a concurrent execution than an interleaving of thread instructions. Reduction reduces interference by making a sequence of instructions in a thread appear as a single atomic instruction; it allows the programmer to view an execution as a sequence of large atomic instructions. Context-bounding reduces interference by focusing on executions with a small number of context switches; it allows the programmer to view an execution as a sequence of large thread contexts. I will present the theory behind these two approaches and their realization in a number of program verification tools I have developed over the years.
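To make the payoff of context-bounding concrete, the following sketch enumerates the interleavings of two threads, keeping only schedules with at most a given number of context switches. The thread names and instruction labels are hypothetical; this is only an illustration of how sharply the bound cuts down the schedules a verifier must consider, not any tool's actual algorithm.

```python
def interleavings(t1, t2, max_switches=None):
    """Enumerate interleavings of two instruction lists. If max_switches
    is given, keep only schedules with at most that many context
    switches (changes of the running thread)."""
    results = []

    def go(i, j, schedule):
        if i == len(t1) and j == len(t2):
            switches = sum(1 for a, b in zip(schedule, schedule[1:])
                           if a[0] != b[0])
            if max_switches is None or switches <= max_switches:
                results.append([instr for _, instr in schedule])
            return
        if i < len(t1):
            go(i + 1, j, schedule + [(1, t1[i])])
        if j < len(t2):
            go(i, j + 1, schedule + [(2, t2[j])])

    go(0, 0, [])
    return results

t1 = ["a1", "a2", "a3"]
t2 = ["b1", "b2", "b3"]
all_runs = interleavings(t1, t2)      # C(6,3) = 20 interleavings
bounded = interleavings(t1, t2, 1)    # at most one context switch
assert len(all_runs) == 20
assert len(bounded) == 2              # t1 then t2, or t2 then t1
```

With the bound set to 1, only two "large thread context" schedules survive out of twenty, which is exactly the simplified view of an execution that the abstract describes.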

Programming-Language Motivation, Design, and Semantics for Software Transactions

Speaker: Dan Grossman, University of Washington

With multicore processors bringing parallel computing to the masses, there is an urgent need to make concurrent programming easier. Software transactions hold great promise for simplifying shared-memory concurrency, and they have received enormous attention from the research community over the last couple of years. This series of lectures will approach transactions from the perspective of modern, high-level programming languages. We will start by motivating why transactions make shared-memory concurrency easier, focusing on a deep analogy between transactional memory and garbage collection. We will then discuss the difference between strong isolation and weak isolation and the many surprising anomalies that arise with the latter. To better understand how to avoid such anomalies, we will define a family of formal languages using operational semantics and a type system that is sufficiently restrictive to ensure that strong and weak isolation are indistinguishable for well-typed programs. Returning to language design, we will discuss nontrivial interactions between transactions and several other language features, including exceptions, continuations, and thread creation. Finally, we will discuss memory consistency models and the largely open question of how they interact with transactions.
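The kind of anomaly that weak isolation permits can be seen in a toy simulation. Here a "transaction" moves 5 units from x to y while a non-transactional reader runs in the middle of it; under weak isolation the transaction's writes hit memory immediately, under strong isolation they are buffered until commit. This is a hand-rolled sketch of one fixed interleaving, not a real transactional memory.

```python
def run(isolation):
    """One fixed interleaving: tx does x -= 5; [reader runs]; y += 5.
    Under weak isolation writes are visible immediately; under strong
    isolation the non-transactional reader sees only committed state."""
    mem = {"x": 10, "y": 0}
    log = {}                      # buffered writes (strong isolation)

    def write(k, v):
        if isolation == "weak":
            mem[k] = v            # write in place: visible to everyone
        else:
            log[k] = v            # buffer until commit

    write("x", 10 - 5)            # first half of the transfer
    seen = mem["x"] + mem["y"]    # non-transactional reader, mid-tx
    write("y", 0 + 5)             # second half of the transfer
    mem.update(log)               # commit any buffered writes
    return seen

assert run("strong") == 10    # invariant x + y == 10 holds for the reader
assert run("weak") == 5       # anomaly: reader observes the broken invariant
```

The reader never executes inside a transaction, yet under weak isolation it observes a state no serial execution could produce, which is precisely the class of surprises the lectures examine.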

Though the implementation of efficient and scalable transactions is also important, we will not focus on any implementation details.

Improving Software Quality with Type Qualifiers

Speaker: Jeff Foster, University of Maryland at College Park

Software continues to increase in size and complexity, yet programmers have few practical tools and techniques to ensure its quality and reliability. In these lectures, we will explore a general technique based on type qualifiers that allows programmers to write annotations in their source code describing additional program properties that should be maintained during execution. Type qualifiers have a number of advantages as a mechanism for specifying and checking properties of programs: They are a simple, natural extension of the standard types present in typical programming languages; they support efficient inference algorithms, so that programmers need only write a few annotations and the rest are inferred automatically; and the results of type qualifier inference can be intuitively presented to the programmer to justify any errors that are detected.

In these lectures, we will discuss a range of issues from theoretical foundations to practical implementation concerns. Our starting point will be developing constraint-based type inference for the simply-typed lambda calculus, a core programming-language formalism. From there, we will build up to subtyping systems and study constraint-based polymorphic type inference; for type qualifiers in particular, we will study how to use context-free language reachability to implement polymorphism efficiently. We will discuss particular issues in adding type qualifiers to C, and explore applications, including inferring const in ANSI C; security-related qualifiers for finding format-string vulnerabilities and user/kernel vulnerabilities in Linux; and immutability inference in Java. If time permits, we will discuss some additional applications of constraint-based analysis techniques, including data race detection, and we will also discuss user interface issues in building practical static analysis tools.
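As a flavor of constraint-based qualifier inference, the sketch below works over a two-point lattice, untainted <= tainted, in the spirit of the format-string application mentioned above. A constraint (a, b) records that a's qualifier flows into b's; the least solution is found by propagating "tainted" to a fixed point. The variable names and the getenv/printf scenario are hypothetical, and this is far simpler than the polymorphic, CFL-reachability-based systems the lectures cover.

```python
def infer(constraints, seeds):
    """Least solution of a set of flow constraints (a, b) meaning
    qual(a) <= qual(b): start everything at 'untainted' and propagate
    'tainted' from the seeds along flow edges to a fixed point."""
    qual = {}
    for a, b in constraints:
        qual.setdefault(a, "untainted")
        qual.setdefault(b, "untainted")
    for s in seeds:
        qual[s] = "tainted"
    changed = True
    while changed:
        changed = False
        for a, b in constraints:
            if qual[a] == "tainted" and qual[b] != "tainted":
                qual[b] = "tainted"
                changed = True
    return qual

# getenv's result is attacker-controlled (tainted); it flows through a
# local s into fmt, which a printf-like sink requires to be untainted.
constraints = [("getenv_ret", "s"), ("s", "fmt")]
qual = infer(constraints, seeds={"getenv_ret"})
assert qual["fmt"] == "tainted"   # tainted data reaches the format string
```

A checker would now report an error, since the inferred qualifier on fmt exceeds what the sink allows; the chain of constraints that forced it tainted is exactly the intuitive justification presented to the programmer.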

Implementing Software Transactions

Speaker: Tim Harris, Microsoft Research, Cambridge

In this series of lectures I'll discuss the design and implementation of software transactions. This follows on from Dan Grossman's lectures on the motivation for transactions and on the design and semantics of programming language features based on them.

I'll start by reviewing the low-level synchronization primitives provided by different processors and discussing their performance characteristics: which operations are fast, which operations are slow, and what the differences are between multi-core computers and traditional shared-memory multi-processors.
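The central such primitive on most processors is compare-and-swap. The sketch below simulates its semantics in Python (a lock stands in for the hardware's atomicity, so this illustrates the programming pattern rather than the performance characteristics discussed in the lectures) and uses it in the classic retry loop behind lock-free counters.

```python
import threading

class Cell:
    """Simulates a machine word with compare-and-swap. Real CAS is a
    single hardware instruction; here a lock stands in for atomicity."""
    def __init__(self, value):
        self.value = value
        self._lock = threading.Lock()

    def cas(self, expected, new):
        """Atomically: if value == expected, set it to new; report success."""
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

def increment(cell):
    """The classic CAS retry loop: re-read and retry until no other
    thread has intervened between the read and the update."""
    while True:
        old = cell.value
        if cell.cas(old, old + 1):
            return

counter = Cell(0)
threads = [threading.Thread(
               target=lambda: [increment(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.value == 4000   # no lost updates despite the races
```

A plain `counter.value += 1` without the retry loop could lose updates under an adversarial interleaving; the CAS loop succeeds only when no other thread slipped in, which is why it preserves every increment.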

I'll then discuss the performance properties that we might want from transactional memory (or indeed from other shared memory data structures) -- straight-line speed, different notions of scalability, and different progress guarantees. I'll then look in some detail at a number of different implementations of transactional memory and we'll examine how they differ in the tradeoffs they make among these properties.
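One point in that design space is a lazy-versioning STM: transactions buffer their writes, record the versions of what they read, validate the read set at commit, and publish the write set only if validation succeeds. The sketch below is a single-threaded toy of that scheme (commit itself is not made atomic, and there is per-variable versioning only), intended to show the bookkeeping rather than any of the real implementations the lectures compare.

```python
class TVar:
    """A transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Conflict(Exception):
    pass

class Transaction:
    def __init__(self):
        self.reads = {}    # TVar -> version observed at first read
        self.writes = {}   # TVar -> buffered value

    def read(self, tv):
        if tv in self.writes:
            return self.writes[tv]        # see our own buffered write
        self.reads.setdefault(tv, tv.version)
        return tv.value

    def write(self, tv, value):
        self.writes[tv] = value           # lazy versioning: buffer it

    def commit(self):
        for tv, seen in self.reads.items():    # validate the read set
            if tv.version != seen:
                raise Conflict()
        for tv, value in self.writes.items():  # publish the write set
            tv.value = value
            tv.version += 1

x = TVar(1)
t1 = Transaction()
t1.write(x, t1.read(x) + 1)   # t1 reads x at version 0

t2 = Transaction()            # a conflicting transaction commits first
t2.write(x, 10)
t2.commit()                   # bumps x's version to 1

try:
    t1.commit()
    outcome = "committed"
except Conflict:
    outcome = "aborted"
assert outcome == "aborted" and x.value == 10
```

Because t2 committed between t1's read and t1's commit, validation sees a stale version and t1 aborts; an eager-versioning STM would instead write in place and keep an undo log, one of the tradeoffs the lectures examine.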

Finally, I'll talk about how this fits together with the rest of a programming language's implementation: how a compiler can build "atomic" blocks over a transactional memory implementation, what scope there is for compile-time optimization, and how the system's performance can benefit from integration between the STM and the garbage collector.

SMT Solvers in Program Analysis and Verification

Speaker: Nikolaj Bjorner, Microsoft Research, Redmond

Theorem proving technologies for checking satisfiability of logical formulas are crucial for many program analysis and verification applications. Of particular recent interest are solvers for Satisfiability Modulo Theories (SMT). We will describe how theorem provers, with emphasis on Microsoft’s Z3 solver, are used in program analysis and verification. Several concrete applications from program analysis at Microsoft will be presented. We will also give an introduction to the theory behind SMT solvers and to their design.
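At the core of an SMT solver such as Z3 sits a propositional SAT search, extended with solvers for theories like arithmetic and arrays. The sketch below implements only that propositional core, a tiny DPLL procedure with unit propagation and backtracking on CNF formulas (clauses are lists of nonzero integers, negative meaning negated). It is a teaching sketch, nothing like the engineering inside a production solver.

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool} for a CNF formula,
    or None if it is unsatisfiable."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                            # unit propagation
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                   # clause falsified: backtrack
            if len(unassigned) == 1:
                l = unassigned[0]             # forced ("unit") literal
                assignment[abs(l)] = l > 0
                changed = True
    vars_left = {abs(l) for c in clauses for l in c} - set(assignment)
    if not vars_left:
        return assignment                     # every clause satisfied
    v = min(vars_left)
    for value in (True, False):               # decide and recurse
        result = dpll(clauses, {**assignment, v: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3): satisfiable
model = dpll([[1, 2], [-1, 2], [-2, 3]])
assert model is not None and model[2] and model[3]
```

An SMT solver runs this kind of search over the boolean skeleton of a formula and consults theory solvers to reject boolean models whose theory atoms are jointly inconsistent, the DPLL(T) architecture covered in the lectures.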

Program Verification Using Separation Logic

Speaker: Hongseok Yang, UK

The incorrect use of pointers, such as null pointer dereferences and memory leaks, is one of the most common sources of program errors. In this course, we will survey separation logic, a recent extension of Hoare logic that addresses this problem. In particular, we will focus on how separation logic exploits informal disciplines used by programmers, thereby supporting formal but efficient verification of pointer programs. Topics of the course will include the frame rule, abstract predicates and data abstraction, concurrency, and automatic verification based on separation logic.
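The key connective behind the frame rule can be modeled concretely: heaps are finite maps from locations to values, and the separating conjunction h1 * h2 is defined only when the two heaps have disjoint domains. The sketch below models this semantics in Python (the location names and the dispose example are illustrative, not the course's formal development) and shows why a command acting on its local footprint cannot disturb the frame.

```python
def star(h1, h2):
    """h1 * h2: the union of two heaps if their domains are disjoint,
    otherwise undefined (None)."""
    if set(h1) & set(h2):
        return None
    return {**h1, **h2}

def points_to(loc, val):
    """The assertion loc |-> val denotes the singleton heap."""
    return {loc: val}

def dispose(heap, loc):
    """Deallocate loc; by the frame rule's side condition, it acts
    only on the cell in the command's footprint."""
    h = dict(heap)
    del h[loc]
    return h

# {x |-> 3} dispose(x) {emp}, framed with y |-> 4:
local = points_to("x", 3)
frame = points_to("y", 4)
heap = star(local, frame)
assert heap == {"x": 3, "y": 4}

after = dispose(heap, "x")
assert after == frame                 # the frame y |-> 4 is untouched

# * is a partial operation: overlapping heaps do not compose.
assert star(points_to("x", 1), points_to("x", 2)) is None
```

Because * insists on disjointness, proving {x |-> 3} dispose(x) {emp} on the small local heap immediately yields {x |-> 3 * y |-> 4} dispose(x) {y |-> 4} for free, which is the frame rule's local-reasoning payoff.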