Speakers and Abstracts

2012 Summer School on Concurrency
August 22–29, 2012 | St. Petersburg, Russia

Judith Bishop

Director, Computer Science
Microsoft Research, Redmond
United States

Dr. Judith Bishop is director of Computer Science at Microsoft Research. Her role is to create strong links between Microsoft’s research groups and universities globally, through encouraging projects, supporting conferences, and engaging directly in research. Her expertise is in programming languages and distributed systems, with a strong practical bias and an interest in compilers and design patterns. She initiated the Software Engineering Innovation Foundation (SEIF) and is currently working on a new way of running programs in browsers (especially F#).

Judith received her PhD from the University of Southampton. Before coming to Microsoft, she was a professor at the University of the Witwatersrand and the University of Pretoria, with visiting positions in the United Kingdom, Germany, Canada, Italy, and the United States. She has been a member of IFIP’s Working Group 2.4 (System Implementation Languages) since 1980 and chaired the group for six years. She was general co-chair of ICSE 2010, PC co-chair of TOOLS 2011, and serves frequently on editorial, program, and award committees. She has written 16 books, which have been translated into six languages, including Russian. Her awards include the IFIP Silver Core and Outstanding Service Award (2006) and the South African Department of Trade and Industry Distinguished Woman of the Year (2005).

Abstract

Concurrency in C# and Java: Understanding Why Languages Matter
.NET and Java are both platforms supporting object-oriented languages, designed for embedded systems and the Internet. As such, they include libraries for the concurrent execution of separate threads of control and for communication between computers via typed channels and various protocols. Simply using any of these libraries in either language will not, however, automatically cause a program to speed up on a multi-core machine. Very careful programming is required to avoid the classic errors: failing to maintain mutual exclusion over shared variables, or attempting to synchronize threads correctly only to run into deadlock. In this tutorial, we describe and discuss the advances made in the past few years on both the .NET and Java platforms in making concurrent and parallel programming easier for the programmer, and ultimately more efficient.
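
As a minimal illustration of the kind of error the tutorial targets, the following hypothetical Java sketch (our own names, not code from the lectures) contrasts an unprotected shared counter with one guarded by a lock:

```java
// Classic lost-update race: two threads increment a shared counter.
// Without synchronization the final unsafe count is usually less than 2,000,000.
public class RaceDemo {
    static int unsafeCount = 0;               // unprotected shared variable
    static int safeCount = 0;                 // protected by 'lock'
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                unsafeCount++;                // read-modify-write: not atomic
                synchronized (lock) {         // mutual exclusion restores atomicity
                    safeCount++;
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("unsafe: " + unsafeCount + ", safe: " + safeCount);
    }
}
```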

Sebastian Burckhardt

Researcher
Microsoft Research, Redmond
United States

Dr. Sebastian Burckhardt was born and raised in Basel, Switzerland, and studied mathematics at the local university. During an exchange year at Brandeis University, he discovered his affinity for computer science and immigrated to the United States, where he completed a second master's degree. After a few years of industry experience at IBM, he returned to academia and earned his PhD in Computer Science at the University of Pennsylvania. For the past four years, he has worked as a researcher at Microsoft Research in Redmond. His research interests revolve around the general problem of programming concurrent, parallel, and distributed systems conveniently, efficiently, and correctly. More specific interests include memory consistency models, concurrency testing, self-adjusting computation, and the concurrent revisions programming model.

Abstracts

Concurrent Revisions, Part I: Beyond a Linear History of State
How to write programs that read and mutate shared data in a parallel, concurrent, or distributed setting continues to present interesting and important challenges for developers and researchers. Our “concurrent revisions” project attacks this old problem using a new combination of techniques borrowed from source control systems and software transactional memory. In this first session we discuss the semantic foundations, namely revision diagrams and fork-join data types. We show how this model enables deterministic parallel programming, and how we can prove determinism formally. Finally, we report on our experiences with one application (a game) and how we were able to use concurrent revisions to parallelize conflicting tasks to improve performance.
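
As a rough, hypothetical sketch of the fork-join flavour of this model (all names are illustrative, not the project’s API), the following Java fragment forks a task on a snapshot of a variable and reconciles the copies at join with a cumulative merge, which makes the outcome independent of the interleaving:

```java
import java.util.concurrent.*;

// Toy sketch in the spirit of concurrent revisions (illustrative only).
// A forked task works on its own snapshot; at join, a deterministic merge
// (here: apply the fork's delta) reconciles the two copies.
public class RevisionSketch {
    public static void main(String[] args) throws Exception {
        int main = 10;                        // main revision's view
        final int snapshot = main;            // forked revision starts from a snapshot
        ExecutorService ex = Executors.newSingleThreadExecutor();
        Future<Integer> fork = ex.submit(() -> snapshot + 5); // fork mutates its copy
        main += 1;                            // main revision mutates independently
        int forked = fork.get();              // join point
        // Cumulative merge: main + (forked - snapshot); order-independent,
        // so the result is deterministic.
        main = main + (forked - snapshot);
        System.out.println(main);             // always 16
        ex.shutdown();
    }
}
```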

Concurrent Revisions, Part II: Beyond a Linear History of State
In this session, we introduce two new applications of the semantic foundations of concurrent revisions: incremental computation, and eventual consistency. First, we show how to add support for recording and repeating computations. Thus, we obtain a programming model that supports both parallel and incremental execution. Second, we show how to use revision diagrams and fork-join data types to implement eventual consistency in a distributed system. Finally, we show how eventually consistent storage can be made accessible to programmers, by directly integrating support for “cloud types” into a programming language, and demonstrate how we integrated this feature into the TouchDevelop programming language and programming environment.
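
As a deliberately simplified analogue of the eventual-consistency idea (TouchDevelop’s actual cloud types are richer than this), the following Java sketch keeps one contribution per device and merges replicas commutatively, so all replicas converge regardless of the order in which updates are exchanged:

```java
import java.util.*;

// Simplified sketch of an eventually consistent counter (hypothetical code).
// Each device tracks its own running total; merging takes, per device, the
// larger (newer) total, so merges commute and replicas converge.
public class CloudCounterSketch {
    final Map<String, Integer> contributions = new HashMap<>();

    void add(String deviceId, int n) {                 // local update
        contributions.merge(deviceId, n, Integer::sum);
    }
    void mergeFrom(CloudCounterSketch other) {         // anti-entropy merge
        other.contributions.forEach((d, n) ->
            contributions.merge(d, n, Integer::max));
    }
    int total() { return contributions.values().stream().mapToInt(Integer::intValue).sum(); }

    public static void main(String[] args) {
        CloudCounterSketch a = new CloudCounterSketch(), b = new CloudCounterSketch();
        a.add("deviceA", 2);
        b.add("deviceB", 3);
        a.mergeFrom(b); b.mergeFrom(a);                // exchange state in any order
        System.out.println(a.total() + " == " + b.total());  // both print 5
    }
}
```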

Victor Gergel

Professor, Dean of the Faculty of Computational Mathematics and Cybernetics
Nizhni Novgorod State University
Russia

Prof. Victor Gergel is the dean of the Faculty of Computational Mathematics and Cybernetics, Nizhni Novgorod State University, Russia. His research interests include supercomputing technologies and parallel computations; mathematical models, methods, and decision-making systems (problems of global and multiobjective optimization, pattern recognition, and classification); software engineering; and the development of complex software applications for parallel computing on multiprocessor computer systems (clusters). He is the author of Theory and Practice of Parallel Computing (2007) and High Performance Computations of Multiprocessor Multicore Systems (2010). Since 2007, he has led the Center of Supercomputing Technologies.

Victor holds a PhD (1984) and a DSc in Engineering (1994) from Nizhni Novgorod State University, where he was appointed professor in 1998.

Abstracts

Complexity and Performance Analysis of Parallel Computing
In high-performance computing systems, parallel computing makes it possible to achieve impressive acceleration of calculations. The continuously improving performance of computing hardware allows increasingly complex tasks to be solved and the range of problems under study to be steadily expanded.

An important issue in the development of parallel algorithms and programs is therefore how efficiently they exploit both the parallelism available in the problem being tackled and the computing systems being used.

This lecture shows how one can theoretically estimate the performance indicators of parallel computations (speedup and efficiency) while parallel algorithms and programs are still being developed, and how to compare these estimates with the best (maximum achievable) indicators. To validate the approaches discussed in the lecture, we give a number of examples in which the theoretical estimates are compared with the results of computational experiments.
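
For reference, the standard indicators in question, in textbook notation (not taken from the lecture itself): speedup on p processors relates the sequential time to the parallel time, efficiency normalizes speedup by p, and Amdahl's law bounds the achievable speedup when a fraction f of the computation is inherently sequential:

```latex
% T_1 = best sequential running time, T_p = running time on p processors,
% f = inherently sequential fraction of the computation.
\[
  S_p = \frac{T_1}{T_p}, \qquad
  E_p = \frac{S_p}{p} = \frac{T_1}{p\,T_p}, \qquad
  S_p \le \frac{1}{f + (1 - f)/p}.
\]
```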

Additional material on the subject of the lecture can be found in the textbook by V.P. Gergel, Theory and Practice of Parallel Computing. Moscow: Binom, 2007.

Modern Languages and Technologies of Parallel Programming
The use of parallel computing entails a significant increase in the complexity of software development. One possible way to reduce this complexity is to create new technologies and programming languages that are specifically designed for parallel programs.

In this lecture, we consider two parallel programming languages. The first, Co-Array FORTRAN (CAF), is essentially a parallel extension of FORTRAN, a language still widely used in practice. The second is Chapel, a new parallel programming language.

Together, the two examples permit a comparison of the languages’ characteristics, an assessment of how approaches to parallel computing have developed over time, and an identification of the strengths and weaknesses of the technologies examined.

Additional material on the subject of the lecture can be found in the textbook by V.P. Gergel, Modern Languages and Technologies for Parallel Computing. Moscow: MSU, 2012.

Dan Grossman

Associate Professor
University of Washington, Seattle
United States

Prof. Dan Grossman is an associate professor in the Department of Computer Science & Engineering at the University of Washington, where he has been a faculty member since 2003. Grossman completed his PhD at Cornell University and his undergraduate studies at Rice University. His research interests lie in the area of programming languages, ranging from theory to design to implementation, with a focus on improving software quality. In recent years, he has focused on better techniques for expressing multithreaded programs, particularly using languages with well-defined support for transactional memory. In prior work, he focused on type-safe systems programming using the Cyclone language, which he developed with colleagues.

Abstracts

Data-Race Exceptions via Low-Level Detection of High-Level Data Races
Detecting and preventing data races has been a focus of programming-languages research for nearly two decades, but the design and implementation of data-race detectors continues to improve. This presentation will first give a broad background and history of data-race detection, including the definition of a data race and the relation of data races to the poorly understood and counterintuitive world of memory-consistency models. We will then focus on techniques for fully precise dynamic data-race detection, including low-level support possibly in hardware. We will then discuss why low-level data-race detection (e.g., on assembly code) can both miss data races and report false races for programs written in higher-level languages (e.g., C# or Java). Finally, we will show how to extend the interface of a low-level race detector such that the implementation of a high-level language can cooperate with the race detector to fix the missed races and false races.
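
For reference, a data race in its simplest form, two conflicting accesses not ordered by happens-before, can be written in a few lines of Java (a hypothetical sketch of ours, not an example from the talk):

```java
// Minimal data race under the Java memory model: the write to 'flag' and the
// read in the second thread conflict and are not ordered by happens-before.
// A precise dynamic detector would flag both racing pairs; declaring 'flag'
// volatile introduces the missing happens-before edge and removes the race.
public class RaceDefinition {
    static boolean flag = false;   // 'volatile' here would eliminate the race
    static int data = 0;

    public static void main(String[] args) {
        new Thread(() -> { data = 42; flag = true; }).start();
        new Thread(() -> {
            if (flag) {
                // Without happens-before, this read may legally observe 0.
                System.out.println(data);
            }
        }).start();
    }
}
```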

Deterministic Execution of Concurrent Programs
Nondeterministic execution complicates essential aspects of developing computing applications, including testing, debugging, and replication for fault tolerance. Concurrent programming is often considered to be inherently and wildly nondeterministic, but several research projects in recent years have developed approaches for making multithreaded programs deterministic. We will present an overview of the main techniques before focusing on a set of approaches that (a) ensure deterministic execution of a program without any change to the programming language or programmer input (the language semantics remains nondeterministic) and (b) do not assume the program is data-race free. We will develop a design perspective that relates the weakness of the memory-consistency model to the performance of the algorithm for deterministic execution.
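
One simple way to convey the flavour of these techniques (a toy sketch of our own, far cruder than the systems surveyed in the talk) is to force all shared-memory steps into a fixed round-robin order, so that every run produces the same interleaving:

```java
import java.util.concurrent.locks.*;

// Toy deterministic-execution sketch: shared-memory steps are serialized in
// a fixed round-robin order of thread indices, making the interleaving, and
// hence the output, identical on every run.
public class DeterministicTurns {
    static int turn = 0;
    static final Lock lock = new ReentrantLock();
    static final Condition myTurn = lock.newCondition();
    static final StringBuilder log = new StringBuilder();

    static void step(int id, int nThreads, String action) throws InterruptedException {
        lock.lock();
        try {
            while (turn % nThreads != id) myTurn.await(); // wait for our slot
            log.append(action);                           // effects occur in a fixed order
            turn++;
            myTurn.signalAll();
        } finally { lock.unlock(); }
    }

    public static void main(String[] args) throws Exception {
        Thread a = new Thread(() -> {
            try { for (int i = 0; i < 3; i++) step(0, 2, "A"); } catch (InterruptedException e) {}
        });
        Thread b = new Thread(() -> {
            try { for (int i = 0; i < 3; i++) step(1, 2, "B"); } catch (InterruptedException e) {}
        });
        a.start(); b.start(); a.join(); b.join();
        System.out.println(log);   // always prints ABABAB
    }
}
```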

Maurice Herlihy

Professor, Computer Science
Brown University
United States

Prof. Maurice Herlihy has an AB in Mathematics from Harvard University and a PhD in Computer Science from MIT. He served on the faculty of Carnegie Mellon University and on the staff of DEC Cambridge Research Lab, and is currently a professor in the Computer Science Department at Brown University. He is an ACM Fellow and the recipient of the 2003 Dijkstra Prize in Distributed Computing and the 2004 Gödel Prize in theoretical computer science. He won the 2008 ISCA Influential Paper Award for the 1993 paper that coined the term “transactional memory.”

Abstract

Design Patterns for Highly Concurrent Data Structures
Concurrent data structures are the “ball bearings” of concurrent programs: they are the place where the program’s “moving parts” interact, and if they perform poorly, it may not matter how well the rest of the program is engineered. In these lectures, we will discuss what it means for a concurrent object to be correct, and we will review a number of broadly applicable design patterns, along with a number of specialized algorithms and techniques.
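
A representative example of such a pattern is the classic Treiber stack, a lock-free, linearizable stack built on compare-and-set (the sketch below is illustrative, not code from the lectures):

```java
import java.util.concurrent.atomic.AtomicReference;

// Classic Treiber stack: threads never block; instead they retry a
// compare-and-set on the top pointer until it succeeds.
public class TreiberStack<T> {
    private static final class Node<T> {
        final T value; Node<T> next;
        Node(T value) { this.value = value; }
    }
    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        do {
            node.next = top.get();                     // read current top
        } while (!top.compareAndSet(node.next, node)); // retry on contention
    }

    public T pop() {
        Node<T> head;
        do {
            head = top.get();
            if (head == null) return null;             // empty stack
        } while (!top.compareAndSet(head, head.next));
        return head.value;
    }

    public static void main(String[] args) {
        TreiberStack<Integer> s = new TreiberStack<>();
        s.push(1); s.push(2);
        System.out.println(s.pop() + ", " + s.pop());  // 2, 1
    }
}
```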

Jeff Kramer

Senior Dean
Imperial College London
United Kingdom

Prof. Jeff Kramer is the senior dean at Imperial College London. He was head of the Department of Computing from 1999 to 2004 and dean of the Faculty of Engineering from 2006 to 2009. His research work is primarily concerned with software engineering, focusing on software architecture, behavior analysis, the use of models in requirements elaboration, and architectural approaches to adaptive software systems. He was a principal investigator of research projects that developed the CONIC and DARWIN architectural environments for distributed programming and of associated research into software architectures and their analysis. Jeff was program co-chair of ICSE 1999, chair of the ICSE Steering Committee from 2000 to 2002, and general co-chair of ICSE 2010 in Cape Town. He was editor in chief of IEEE TSE from 2006 to 2009, received the Most Influential Paper Award at ICSE 2003, and was awarded the 2005 ACM SIGSOFT Outstanding Research Award and the 2011 ACM SIGSOFT Distinguished Service Award. He is co-author of books on concurrency and on distributed systems and computer networks, and the author of more than 200 journal and conference publications. Jeff is a Fellow of the Royal Academy of Engineering, a Chartered Engineer, Fellow of the IET, Fellow of the ACM, Fellow of the BCS, and Fellow of the City and Guilds of London Institute.

Abstract

Model-Based Design and Analysis of Concurrent and Adaptive Software
Concurrency is useful in a wide range of software applications, but it can be prone to errors. This tutorial takes an architectural, model-based approach to the modelling and design of concurrent software. Simple software components are modelled using labelled transition systems (LTS), a form of finite state machine. These can be composed to describe the behaviour of more complex software systems, and can be automatically analysed to check various safety and liveness properties before implementation.
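
As a minimal, hypothetical sketch of the idea (plain Java rather than the FSP notation used with LTSA): two processes that must synchronize on shared actions, but expect them in opposite orders, reach a composed state with no outgoing transition, exactly the kind of deadlock such analysis detects before any code is written:

```java
import java.util.*;

// Minimal LTS sketch: each process maps a state to its enabled actions and
// successor states; composition requires both processes to be ready for a
// shared action. A composed state with no enabled action is a deadlock.
public class LtsSketch {
    static int step(Map<Integer, Map<String, Integer>> lts, int s, String act) {
        Map<String, Integer> out = lts.get(s);
        return (out != null && out.containsKey(act)) ? out.get(act) : -1;
    }

    public static void main(String[] args) {
        // P does action a then b; Q does b then a. Both actions are shared.
        Map<Integer, Map<String, Integer>> P = Map.of(0, Map.of("a", 1), 1, Map.of("b", 2));
        Map<Integer, Map<String, Integer>> Q = Map.of(0, Map.of("b", 1), 1, Map.of("a", 2));
        int p = 0, q = 0;
        for (String act : List.of("a", "b")) {           // try to make progress
            int p2 = step(P, p, act), q2 = step(Q, q, act);
            if (p2 >= 0 && q2 >= 0) { p = p2; q = q2; }  // both ready: synchronize
        }
        boolean deadlock = true;
        for (String act : List.of("a", "b"))
            if (step(P, p, act) >= 0 && step(Q, q, act) >= 0) deadlock = false;
        System.out.println(deadlock ? "deadlock at (" + p + "," + q + ")" : "no deadlock");
    }
}
```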

Furthermore, we believe that such an architectural model-based approach is also beneficial in the design, analysis, and construction of adaptive software systems. Rigorous techniques are needed to develop adaptive systems that can cope both with changes in the environment and with changing goals. We present an outline three-layer reference model as a context in which to articulate some of the main issues and to describe our pilot implementation.

The approach, the concurrency concepts, and the analysis will be demonstrated through a series of examples using the LTSA model checker. Our current research work in the area of adaptive software will also be demonstrated, illustrating plan synthesis from goals and properties.

Bertrand Meyer

Professor of Software Engineering, ETH Zurich, Switzerland, and
Adjunct Professor, ITMO National Research University, Saint Petersburg, Russia

Prof. Bertrand Meyer is professor of Software Engineering at ETH Zurich. He is also an adjunct professor at ITMO National Research University in Saint Petersburg, in charge of creating a software engineering laboratory. His areas of interest include software engineering as a whole, with special emphasis on software specification and verification, concurrent programming, object technology, programming languages, programming environments, and computer science education. He is the designer of the Eiffel language and the associated Design by Contract method, and the author of a number of well-known textbooks. Prof. Meyer was the president of Informatics Europe, the association of European CS departments and research labs, from 2006 to 2011, and the chair of the ETH CS department from 2004 to 2006. He remains the chief architect of Eiffel Software, a California company that he co-founded prior to joining ETH. He is the recipient of an ERC Advanced Investigator Grant, starting April 2012, entitled “Concurrency Made Easy” and devoted to facilitating large-scale use of reliable concurrent programming techniques.

Abstract

Concurrent Programming Is Easy
Concurrent programming is supposed to be very hard, requiring completely new concepts and extensive retraining of programmers. Such an approach loses the many benefits obtained from decades of improving the state of sequential programming. The method, language extension, and tools described in these lectures take the reverse path: to provide concurrency as a simple extension to the solid base of object-oriented programming with Design by Contract as available in Eiffel. Based on the SCOOP approach (Simple Concurrent Object-Oriented Programming, also the basis for the “Concurrency Made Easy” ERC 2012-2017 Advanced Investigator Grant), they show how to program concurrent applications simply and effectively while retaining the reasoning patterns and the formal properties that have proved so fecund in the rest of modern programming.
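
SCOOP itself is an Eiffel mechanism; as a rough Java analogue of its central idea (a hypothetical sketch of ours, not SCOOP’s full semantics), each “separate” object can be served by its own handler thread, with commands queued asynchronously and queries waiting for their result, so the object’s own code needs no explicit locks:

```java
import java.util.concurrent.*;

// Active-object sketch in the spirit of SCOOP's "separate" objects:
// all accesses to 'count' run on one handler thread, so there is no race.
public class SeparateCounter {
    private int count = 0;                                  // confined to the handler
    private final ExecutorService handler = Executors.newSingleThreadExecutor();

    public void increment() {                               // command: asynchronous
        handler.execute(() -> count++);
    }
    public int value() throws Exception {                   // query: waits for the answer
        return handler.submit(() -> count).get();
    }
    public void shutdown() { handler.shutdown(); }

    public static void main(String[] args) throws Exception {
        SeparateCounter c = new SeparateCounter();
        for (int i = 0; i < 1000; i++) c.increment();       // queued in order
        System.out.println(c.value());                      // prints 1000
        c.shutdown();
    }
}
```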

Davide Sangiorgi

Professor of Computer Science
University of Bologna
Italy

Prof. Davide Sangiorgi has been a full professor in Computer Science at Bologna University (Italy) since 2002. He is the head of the research team "Focus," a joint initiative between the University of Bologna and INRIA (Institut national de recherche en informatique et en automatique, France). He has previously held research and academic positions at Edinburgh University and INRIA. Within IFIP (International Federation for Information Processing), he is a member of the Working Group 2.2 “Formal Description of Programming Concepts,” which he chaired from 2004 to 2011, and of the Working Group 1.8 “Concurrency Theory.” He is a member of the Editorial Board of the journals Logical Methods in Computer Science, Acta Informatica, and Distributed Computing, and a member of Academia Europaea. His research activities are centered on concurrency theory, type systems, and concurrent and distributed programming. He is the author of more than 100 papers in international journals and conference proceedings, and of two books on concurrency theory and logics (published by Cambridge University Press).

Abstract

An Introduction to Bisimulation and Coinduction
A fundamental concern in concurrency theory is establishing when two processes are “the same,” i.e., indistinguishable to an external observer interacting with them. This notion, called behavioural equivalence, is the basis upon which a theory of processes can be developed.

In the lectures, I will introduce bisimulation, one of the most studied forms of behavioural equivalence for processes. Bisimulation is an instance of coinduction, a powerful tool for defining potentially infinite objects and reasoning about them. Today, coinduction is widely used not only in computer science but also in other fields, including artificial intelligence, cognitive science, mathematics, modal logics, philosophy, and physics. I will review the basics of coinduction, with an emphasis on the difference from, and the duality with, induction.
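
For reference, the standard definition (textbook material, not specific to the lectures): a relation R on states is a bisimulation if related states can match each other's transitions with related targets, and two processes are bisimilar when some bisimulation relates them.

```latex
% R is a bisimulation when, for every action a and all states p R q:
\[
  p \mathrel{R} q \;\Longrightarrow\;
  \begin{cases}
    p \xrightarrow{a} p' \implies \exists q'.\; q \xrightarrow{a} q' \wedge p' \mathrel{R} q' \\[2pt]
    q \xrightarrow{a} q' \implies \exists p'.\; p \xrightarrow{a} p' \wedge p' \mathrel{R} q'
  \end{cases}
\]
% Bisimilarity is the union of all bisimulations:
\[
  p \sim q \;\iff\; \exists R \text{ a bisimulation with } p \mathrel{R} q.
\]
```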