Abstracts

The ASM approach for modular design and verification of programming features

Egon Börger, University of Pisa

We survey the use that has been made of Abstract State Machines in the area of programming languages, namely to define the dynamic properties of programs at source, intermediate and machine level rigorously enough to be amenable to mathematical analysis, e.g. of type safety or of the correctness and completeness of compilers. We illustrate how theorems about program properties can be integrated into a modular development of programming languages and programs, using as example a correctness proof for the compilation of Java to JVM bytecode, covering the definition, interpretation, compilation and execution of bytecode for the Java language. We show how programming features modularize not only the source programs, but also the statements of program properties and their proofs. Source code, theorem statements and proofs for a program can thus be developed compositionally, following the programming constructs used.
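
To fix intuitions, here is a minimal sketch of the kind of per-construct compilation-correctness statement meant here, for a toy expression language and stack machine in Haskell (illustration only, not the Java/JVM development of the talk):

    -- Toy source language and stack machine.
    data Exp = Lit Int | Add Exp Exp
    data Instr = Push Int | AddI

    -- Source-level semantics: one equation per construct.
    eval :: Exp -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b

    -- Compiler: one equation per construct.
    compile :: Exp -> [Instr]
    compile (Lit n)   = [Push n]
    compile (Add a b) = compile a ++ compile b ++ [AddI]

    -- Machine semantics on a stack.
    exec :: [Instr] -> [Int] -> [Int]
    exec []            s           = s
    exec (Push n : is) s           = exec is (n : s)
    exec (AddI   : is) (y : x : s) = exec is (x + y : s)

    -- Correctness, provable by one induction case per construct;
    -- adding a language feature adds one case to each of the
    -- three parts.
    prop_correct :: Exp -> Bool
    prop_correct e = exec (compile e) [] == [eval e]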

Compositional resource analysis in Hume using automatic amortisation 
Kevin Hammond, University of St Andrews

This talk shows how amortised analysis can be automated using type inference. Building on an operational semantics for Hume and extending previous work by Hofmann and Jost, we develop an associated cost model and derive an automatic analysis of worst-case behaviour in terms of, e.g., memory usage and execution time. The analysis is compositional and scalable. Using, e.g., dependent types, resource information can be attached to language constructs, where it can inform and direct the resource analysis.
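
As a hand-worked illustration of the underlying idea (a Haskell toy with an explicit cost counter, not the Hume analysis itself, which infers such bounds statically from annotated types):

    type Cost = Int

    -- Each cons cell allocated costs one unit.
    append :: [a] -> [a] -> (Cost, [a])
    append []       ys = (0, ys)
    append (x : xs) ys = let (c, zs) = append xs ys
                         in (c + 1, x : zs)

    -- A Hofmann/Jost-style annotated type for append is,
    -- schematically, L^1(a) -> L^0(a) -> L^0(a): one unit of
    -- potential per element of the first list pays for every
    -- allocation, and such bounds compose when append is used
    -- inside larger functions.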

Laws of concurrent design
Sir Tony Hoare, Microsoft Research

What are they? Familiar algebraic properties (associativity, commutativity, idempotence and distribution) of sequential composition, concurrent composition and union, plus one weak exchange law linking sequential and concurrent composition (written out after these questions).

What do they mean? They describe real-world systems (including computers and networks), in which events occur sequentially, concurrently, or optionally.

What are they useful for? For program optimisation, verification, implementation, testing, and error diagnosis/correction.

What are they true of? Systems in which the occurrence of an event may depend on (prior) occurrence of certain other events. The nature of the events and the dependency relation are parameters of the model, and of tools that are based on the model.

Are they beautiful? Question left open.
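
For reference, the weak exchange law mentioned in the first question is usually stated as follows (a sketch of the concurrent-Kleene-algebra formulation, writing $;$ for sequential composition, $\parallel$ for concurrent composition and $\sqsubseteq$ for refinement):

    $(p \parallel q) \,;\, (r \parallel s) \;\sqsubseteq\; (p \,;\, r) \parallel (q \,;\, s)$

Every behaviour of the left-hand side is a behaviour of the right-hand side, but not conversely; hence "weak".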

How to test a meta-program?
Paul Klint (joint work with Mark Hills), CWI, Amsterdam

Despite best efforts, programs contain errors. This is true both for mundane programs that carry out simple tasks and for programs that describe operations on other programs, such as type checking, interpretation, compilation or transformation. How can we spot the errors in these so-called meta-programs? While formal correctness guarantees for meta-programs are desirable, they are unachievable in many practical situations. What can we do about this?

We present initial experiments that explore how random testing can be applied at the meta-level.

This research is done in the context of the Rascal meta-programming language. Rascal offers everything needed for meta-programming: grammar definitions and built-in parser generation for generalized top-down parsing, high-level built-in data types (lists, sets, maps, tuples, relations), user-definable algebraic data types (parse trees being just one of the predefined ADTs), elaborate pattern matching and tree traversal. The Rascal libraries provide support for visualization, metrics, statistics, analysis of specific languages (e.g., Java, PHP), IDE extension and much more. The Rascal ecosystem is also a Language Workbench that makes it simple to build an IDE for a newly defined language.

We will explore how Rascal's built-in support for random testing can be applied to test meta-programs written in Rascal. Since the Rascal type checker is written in Rascal itself, it is an ideal subject of study. We define a simple domain-specific language to describe the expected behavior of the type checker and generate random tests to validate its behavior. We report on our initial findings, show them in a live demo, and speculate about the potential of using random testing for meta-programming.
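
The flavour of such a test, transposed here to Haskell/QuickCheck rather than Rascal (toy object language and hypothetical names): generate random terms and validate the type checker against an executable oracle.

    import Test.QuickCheck

    -- A toy object language with a deliberately simple type system.
    data Ty = TInt | TBool deriving (Eq, Show)
    data Tm = I Int | B Bool | Plus Tm Tm | If Tm Tm Tm
      deriving (Eq, Show)

    -- The meta-program under test: a type checker.
    check :: Tm -> Maybe Ty
    check (I _)      = Just TInt
    check (B _)      = Just TBool
    check (Plus a b) = case (check a, check b) of
      (Just TInt, Just TInt) -> Just TInt
      _                      -> Nothing
    check (If c t e) = case (check c, check t, check e) of
      (Just TBool, Just t1, Just t2) | t1 == t2 -> Just t1
      _                                         -> Nothing

    -- Oracle: evaluation of a well-typed term never gets stuck.
    eval :: Tm -> Maybe Tm
    eval t@(I _)    = Just t
    eval t@(B _)    = Just t
    eval (Plus a b) = do { I x <- eval a; I y <- eval b
                         ; Just (I (x + y)) }
    eval (If c t e) = do { B v <- eval c; eval (if v then t else e) }

    -- Random term generator.
    instance Arbitrary Tm where
      arbitrary = sized gen
        where
          leaf  = oneof [I <$> arbitrary, B <$> arbitrary]
          gen 0 = leaf
          gen n = oneof
            [ leaf
            , Plus <$> gen (n `div` 2) <*> gen (n `div` 2)
            , If <$> gen (n `div` 3) <*> gen (n `div` 3)
                 <*> gen (n `div` 3) ]

    -- Property: whatever the checker accepts must evaluate.
    prop_sound :: Tm -> Bool
    prop_sound t = case check t of
      Just _  -> eval t /= Nothing
      Nothing -> True  -- rejected terms may get stuck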

Programming Language Semantics as Natural Science
Shriram Krishnamurthi, Brown University

Most programming languages should be thought of as natural objects, and the art of semantics should revolve around what it takes to describe them under that assumption. Furthermore, we should extend the same treatment beyond languages to libraries, frameworks, and the other linguistic elements that constitute modern programs. I will discuss some of my group's prior successes in this direction, and also outline future challenges.

Making real-time language definitions scalable
José Meseguer, University of Illinois at Urbana-Champaign

Generally speaking, the rewriting logic semantics project has made good advances in scaling up to large languages, efficiently executing language definitions, and providing various forms of semantics-based formal analysis. However, scalability of semantic definitions for real-time programming and modeling languages is more challenging for several reasons:

  1. real time has to be faithfully modelled;
  2. programs in such languages often need to deal with an unpredictable external environment; and
  3. programs for distributed real-time systems can have a huge number of states, making model checking analysis very hard.

The talk will describe recent progress in giving semantic definitions for real-time languages within rewriting logic, and recent advances in making such definitions, and the analysis of programs based on them, orders of magnitude more scalable.
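
A schematic rendering of how the first of these challenges is typically addressed (a Haskell toy with hypothetical names; frameworks such as Real-Time Maude express the same idea with rewrite rules): time advances by an explicit tick rule, kept separate from the instantaneous rules.

    type Time = Int

    -- Relative deadlines of pending events.
    data Sys = Sys { clock :: Time, deadlines :: [Time] }
      deriving Show

    -- Instantaneous rule: an event whose deadline has arrived fires.
    fire :: Sys -> Maybe Sys
    fire (Sys c ds)
      | 0 `elem` ds = Just (Sys c (filter (/= 0) ds))
      | otherwise   = Nothing

    -- Tick rule: advance time, but never past the next deadline
    -- (the "maximal time elapse" strategy, which helps keep the
    -- state space manageable for model checking).
    tick :: Sys -> Maybe Sys
    tick (Sys _ []) = Nothing          -- nothing pending
    tick (Sys c ds)
      | d > 0       = Just (Sys (c + d) (map (subtract d) ds))
      | otherwise   = Nothing          -- an event must fire first
      where d = minimum ds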

Specify and verify your language using K
Grigore Rosu, University of Illinois at Urbana-Champaign

One of the long-standing dreams of the programming language community is to have a single formal semantic definition of a target programming language and to derive from it all the tools needed to execute and analyze programs written in that language: parsers, interpreters, compilers, symbolic execution engines, model checkers, deductive program verifiers, and so on. We believe that this dream has started to become reality. In this talk we will give an overview of K, a rewrite-based executable semantic framework in which programming languages are defined using configurations, computations and rules.

Configurations organize the state in units called cells, which are labeled and can be nested. Computations carry computational meaning as special nested list structures that sequentialize computational tasks, such as fragments of program; they extend the original language's abstract syntax. K (rewrite) rules make explicit which parts of the term they access in a read-only, write-only, or read-write fashion, and which parts they do not care about. This makes K suitable for defining truly concurrent languages, even in the presence of sharing. Computations are like any other terms in a rewriting environment: they can be matched, moved from one place to another, modified, or deleted. This makes K suitable for defining control-intensive features such as abrupt termination, exceptions or call/cc.

The K framework is designed to allow implementing a variety of generic tools that can be used with any language defined in K. Currently, it includes a parser, interpreters, symbolic execution engines (connected to the Z3 SMT solver), state-space exploration via a configurable semantic debugger, a model checker, and a deductive program verifier. The latter is based on matching logic for expressing static properties, which generalizes separation logic, and on reachability logic for expressing dynamic properties, which generalizes Hoare logic.
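
A schematic rendering of cells and a rule (Haskell with hypothetical names; the comment shows the corresponding K-style rule):

    import qualified Data.Map as Map

    -- Two cells: <k> holds the sequence of computational tasks,
    -- <state> holds the store.
    data KItem = Lookup String | Value Int
    data Cfg   = Cfg { kCell     :: [KItem]
                     , stateCell :: Map.Map String Int }

    -- The K rule  <k> X => V ...</k> <state>... X |-> V ...</state>
    -- rewrites only the head of the <k> cell, reads the <state>
    -- cell, and leaves everything it does not mention untouched.
    step :: Cfg -> Maybe Cfg
    step (Cfg (Lookup x : rest) st) = do
      v <- Map.lookup x st
      Just (Cfg (Value v : rest) st)
    step _ = Nothing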

Principles and applications of abstract-interpretation-based static analysis
David Schmidt, Kansas State University

Static analysis is property extraction from formal systems. Abstract interpretation is a foundation for static analysis based on Galois connections, semi-homomorphisms, and fixed-point calculation. In this talk, we introduce abstract interpretation, apply it to static analyses of program semantics (state-transition systems, equationally specified definitions, rule-based relational definitions), survey applications of static analysis, develop the correspondence of properties to propositions, and consider approaches to modular, "scalable" analyses.
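
A minimal sketch of these ingredients for the classic sign domain (Haskell, hypothetical names): an abstraction function and a sound, semi-homomorphic abstract counterpart of multiplication.

    -- The sign lattice: Bot <= Neg, Zero, Pos <= Top.
    data Sign = Bot | Neg | Zero | Pos | Top deriving (Eq, Show)

    -- Abstraction of a single concrete value.
    alpha :: Int -> Sign
    alpha n | n < 0     = Neg
            | n == 0    = Zero
            | otherwise = Pos

    -- Abstract multiplication, a semi-homomorphic image of (*):
    -- alpha (x * y) is below mulS (alpha x) (alpha y).
    mulS :: Sign -> Sign -> Sign
    mulS Bot  _    = Bot
    mulS _    Bot  = Bot
    mulS Zero _    = Zero
    mulS _    Zero = Zero
    mulS Top  _    = Top
    mulS _    Top  = Top
    mulS Pos  Pos  = Pos
    mulS Neg  Neg  = Pos
    mulS _    _    = Neg

    -- Soundness, testable on the generating cases:
    prop_sound :: Int -> Int -> Bool
    prop_sound x y = alpha (x * y) `leq` mulS (alpha x) (alpha y)
      where leq a b = a == b || a == Bot || b == Top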

Programming language and multiprocessor semantics in Ott, Lem, and Ln
Peter Sewell, University of Cambridge

In this talk I will reflect on our experiences with the Ott tool for expressing programming language semantic definitions (as inductive relations over arbitrary user-defined syntax) and with the complementary Lem language for lightweight executable mathematics (as types, functions and inductive relations in a typed higher-order language). Ott and Lem have been used in many research projects by ourselves and others, including substantial programming-language and machine-language semantic definitions; they are demonstrably effective tools for the working semanticist. I will discuss some of the things that Ott and Lem do not support at present (including production-quality parser generation) and, if time permits, talk about special-purpose metalanguage support for ISA semantics.

Ott: http://www.cl.cam.ac.uk/~pes20/ott/
Lem: http://www.cs.kent.ac.uk/people/staff/sao/lem/
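
As a flavour of what such definitions contain (a toy reduction relation rendered in Haskell, not Ott or Lem source): one clause per inference rule, over user-defined syntax.

    data Tm = Var String | Lam String Tm | App Tm Tm

    -- Naive substitution (capture-avoidance elided for brevity).
    subst :: String -> Tm -> Tm -> Tm
    subst x s (Var y)   | x == y    = s
                        | otherwise = Var y
    subst x s (Lam y b) | x == y    = Lam y b
                        | otherwise = Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- E-Beta: (\x. b) a --> b[a/x]
    -- E-App:  f --> f'  implies  f a --> f' a
    step :: Tm -> Maybe Tm
    step (App (Lam x b) a) = Just (subst x a b)
    step (App f a)         = (\f' -> App f' a) <$> step f
    step _                 = Nothing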

A DSL for describing type checkers for DSLs
Mark van den Brand (joint work with A.P. van der Meer and A. Serebrenik), Eindhoven University of Technology

In recent years, Model Driven Engineering (MDE) has been suggested as an approach to make software development easier, faster and cheaper. In order to achieve this, domain models are used to represent systems and their behavior within a specific knowledge domain such as finance or complex manufacturing systems. As domain experts frequently have limited knowledge of software engineering, facilitating model design and interaction becomes imperative.

A popular way of addressing this challenge is to create Domain-Specific Languages (DSLs). As opposed to General Purpose Languages (GPLs), DSLs include special constructs for domain concepts, and exclude constructs that are not useful in the domain. Using these constructs, models can be made smaller, clearer and more expressive.

One disadvantage of DSLs is that each DSL needs its own set of supporting tools in order to be effective. To reduce the effort and knowledge this requires, thus lowering the barrier, we created a DSL for type system specification, EMF-TL. In EMF-TL, we consider type checking from an MDE perspective: during type checking, an untyped input model is transformed, if possible, into a typed output model. The information discovered during typing can then be used by further transformation steps.

In order to facilitate type system specification, EMF-TL uses constraints instead of plain conditions to guide the transformation. Constraints allow us to increase flexibility by postponing decisions, making it more straightforward to define common type system concepts. We have implemented a prototype type checker generator that constructs type checkers based on EMF-TL specifications, and demonstrated its usefulness by performing several case studies.
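
The constraint idea in miniature (a Haskell sketch with hypothetical names, not EMF-TL itself): record equalities over type variables during checking and postpone their resolution to a separate unification step.

    import qualified Data.Map as Map

    data Ty = TInt | TBool | TVar Int | TFun Ty Ty
      deriving (Eq, Show)

    type Constraint = (Ty, Ty)      -- an equality, solved later
    type Subst      = Map.Map Int Ty

    -- Unification (simplified: occurs check omitted, and later
    -- bindings are not applied back to earlier ones).
    unify :: [Constraint] -> Maybe Subst
    unify []                          = Just Map.empty
    unify ((s, t) : cs) | s == t      = unify cs
    unify ((TVar v, t) : cs)          = bind v t cs
    unify ((t, TVar v) : cs)          = bind v t cs
    unify ((TFun a b, TFun c d) : cs) =
      unify ((a, c) : (b, d) : cs)
    unify _                           = Nothing

    bind :: Int -> Ty -> [Constraint] -> Maybe Subst
    bind v t cs = Map.insert v t <$> unify (map substC cs)
      where
        substC (a, b) = (sub a, sub b)
        sub ty = case ty of
          TVar w | w == v -> t
          TFun a b        -> TFun (sub a) (sub b)
          _               -> ty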

Related Links
Location
  • Microsoft Research Ltd
    21 Station Road
    Cambridge CB1 2FB
Organizers
  • Andrew Kennedy, Microsoft Research
  • Peter Mosses, Swansea University
Local Arrangements
  • Helen Guy-Mas, Microsoft Research
Sponsors
  • Microsoft Research Ltd
  • EPSRC