IFIP WG 2.3 meeting 42
Where: Philadelphia, Pennsylvania, USA
When: 5-9 January 2004
Host: Ernie Cohen
Topics of discussion
- Rajeev Alur, Software synthesis from hybrid automata
- Manfred Broy, Services and features: specification and multi-view
modeling of software systems
- Michael Butler, Modelling and reasoning about long-running transactions
We recently collaborated with a group at IBM UK Labs on formalizing
compensation mechanisms in long-running transactions. An outcome of
this collaboration was a formal modeling language called StAC (Structured
Activity Compensation). This language is similar to process algebraic
languages such as Hoare's CSP or Milner's CCS but StAC has additional
operators dealing with compensation and with early termination of processes.
Compensation and termination are also found in languages such as XLANG,
BPEL4WS and BPML. I propose to give an overview of the StAC language,
its semantics and its relationship with BPEL4WS. I also propose to
discuss some ideas on reasoning about StAC processes. This work is joint
work with Carla Ferreira.
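The compensation-pair idea can be sketched in a few lines. This is an illustrative Python sketch of the general saga/compensation pattern, not StAC syntax; all names (`Saga`, the booking activities) are invented:

```python
# Sketch of the compensation-pair idea behind StAC (illustrative only;
# the class and activity names are invented, not actual StAC notation).

class Saga:
    """Runs activities in sequence; each activity installs a compensation.
    On early termination, installed compensations run in reverse order."""

    def __init__(self):
        self.compensations = []  # stack of installed compensations
        self.log = []

    def run(self, activities):
        try:
            for action, compensation in activities:
                action(self.log)
                self.compensations.append(compensation)
        except Exception:
            # early termination: undo completed activities in reverse
            for comp in reversed(self.compensations):
                comp(self.log)
            return False
        return True


def book_flight(log): log.append("book flight")
def cancel_flight(log): log.append("cancel flight")
def book_hotel(log): log.append("book hotel")
def cancel_hotel(log): log.append("cancel hotel")
def charge_card(log): raise RuntimeError("payment declined")

saga = Saga()
ok = saga.run([(book_flight, cancel_flight),
               (book_hotel, cancel_hotel),
               (charge_card, lambda log: None)])
# ok is False; the log shows compensations ran in reverse order:
# ['book flight', 'book hotel', 'cancel hotel', 'cancel flight']
```

StAC's extra operators make this pattern compositional (compensations can themselves be structured processes), which is what the sequential sketch above cannot express.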
- Ernie Cohen, The NGSCB Nexus
The goal of Microsoft's NGSCB project is to turn the PC into a trustworthy
computing platform. The heart of NGSCB is the (inner) nexus, a small
exokernel that allows a number of concurrently executing, mutually
distrustful supervisor-mode components to safely share hardware resources.
I'll describe the nexus and how we're assuring its correctness.
- Masami Hagiya, UML Scrapbook and Realization of Snapshot
(joint work with Richard Potter et al.)
We have developed SBUML (Scrapbook for UML), an extension of UML (User-Mode
Linux) that adds checkpointing functionality to UML. In this
talk, we first describe the design and implementation of SBUML, and then
propose a new snapshot programming environment, which is realized by the
SBUML programming interfaces. The intended applications of SBUML include
intrusion detection and sandboxing for network security, and software model
checking for verifying multi-threaded programs. In particular, using
snapshot programming, one can easily enumerate state spaces of programs
actually running under Linux.
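The state-enumeration use can be sketched in-process. SBUML snapshots whole Linux systems; the toy Python sketch below just deep-copies a shared state before each branch, but the role of the snapshot is the same: resume every interleaving from a common point. All names here are invented for illustration:

```python
# In-process analogue of snapshot-based state enumeration (illustrative;
# SBUML checkpoints entire Linux VMs, here we deep-copy a Python dict).
import copy

def explore(state, threads, finals):
    """Depth-first exploration of all interleavings.  Before running one
    step of a thread, snapshot (deep-copy) the state so the remaining
    branches can resume from the same point."""
    pending = [i for i, t in enumerate(threads) if t]
    if not pending:
        finals.add(state["x"])
        return
    for i in pending:
        snap_state = copy.deepcopy(state)          # the "snapshot"
        snap_threads = [list(t) for t in threads]
        step = snap_threads[i].pop(0)
        step(snap_state, i)
        explore(snap_state, snap_threads, finals)

# Each "thread" increments x non-atomically: read into a register, write.
def read(state, i):  state[f"r{i}"] = state["x"]
def write(state, i): state["x"] = state[f"r{i}"] + 1

finals = set()
explore({"x": 0}, [[read, write], [read, write]], finals)
# finals == {1, 2}: the lost-update interleavings yield 1.
```

Enumerating the interleavings exposes the classic lost update, which is exactly the kind of state-space exploration the abstract describes for programs running under Linux.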
- John Harrison, Formal verification of mathematical algorithms
- Daniel Jackson, Subtypes for specification
We've developed a new type system for Alloy that incorporates subtypes, ad
hoc union types, and parametric polymorphism in a surprisingly simple way.
Type errors correspond to expressions that are redundant, in the sense that
their value doesn't affect the value of the formula as a whole. Using
subtypes, we can also decompose formulas for more efficient analysis.
- Gary T. Leavens, Advances and Issues in JML
The Java Modeling Language (JML, see
jmlspecs.org) is a behavioral interface specification language tailored
to Java. Its goals are to help record detailed design decisions about
Java classes and interfaces and to support a wide range of tools that help
programmers. JML combines features of Eiffel, the Larch family of
interface specification languages, and the refinement calculus. JML is
an open project that is intended to foster some commonality among the
specification notations used by various groups doing research on formal
methods for Java programs.
This talk briefly describes some advances that JML's design has made over
these other specification languages. It then focuses on problems and
issues remaining in its design. In particular it describes the
following advances and issues related to them: purity checking to prevent
side effects in assertions, and protected and private specifications.
Time permitting, other issues can also be discussed.
This talk is based in part on joint work with Curt Clifton, Clyde Ruby, and
others at Iowa State, and also with Rustan Leino, Erik Poll, Bart Jacobs,
Michael Ernst, and others. The work is supported in part by the NSF
under grant CCR-0097907 and CCR-0113181. An earlier version of this
talk was given at the FMCO 2002 conference.
Gary T. Leavens, Yoonsik Cheon, Curtis Clifton, Clyde Ruby, and David R. Cok.
How the Design of JML Accommodates Both Runtime Assertion Checking and Formal
Verification. In Frank S. de Boer, Marcello M. Bonsangue, Susanne Graf, and
Willem-Paul de Roever, editors, Formal Methods for Components and Objects,
volume 2852 of Lecture Notes in Computer Science, pages 262-284.
Springer-Verlag, Berlin, 2003. Also Department of Computer Science, Iowa
State University, March 2003.
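The runtime-assertion-checking side of JML can be illustrated outside Java. The Python decorator below is a hedged sketch of the requires/ensures idea only (real JML specs are Java annotation comments checked by JML tooling); the `contract` name and API are invented:

```python
# Sketch of pre/postcondition checking in the spirit of JML's
# requires/ensures clauses (illustrative Python, not JML itself).
import functools

def contract(requires=None, ensures=None):
    def wrap(f):
        @functools.wraps(f)
        def checked(*args, **kwargs):
            if requires is not None:
                assert requires(*args, **kwargs), f"precondition of {f.__name__}"
            result = f(*args, **kwargs)
            if ensures is not None:
                assert ensures(result, *args, **kwargs), f"postcondition of {f.__name__}"
            return result
        return checked
    return wrap

# The predicates passed to @contract must be pure: JML's purity checking
# enforces exactly this, so evaluating a spec cannot change the program.
@contract(requires=lambda xs: len(xs) > 0,
          ensures=lambda r, xs: r in xs and all(r >= x for x in xs))
def maximum(xs):
    return max(xs)

maximum([3, 1, 4])   # passes both checks and returns 4
```

The purity requirement mentioned in the talk is visible here: if a `requires` or `ensures` predicate mutated its arguments, switching assertion checking on or off would change program behaviour.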
- K. Rustan M. Leino, Abstract interpretation for heap structures
Many abstract domains are defined over a program state space whose form is a
Cartesian product of variables, that is, where each variable is an
independent coordinate. A particular abstract domain may keep track of
a relation between some variables, say those variables with integer values,
and ignore the others. For an object-oriented program, one can think
of the heap as one big variable: a two-dimensional matrix indexed by object
identities and field names. For the heap variable, one would like to
relate the values at certain locations (object-field pairs) with each other
and with other program variables, which independent-coordinate abstract
domains don't immediately do. I will present the preliminary design of
a system for doing abstract interpretation over a program state space that
includes such a heap variable. The design is parameterized by any
independent-coordinate abstract domain. It automatically maintains new
variable names for interesting heap locations, letting the underlying
abstract domain operate on these variables.
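The naming trick can be sketched concretely. Below, a toy independent-coordinate interval domain is wrapped by an adapter that mints a fresh variable name per interesting (object, field) location; every name and class here is invented for illustration, not Leino's actual design:

```python
# Sketch: give heap locations (object, field) fresh variable names so an
# off-the-shelf independent-coordinate domain (a toy interval domain here)
# can track them alongside ordinary program variables.

class IntervalDomain:
    """Maps variable names to [lo, hi] intervals; coordinates independent."""
    def __init__(self):
        self.env = {}
    def assign_const(self, var, c):
        self.env[var] = (c, c)
    def add(self, dst, src, c):
        lo, hi = self.env[src]
        self.env[dst] = (lo + c, hi + c)

class HeapAdapter:
    """Names heap locations so the underlying domain can see them."""
    def __init__(self, base):
        self.base = base
        self.names = {}          # (object id, field) -> synthetic variable
    def loc(self, obj, field):
        key = (obj, field)
        if key not in self.names:
            self.names[key] = f"heap${obj}.{field}"   # fresh variable name
        return self.names[key]
    def store_const(self, obj, field, c):
        self.base.assign_const(self.loc(obj, field), c)
    def load_add(self, var, obj, field, c):
        self.base.add(var, self.loc(obj, field), c)

dom = IntervalDomain()
heap = HeapAdapter(dom)
heap.store_const("o1", "f", 5)       # o1.f := 5
heap.load_add("x", "o1", "f", 2)     # x := o1.f + 2
# dom.env now relates the heap location and x:
# {'heap$o1.f': (5, 5), 'x': (7, 7)}
```

The interval domain never learns that `heap$o1.f` is a heap cell; the adapter's bookkeeping is what lets any independent-coordinate domain be plugged in, which is the parameterization the talk describes.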
- Annabelle McIver, Probabilistic loop guards
- Dominique Méry, Incremental Proof-based Design of Models for Deriving
System On Chip Architecture
The Digital Video Broadcasting (DVB) set of digital TV standards specifies
baseline systems for various transmission media: satellite, cable,
terrestrial, ... Each baseline system standard defines the channel
coding and modulation schemes for its transmission medium. The source
coding is adapted from the MPEG-2 standard. The design of these
systems requires a common understanding of measurement techniques and the
interpretation of measurement results. The document ETSI TR 101 290
gives recommendations in this field for defining measurement techniques; it
introduces measurement guidelines for DVB systems, especially the evaluation
of parameters specific to the terrestrial DVB environment (DVB-T). The
goal is to measure and to analyse the MPEG-2 Transport Stream with respect
to recommended parameters.
The evaluation of the quality of signal transmission is a critical issue for
the DVB-T context; it requires the design of a monitoring tool conforming to
the normalisation documents. Moreover, the tool should be built with
respect to criteria related to hardware constraints. The main problem
is to get a model of the monitoring tool from the normalisation documents.
The methodology for designing models of a system from requirements leads to
formally justified hints on the future architectural choices of that system;
it is based on the event-based B method, which integrates the incremental
development of models, using a theorem prover to validate each step of
development, called refinement. It shows that interactions with
non-specialist partners are possible. Moreover, graphs over
parameters called dependency graphs are derived from abstract mathematical
models of the system; they express a hierarchy among parameters which is
automatically validated through the refinement process; they provide hints
for the future architecture of the SoC: decisions of implementations for
computing parameters. Finally, the refinement process expresses a
relationship over system models and over dependency graphs. Future work
concerns the derivation of an architecture and code from these models.
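A dependency graph over parameters directly yields a computation order, which is the architectural hint the abstract mentions. The sketch below uses invented placeholder parameter names, not the actual ETSI TR 101 290 hierarchy:

```python
# A dependency graph over measurement parameters: a topological order
# gives one admissible computation order for the SoC.  Parameter names
# are invented placeholders, not the real ETSI TR 101 290 parameters.
from graphlib import TopologicalSorter

deps = {
    "signal_quality": {"ber_after_viterbi", "mer"},
    "ber_after_viterbi": {"ber_before_viterbi"},
    "mer": {"constellation_samples"},
    "ber_before_viterbi": set(),
    "constellation_samples": set(),
}
order = list(TopologicalSorter(deps).static_order())
# every parameter appears after all the parameters it depends on
```

Refinement steps that add parameters or edges preserve this hierarchy, so the order (and hence the architectural decisions it suggests) is revalidated at each step.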
- Bertrand Meyer, Proving object-oriented components
- Carroll Morgan, Exact expected convergence time for Herman's Ring
"Herman's Ring" is an algorithm for self-stabilisation of a token ring, and
dates from about 1990. Its expected time to stabilisation was
originally stated to be O(n^2 log n); since then it has been improved to O(n^2).
Recently we have found an -exact- solution, in probabilistic programming
logic, for the expected time to stabilisation from initial states comprising
just three tokens (and it is Theta(n^2)). I will show how that is proved.
There is also a tabulation (using the PRISM probabilistic model-checker) of
expected convergence times from demonically selected initial states (i.e. any
odd number of tokens initially, not just three, from 1 up to the number of
processors N); it is calculated effectively by simulation (via Markov
matrices) for values of N from 1, 3, ... up to 17.
We noticed that each one of those steps-to-stabilisation results for the
worst case among demonically chosen initial token numbers, obtained with
PRISM, agrees with our exact value for just three tokens obtained via logic.
Can it be proved therefore that the three-token case is no faster on average
than any of the other odd-number-of-token cases, so that our exact
solution will apply in general?
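The protocol itself is short enough to simulate. The sketch below implements the standard description of Herman's ring (each token-holder keeps or passes its token with probability 1/2; colliding tokens annihilate in pairs) and estimates the expected stabilisation time for three equally spaced tokens, for comparison against the published exact value 4n^2/27; it is a simulation sketch, not the logical proof:

```python
# Simulation sketch of Herman's ring (illustrative, not the proof).
import random

def herman_step(tokens, n, rng):
    """One synchronous round: each token independently stays or moves one
    place clockwise; tokens meeting on the same cell annihilate in pairs,
    so the parity of the token count is preserved."""
    counts = [0] * n
    for pos in tokens:
        if rng.random() < 0.5:
            pos = (pos + 1) % n
        counts[pos] += 1
    return [i for i, c in enumerate(counts) for _ in range(c % 2)]

def stabilisation_time(tokens, n, rng):
    t = 0
    while len(tokens) > 1:
        tokens = herman_step(tokens, n, rng)
        t += 1
    return t

rng = random.Random(42)
n = 9
times = [stabilisation_time([0, 3, 6], n, rng) for _ in range(2000)]
mean = sum(times) / len(times)
# Three equally spaced tokens: the exact expected time 4*n^2/27 is 12 for
# n = 9, and the sample mean should land close to it.
```

For n = 3 one can check the formula by hand: with probability 3/4 a round produces a collision, so the expected time is 1 + (1/4)E, giving E = 4/3 = 4*9/27.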
- Madhusudan Parthasarathy, Pending Calls and Matching Returns in Temporal Logic
Model checking of linear temporal logic (LTL) specifications with respect to
pushdown systems has been shown to be a useful tool for analysis of programs
with potentially recursive procedures. LTL, however, can specify only
regular properties, and properties such as correctness of procedures with
respect to pre and post conditions, that require matching of calls and
returns, are not regular. In this talk, we introduce a temporal logic of
calls and returns (Caret) for specification and algorithmic verification of
correctness requirements of structured programs. The formulas of Caret are
interpreted over sequences of propositional valuations tagged with special
symbols "call" and "ret". Besides the standard global temporal modalities,
Caret admits the abstract-next operator that allows a path to jump from a
call to the matching return. This operator can be used to specify a variety
of non-regular properties such as partial and total correctness of program
blocks with respect to pre and post conditions. The abstract versions of the
other temporal modalities can be used to specify regular properties of local
paths within a procedure that skip over calls to other procedures. Caret
also admits the last-caller modality that jumps to the most recent pending
call, and such caller modalities allow specification of a variety of
security properties that involve inspection of the call-stack. Even though
verifying context-free properties of pushdown systems is undecidable, we
show that model checking Caret formulas against a pushdown model is
decidable. We present a tableau construction that reduces our model checking
problem to the emptiness problem for a Buchi pushdown system. The complexity
of model checking Caret formulas is the same as that of checking LTL
formulas, namely, polynomial in the model and singly exponential in the size
of the specification.
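The jump performed by Caret's abstract-next operator is the classic matching of calls and returns over a tagged word, which is easy to make concrete. The sketch below is illustrative code over words tagged "call"/"ret"/"int" (internal), not the tableau construction:

```python
# Sketch of the "jump to the matching return" that Caret's abstract-next
# operator performs over a tagged word (illustrative only).

def matching_returns(tags):
    """Map each call position to its matching return position,
    or None for a pending call with no matching return."""
    match = {}
    stack = []
    for i, tag in enumerate(tags):
        if tag == "call":
            stack.append(i)
            match[i] = None          # pending until a return matches it
        elif tag == "ret" and stack:
            match[stack.pop()] = i
    return match

# main calls p; p calls q, which returns; p's own return is still pending.
word = ["call", "int", "call", "ret", "int"]
matching_returns(word)   # {0: None, 2: 3}
```

Pending calls (position 0 above) are exactly what the caller modalities inspect: the positions still on the stack form the current call-stack, which is why stack-inspection security properties are expressible.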
- Benjamin C. Pierce, Harmony: A synchronization framework for tree-structured data
Increased use of optimistic data replication has led to increased interest
in synchronization technologies. These technologies are not only a fact of
life in present-day networks; they are fascinating, and they raise a host of
challenging scientific questions.
The goal of the Harmony project is to develop a generic framework for
synchronizing tree-structured data---i.e., a tool for propagating updates
between different copies, possibly stored in different formats, of
tree-shaped data structures. For example, Harmony can be used to synchronize
the bookmark files of several different web browsers, allowing bookmarks and
bookmark folders to be added, deleted, edited, and reorganized by users
running different browsers on disconnected machines. Other Harmony instances
under development include synchronizers for calendars (Palm DateBook, ical,
and iCalendar formats), address books, Keynote presentations, structured
documents, file systems, and generic XML and HTML.
The beginning of the talk will be a brief guided tour of the Harmony system,
touching on basic design issues (in particular, the tradeoffs between
Harmony's "state-based" architecture and "operation-based"
alternatives), on the core synchronization engine and its properties, and on
the domain-specific programming language used to transform "concrete data"
from the real world into abstracted forms suitable for synchronization.
Most of the talk will focus on current research challenges.
Harmony is joint work with Michael Greenwald and Alan Schmitt.
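The state-based flavour can be sketched with nested dicts as trees: compare two replicas against their last common version and propagate non-conflicting edits. This is an invented toy merge, not Harmony's actual algorithm:

```python
# Toy state-based synchronizer for tree-structured data (nested dicts),
# illustrating the flavour of Harmony's core.  Invented sketch only.

CONFLICT = object()

def sync(orig, a, b):
    """Return the merged tree; mark a subtree CONFLICT where both
    replicas changed it differently."""
    if a == b:
        return a                      # agreement (including both unchanged)
    if a == orig:
        return b                      # only b changed: take b's edit
    if b == orig:
        return a                      # only a changed: take a's edit
    if isinstance(a, dict) and isinstance(b, dict):
        keys = set(a) | set(b) | (set(orig) if isinstance(orig, dict) else set())
        out = {}
        for k in keys:
            o = orig.get(k) if isinstance(orig, dict) else None
            merged = sync(o, a.get(k), b.get(k))
            if merged is not None:    # None means "deleted on one side"
                out[k] = merged
        return out
    return CONFLICT                   # both changed the same leaf differently

orig = {"bookmarks": {"news": "bbc.co.uk", "mail": "gmail.com"}}
a    = {"bookmarks": {"news": "bbc.co.uk/news", "mail": "gmail.com"}}
b    = {"bookmarks": {"news": "bbc.co.uk", "mail": "gmail.com",
                      "blogs": "lambda-the-ultimate.org"}}
merged = sync(orig, a, b)
# merged keeps a's edit to "news" and gains b's new "blogs" entry
```

Real Harmony instances add the other half of the story: the domain-specific lenses that map concrete formats (browser bookmark files, calendar formats) into a common abstract tree before a merge like this runs.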
- Geoffrey Washburn
- Pamela Zave, Symmetry in network connections
Usually, protocols and features for telecommunications are highly
asymmetric--they make a strong distinction between the initiating and
receiving ends. However, experience shows that many feature functions are
symmetric, and many end-to-end connections consist of segments set up in
alternating directions. The question: how can we organize and analyze
network connections, when taking this richer view?
Pictures taken by Daniel Jackson, Bertrand Meyer, and Carroll Morgan.