Programming with Managed Time (essay + videos)

By Sean McDirmid (MSR) and Jonathan Edwards (MIT) — August 2014.

There is a huge gap between how programmers and computers experience time. Programmers must manually bridge this gap by choreographing program control flow so that all reads and writes occur in "correct" relative orders. But this is quite difficult: consider initializing objects so their fields are not read too early; asynchronously repairing views so that they are consistent with changing models; organizing analyses (e.g. compilation) into multiple passes so updates (e.g. symbol table entries) are visible where needed; or locking resources that can be manipulated from multiple threads. To make programming easier, the time gap could be eliminated by using timeless pure functions, but the ability to update state directly remains convenient, popular, and often necessary.

   

This essay is based on an Onward! paper (pdf) and follows a blog post.

 

    This essay is the second of three in a series following usable live programming and preceding experiments in code typography. 

Programming languages should address state update order by abstracting away from the computer’s model of time; i.e. they should manage time. Let's draw an analogy with managed memory: it is now widely accepted that languages should unburden us from the hard problem of correctly freeing memory. Languages should likewise correctly order state updates to liberate programmers from the tyranny of instruction counters without dooming them to not update state at all. But what would such a programming system look like?

 

    Automatic time management is not a new idea; e.g. Rich Hickey has also discussed it. 

 

Simultaneous Execution

Execution order is already irrelevant in pure timeless functional programming. However, what if execution order could be made irrelevant in the presence of state updates? All statements would appear to execute at the same time (simultaneously): all reads would be guaranteed to see all writes, no matter what lexical or execution order they occur in. Consider some code written in a Python-like programming language and environment:

    Simultaneous execution simply means that all operations appear to occur atomically. 


   

All videos will start and restart whenever you click on them. 

A cell is a stateful variable (like a memory cell), while the log method prints to the right-side screen. Here x and y are used to define z before they are assigned values; also, whenever x or y changes in value, z is recomputed automatically. Simultaneous execution is just an illusion, since code must still execute on a sequential CPU. An underlying programming model, Glitch, replays code repeatedly in a loop until all reads have seen all relevant writes; the code in the above example will execute twice so that the expression assigned to z can see the assignments to x and y. Replay is also triggered on any code change, since Glitch treats code as mutable state.
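A minimal sketch in ordinary Python of how replay-until-fixed-point can create the illusion of simultaneous execution; the Cell class, replay function, and thunk encoding here are hypothetical illustrations, not Glitch's actual API:

```python
class Cell:
    def __init__(self):
        self.value = None   # None means "not yet written"

def replay(statements, cells, max_rounds=10):
    """Re-execute every statement until no cell changes (a fixed point)."""
    for _ in range(max_rounds):
        before = [c.value for c in cells]
        for stmt in statements:   # lexical order is irrelevant
            stmt()
        if [c.value for c in cells] == before:
            return
    raise RuntimeError("no fixed point reached")

x, y, z = Cell(), Cell(), Cell()
program = [
    # z reads x and y *before* they are assigned below
    lambda: setattr(z, "value",
                    None if x.value is None or y.value is None
                    else x.value + y.value),
    lambda: setattr(x, "value", 1),
    lambda: setattr(y, "value", 2),
]
replay(program, [x, y, z])
print(z.value)  # 3: a second round lets z see the writes to x and y
```

This naive loop needs an extra round just to confirm the fixed point; a real implementation would trace read/write dependencies and replay only affected code.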

State and state updates can be fully encapsulated in methods as well as class-like traits. Any code inside a method or trait runs simultaneously with method callers and object creators; consider:

   

Glitch is based on speculative execution; see also transactions.  



 

The Vertex trait is extended by graph vertex objects whose edges are stored in private edges sets (state constructs like cells). The code of Vertex's constructor executes simultaneously with all addE method calls on its extending objects, and so can maintain all reachable vertices in a reach set. Reach sets in this example are interactively expanded and contracted as addE calls are added and removed.

Commutativity

To achieve the appearance of simultaneous execution, Glitch re-orders and rolls back state updates as needed during replay. More than one assignment to a cell at the same time is not allowed, as such assignments would not commute, i.e. they could not be re-ordered safely. Glitch rejects reassignment dynamically; consider:

    Our paper explains how Glitch supports retraction and non-monotonic change in the presence of cycles. 



 

 

    This is not valid as no fixed point is reached. 

On assignment to the x cell a second time, Glitch rejects the first assignment as a “conflict.” Also, when the last assignment is edited to refer to itself (x = x + 1), Glitch is unable to reach a fixed point in executing the code, and so execution diverges, incrementing x forever! Given that cycles can occur through multiple levels of indirection, Glitch is unable to detect them dynamically, let alone reject the bad ones; this is a drawback of our approach, but analogous to how languages do not usually deal with infinite loops and runaway recursion.
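A sketch of how such dynamic conflict detection might work, again with a hypothetical API: a cell remembers which statement last wrote it, and a write from a different statement is rejected, while replaying the same statement is allowed since it trivially commutes with itself.

```python
class Cell:
    def __init__(self):
        self.value = None
        self.writer = None  # identity of the statement that last wrote

    def assign(self, value, writer):
        if self.writer is not None and self.writer != writer:
            raise RuntimeError("conflict: two assignments to one cell")
        self.value, self.writer = value, writer

x = Cell()
x.assign(1, writer="stmt1")
x.assign(1, writer="stmt1")      # replay of the same statement: allowed
try:
    x.assign(2, writer="stmt2")  # a second, distinct assignment
except RuntimeError as err:
    print(err)                   # conflict: two assignments to one cell
```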

To cope with the lack of cell re-assignment, Glitch provides various accumulator state constructs to use in aggregation tasks; consider:



Here x is a sum accumulator that is incremented whenever the foo method is called. Only the final accumulated value of x can be observed, while += operations can be re-ordered and rolled back with corresponding -= operations.
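A sketch of a sum accumulator in plain Python: += contributions commute, and each hands back an undo thunk so a rolled-back replay can retract it with the matching -=. The add/undo API is an assumption for illustration, not Glitch's actual interface.

```python
class Sum:
    def __init__(self):
        self.total = 0

    def add(self, amount):
        self.total += amount
        def undo():              # the corresponding -= operation
            self.total -= amount
        return undo

x = Sum()
undo_a = x.add(3)   # foo() executed once...
undo_b = x.add(3)   # ...and a second time
undo_a()            # first execution rolled back during replay
print(x.total)      # 3: only the final accumulated value is observable
```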

Tick Tock Goes the Clock

Although simultaneous execution hides computer time, programmers must still deal with “real” time as defined by discrete event occurrences; e.g. when a key is pressed, a timer expires, or a client request is received. Event-handling code in Glitch executes differently from non-handler code: it only sees the state as it is at the event, and its updates are only visible afterward; consider:



In this code, a handler on a "tick" event, which fires every 20 milliseconds, is used to implement Newton's method for computing a square root. The vertical slider at the far right (beneath the log) can be used to "travel" through time, which here allows us to see intermediate iteration values. Time is still available for use in a program, but its effects are now deterministic through event handlers.
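A plain-Python rendering of what the tick handler computes: one Newton step toward sqrt(n) per tick. The step count and the choice of n are illustrative assumptions, not taken from the video.

```python
def newton_step(guess, n):
    return (guess + n / guess) / 2  # average the guess with n/guess

n, guess = 2.0, 1.0
history = [guess]        # the states the time slider scrubs through
for tick in range(6):    # one step per 20 ms tick
    guess = newton_step(guess, n)
    history.append(guess)
print(round(guess, 6))   # 1.414214
```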

Access to real time via event handling is necessary to encode reactive, interactive, or time-stepped behaviors; e.g. consider the example of a bouncing circle: 

    The example abuses time to escape into sequential execution; we do not condone nor condemn this behavior.  


    The bottom right part of the screen is a canvas that is used to render the circle's behavior.  

 

    This example is based on Bret Victor's "Make time visible" example in his seminal Learnable Programming essay; managed time is one step toward realizing many of the ideas within. 

The c object in this example is a circle whose radius is determined by r, position is determined by p, and velocity is controlled by dx and dy. The code in the tick handler then updates the position, dx, and dy as needed to handle wall collisions. The IDE is set up in this example to provide the program with 100 ticks on startup before pausing the execution. Beyond a "focus" frame, which is determined by the time slider, every fifth frame is rendered at reduced opacity to provide a "strobe" effect.
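A rough Python sketch of the tick handler's physics: the names r, dx, and dy mirror the example, but the canvas size, gravity constant, and starting state are assumptions.

```python
W, H, GRAVITY = 400.0, 300.0, 0.5

def tick(x, y, dx, dy, r):
    dy += GRAVITY                  # gravity accelerates the fall
    x, y = x + dx, y + dy          # step the position by the velocity
    if x - r < 0 or x + r > W:     # side-wall collision flips dx
        dx = -dx
    if y + r > H:                  # floor collision flips dy
        dy = -dy
    return x, y, dx, dy

x, y, dx, dy, r = 200.0, 50.0, 3.0, 0.0, 10.0
frames = []
for _ in range(100):               # the IDE's 100 startup ticks
    x, y, dx, dy = tick(x, y, dx, dy, r)
    frames.append((x, y))
strobe = frames[::5]               # every fifth frame, for the strobe effect
print(len(frames), len(strobe))    # 100 20
```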

The first change made in the example increases the circle's radius from 10 to 15; note that the ball's current and strobed renderings are automatically updated according to the change. The next change reduces x velocity and y gravity. After these changes, the "play" button on the slider is pressed, moving the simulation beyond its first 100 frames. However, only the last 100 frames are recorded and can be travelled to; old strobe frames are deleted as the window slides forward.

The code of the last example is messy: all the simulation state exists at the top level, and forces are mixed in with the step code. Glitch, however, makes it easy to encapsulate this logic into a trait:



Force is now aggregated through a force sum accumulator that is added to the velocity on every tick event; the velocity is in turn added to the position. Gravity is then implemented via a += op on force.
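A sketch of this refactored step in Python, with a per-tick force accumulator drained into velocity and then position (1-D for brevity; the integration scheme and numeric values are assumptions):

```python
class Body:
    def __init__(self):
        self.p = 0.0       # position
        self.v = 0.0       # velocity
        self.force = 0.0   # sum accumulator, reset each tick

    def add_force(self, f):
        self.force += f    # a commuting += op, like gravity's

    def tick(self):
        self.v += self.force  # accumulated force becomes velocity...
        self.p += self.v      # ...which in turn moves the position
        self.force = 0.0

b = Body()
for _ in range(3):
    b.add_force(-1.0)  # gravity contributes via +=
    b.tick()
print(b.p)             # -6.0
```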

Many continuous behaviors must execute after an event occurs; e.g. consider opening a window that exists until dismissed. Accordingly, we can specify code that is subject to simultaneous execution in time following an event. Consider subjecting our bouncing circle to a spring fixed to the mouse:

 

 

 

    This is not a video; nothing happens when you click on it. 

    The downmse and upmse events are mouse down and mouse up, respectively. The positionmse method gives the mouse's current position. 

When a mouse down event is detected, the original distance between the circle and the mouse at the event is "remembered" in the len variable to be used in an "after" block, which will execute continuously and simultaneously after the event occurs. The after block implements a spring with constant .005 (changed to .05 during the example) as well as damping. The block is stopped when the mouse goes up via a "break" statement.
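A hedged 1-D sketch of the after block's spring: the .005 constant follows the essay, but the damping factor, mouse positions, and step count are assumptions.

```python
K, DAMPING = 0.005, 0.98

def spring_step(pos, vel, mouse, rest_len):
    stretch = (mouse - pos) - rest_len   # displacement past the remembered length
    vel = (vel + K * stretch) * DAMPING  # Hooke's law, then damping
    return pos + vel, vel

pos, vel, mouse = 0.0, 0.0, 100.0
rest_len = mouse - pos   # "remembered" in len at the mouse-down event
mouse = 150.0            # the mouse then drags away
for _ in range(500):     # the after block runs until mouse up breaks it
    pos, vel = spring_step(pos, vel, mouse, rest_len)
# pos settles near mouse - rest_len = 50
```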

When the program is in play mode, user input, such as that related to the mouse, is enabled and recorded for use in replay. When the spring constant in the above example is changed from .005 to .05, the recording is updated as if the constant had always had that value. The program is then set to play mode for further interaction.

Experience

Managed time can make code much simpler to write. The IDE used for the examples of this essay was implemented in Glitch as a C# library. Glitch made it much easier to implement an interactive compiler and UI stack: for the most part, you simply write code as you would for the batch case, and it is then subject to replay in Glitch. Glitch also automatically deals with multi-pass algorithms. For example, when writing a compiler in Glitch, there is no reason to separate parsing, naming, and type checking into separate passes for the sake of the symbol table; Glitch simply replays the affected trees as the symbols they depend on are added.
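A hypothetical sketch of why replay removes the need for a separate naming pass: here, checking statements are replayed until the symbol table stops changing, so uses that lexically precede their definitions resolve anyway. The program encoding and check function are illustrative inventions, not our compiler's actual structure.

```python
def check(program, max_rounds=10):
    symbols = {}
    for _ in range(max_rounds):
        errors = []
        before = dict(symbols)
        for kind, name, *rest in program:
            if kind == "def":          # ("def", name, type)
                symbols[name] = rest[0]
            elif name not in symbols:  # ("use", name)
                errors.append(name)
        if symbols == before:
            return symbols, errors     # fixed point: errors are final
    raise RuntimeError("no fixed point reached")

# 'f' is used before it is defined; the second round resolves it anyway.
prog = [("use", "f"), ("def", "f", "int -> int")]
symbols, errors = check(prog)
print(errors)  # []
```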

Using Glitch as a library in C# is low-risk: whenever necessary, we can escape the confines of managed time back to the chaos of computer time. However, such embeddings neither fully test the limitations of managed time nor fully reap the benefits of simpler semantics. YinYang, the Python-like language used for the examples in this essay, is a “pure managed time” language without escape hatches. YinYang can be used to implement user interfaces, games, and even a compiler, but it is not clear how to express simple things like an in-place bubble sort without resorting to time (via event handling)! Many classic algorithms depend upon re-assignment in one time step and thus do not naturally transliterate into simultaneous execution; similar limitations occur in pure languages.

    See the paper for a complete discussion of related work. And no, Smalltalk does not already do this.  

Our current work has focused on programmability rather than performance: dependency tracing, recording history, state update logging, multi-phase replays, and dynamic code change all have significant performance costs. Initial experience with our prototype suggests that the fine-tuning allowed when Glitch is used as a library can scale well; e.g. there are no noticeable problems in the C# implementation of YinYang’s programming environment. We initially found YinYang itself to be very slow: small examples would execute and update quickly, but just a hundred frames of simulating one particle would bring the system to its knees. We were able to optimize some of this overhead away (the examples of this essay run smoothly), but there is clearly much more work to be done! We take heart from the history of managed memory: garbage collection performance was long a grave concern, and it was only when the benefits of managed memory became more widely appreciated that large investments in performance optimization were made, with highly successful results.

Live Programming

Glitch can incrementally repair program behavior that undergoes arbitrary changes; as shown in the above examples, this can be applied to changes in code as well as changes in state. The ability to change code in an executing program is an essential part of live programming, which provides programmers with continuous feedback on how their code executes as they edit it. This continuous feedback needs a steady frame to be useful, meaning (i) that relevant variables can be manipulated at locations within the scene (the framing part), and (ii) that variables are constantly present and meaningful (the steady part). Managed time addresses the “steady” aspect: given simultaneous execution, each state element has only one observable value between any two events, so no stepping context needs to be considered when examining a variable’s value.

    The paper talks a bit more about semantics, implementation, and performance. 

    A previous essay covers the framing aspect of live programming.  

Glitch can further provide the programming environment with access to the program’s history, supporting novel debugging experiences based on time travel, as envisioned by Bret Victor’s Learnable Programming essay and his earlier Inventing on Principle talk. As can be seen in the last two examples, a programmer can scrub through a program execution, allowing them to inspect its state at different times. For example, a programmer can inspect the trajectory of a bouncing ball by using a slider to visualize the ball at various points in time. The IDE in this essay also samples time using strobing; e.g. in the examples, various positions of a bouncing ball can be viewed in the same visualization, giving the programmer a more comprehensive view of how the ball moves over time.
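The sliding history window behind this time travel can be sketched in a few lines of Python: only the last 100 frames are kept, the slider picks a focus frame, and every fifth retained frame feeds the strobe rendering. The window and strobe sizes follow the essay; the frame representation is an assumption.

```python
from collections import deque

history = deque(maxlen=100)     # older frames fall off as play continues
for t in range(250):
    history.append({"t": t})    # record each frame's state
focus = history[-1]             # the time slider's focus frame
strobe = list(history)[::5]     # every fifth frame still recorded
print(focus["t"], len(strobe))  # 249 20
```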

We hope to enable these richer programming experiences in the general case. The work demonstrated in this essay steps in that direction.

    Also take a look at Chris Hancock's dissertation; he does a great job of defining and describing how live programming is useful.