Usable Live Programming

Programming today involves editing code while simulating in our heads how that code will run to ensure that program goals are met. To augment this mental simulation, editing is periodically interrupted by bouts of debugging to get feedback on how the code really executes. Live programming [Hancock03] promises much more fluid feedback between the programmer and a program that is executing while it is being edited. For example, in the following clip of a live programming session, a programmer writes code to display a circle (tap the clips below to play and replay):


This code is written in YinYang [McDirmid13], a research-oriented and very experimental programming language that is supported by its own live programming environment. YinYang's syntax is very Python-like with static type inference; type errors are displayed in the editor, but that is a topic for another day. The above code creates a new object inferred to be a circle that:

  1. is sized by assigning its radiusC property;
  2. is positioned by assigning its posW property to a point (composed via xy); and
  3. is colored blue by assigning its bgS property.
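
Since the clips may not play everywhere, here is a rough sketch of the circle-creation code in plain Python rather than YinYang; the Circle class, Point class, and xy helper below are hypothetical stand-ins for YinYang's built-ins, with only the property names (radiusC, posW, bgS) taken from the description above.

    # Hypothetical Python stand-ins for YinYang's built-in circle and point objects.
    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float
        y: float

    def xy(x, y):
        # Compose a point from x/y coordinates, as described above.
        return Point(x, y)

    @dataclass
    class Circle:
        radiusC: float = 0        # size
        posW: Point = None        # position
        bgS: str = "black"        # background/fill color

    c = Circle()
    c.radiusC = 60                # size; the kind of value tweaked live in the clip
    c.posW = xy(300, 200)         # position
    c.bgS = "blue"                # color
    print(c)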

The circle object on screen is affected by edited code as soon as it is recognized; e.g. the circle shrinks and grows as "6", "60", and "600" are assigned to radiusC. This kind of live feedback is useful for tweaking our code until the desired output is achieved; e.g. as in the following clip:


The code is edited from displaying a line of circles to displaying a ring of circles, adjusting numbers along the way until the effect looks right. The continuous execution feedback provided in YinYang makes it easier to hit desired programming targets, much like hitting a target with a water hose is easier than hitting it with a bow and arrow.
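
To make the line-to-ring edit concrete, here is a small sketch of the kind of arithmetic involved, again in plain Python rather than YinYang; the count, radius, and center values are made up for illustration.

    # Hypothetical sketch: positions for a line of circles vs. a ring of circles.
    import math

    N = 12                      # number of circles; the kind of value tweaked live
    R = 150                     # ring radius
    CX, CY = 300, 200           # center of the ring

    for i in range(N):
        # A line of circles would simply step along x:
        #   x, y = CX + i * 40, CY
        # A ring of circles instead places each circle at an angle around the center:
        angle = 2 * math.pi * i / N
        x = CX + R * math.cos(angle)
        y = CY + R * math.sin(angle)
        print(f"circle {i}: pos=({x:.0f}, {y:.0f})")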

Making Live Feedback More Useful

Although live execution feedback as presented so far is quite dazzling, is it useful? Not really; Bret Victor critiques Khan Academy's live programming learning environment in his Learnable Programming essay as follows:

We see code on the left and a result on the right, but it’s the steps in between which matter most. The computer traces a path through the code, looping around loops and calling into functions, updating variables and incrementally building up the output. We see none of this.

Live execution feedback alone is not very useful as programs involve intermediate computations that are not part of their comprehensible output. A debugger in a conventional IDE would make such computations observable through breakpoints and variable/value inspectors. However, what do breakpoints mean if the program is executing continuously? Variable inspectors of conventional IDEs would also draw attention away from the editing experience, leading to a programming experience that is fragmented in space as well as time.

Probing

We recast live programming as editing and debugging (not just executing) code, not just at the same time but also in the same space, by weaving debugging feedback directly into the editor. In YinYang, places in program execution, not just code, are navigable in the editor, where expression and statement values can then be probed directly. Consider the following clip, where the programmer debugs an implementation of a square-root (sqrt) procedure through the use of probing:


This clip starts with an empty sqrt procedure that simply returns its input. A '@' probe symbol is then prepended to the sqrt call on the first line of the editor, which causes the result of the call to be rendered in the line's gutter, where syntax/type error feedback is also provided. The implementation of sqrt is then changed to perform the first iteration of Newton's iterative square-root method. As this line is being typed, the value of the probe on the first line changes immediately to reflect the new value being returned.
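
Roughly, the edit in the clip amounts to something like the following; this is a Python approximation of the YinYang code, with the '@' probe shown only as a comment since it is an editor annotation rather than ordinary syntax.

    def sqrt(x):
        y = x                   # initial guess: the empty version just returned x
        y = (y + x / y) / 2     # first iteration of Newton's method
        return y

    # In the editor: @ sqrt(16)
    # The probe would render the call's result in the gutter; here we just print it.
    print(sqrt(16))             # 8.5 after a single Newton iteration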

Probes also work inside procedure calls, but because a procedure can be called in many calling contexts, a specific calling context is indicated using the editor's "go to definition" feature: the user clicks on a call to navigate to the call's definition, and the call's context is then also used for probing within that definition. Consider this short excerpt from the above clip:


In this clip, the call temporarily turns red when the user clicks it; the cursor then jumps down to the definition of sqrt, with an A2 marker inserted below both the call and the called definition to indicate that they are linked. Probes are then inserted to observe values of the expression that updates the y variable, which, as shown here, includes infix operators as well as method calls, so multiple values can easily be observed per expression or statement.

With probing, a programmer can debug code without leaving the editor. That probe annotations are specified in code rather than in some separate "watch window" makes the feature even more accessible during the editing experience. On the other hand, probes are somewhat limited in that calling or loop contexts have to be specified through tedious navigation, which can be a PITA for a large program execution. Probes are also not that useful for locating problems in a program; something more is needed!

Tracing

A very wise man once said that "the most effective debugging tool is still careful thought, coupled with judiciously placed print statements" [Kernighan]. Finding problems in a program by probing execution alone is often like searching for a needle in a haystack. For this reason, YinYang program execution can be summarized into a trace through simple print-like trace statements added to the code by the programmer. A program's trace is updated live as the program is being edited, which is shown in the following clip:


In this clip, the sqrt procedure is redefined to use a loop, and a trace statement is added to track the value of y at the beginning of each iteration, which helps in tweaking the code so that square roots of at least 100 are supported. Since tracing is just another form of output, that trace entries can be updated live is not very surprising. However, trace entries in YinYang are also navigable: clicking on an entry takes the user both to the lexical location of the trace statement that created it and to the context that the statement executed under, which then allows for probing. Consider how probing and tracing are used together in a final video clip:
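
For reference, the loop-based sqrt with a trace statement might look roughly like the following in Python; the convergence threshold and the print-style trace are stand-ins for YinYang's actual trace construct.

    def sqrt(x):
        y = x
        while abs(y * y - x) > 0.001:   # iterate until the guess is close enough
            print("y =", y)             # trace: value of y at the start of each iteration
            y = (y + x / y) / 2         # Newton update
        return y

    print(sqrt(150))                    # traces every iteration's y, then the final result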


The trace entry created in the third iteration of sqrt's loop is first clicked in this clip, jumping the cursor to the trace statement's location. Next, a probe is added to the y variable in the trace statement, revealing that y is 26.24 (the same as in the trace entry); other probes are then added to observe intermediate computations in the loop. Other trace entries are then clicked to observe computation values for different loop iterations. As far as debugging is concerned, there is a lot of synergy between probing and tracing: tracing provides an overview of program execution so the programmer can identify a problem, while trace entries act as anchors for probing the execution contexts involved in that problem.

Reality

Although live programming as shown here is totally workable on small examples, it is not yet ready for “real” programming. If live programming is truly to be useful, it has to be more than just a toy! Jonathan Edwards mentions that, in order for us to make these kinds of experiences real, we must come up with new ways to deal more sanely with time and change. I have begun work on a new programming model, called Glitch, that automatically manages “change” in code and state much like garbage collection manages memory. YinYang’s editor and compiler are already built using Glitch (albeit through C#), and we are finding it to be much more powerful than other reactive programming frameworks currently out there.

Other tasks on the TODO list:

  • YinYang currently only supports batch programs or interactive programs where state exists outside of the live programming loop. Glitch handles change in input and code over time, but not yet time itself, which is important for feedback on interactive programs that handle multiple events over time. YinYang’s UX must also deal with the presentation of time.
  • For UI programming, we’d like to treat UI widgets like trace entries so that they can be used as execution anchors. The idea is to provide complete navigability between UI and program execution, allowing the programmer to quickly move between the two. The idea of navigability was introduced in previous live programming work [Burckhardt13].
  • Being based on Glitch, YinYang naturally supports reactive programming (like an FRP language) as well as safe concurrent execution based on optimistic speculative assumptions that can be rolled back if they are discovered to be wrong, similar to STM. Basically, the capabilities needed to support live programming are related to other reactive, concurrent, and asynchronous features that are also very useful.

History and Related

Live programming improves on REPLs, which provide feedback on code as it is entered line by line but do not allow previously entered code to be edited. Live programming also represents a significant advance over Smalltalk-style liveness, where code and objects can be changed while a program is running but the program’s meaning is not retroactively and transparently repaired according to the newly edited code. [Hancock03] identifies the importance of making useful observations about execution in the experience; quote (bottom of page 56):

… liveness is defined as a feedback property of a programming environment, and this is the way it is commonly discussed in the literature. And yet, if making a live programming environment were simply a matter of adding “continuous feedback,” there would surely be many more live programming languages than we now have. As a thought experiment, imagine taking Logo (or Java, or BASIC, or LISP) as is, and attempting to build a live programming environment around it. Some parts of the problem are hard but solvable, e.g. keeping track of which pieces of code have been successfully parsed and are ready to run, and which ones haven’t. Others just don’t make sense: what does it mean for a line of Java or C, say a = a + 1, to start working as soon as I put it in? Perhaps I could test the line right away—many programming environments allow that. But the line doesn’t make much sense in isolation, and in the context of the whole program, it probably isn’t time to run it. In a sense, even if the programming environment is live, the code itself is not. It actually runs for only an infinitesimal fraction of the time that the program runs.

Hancock’s Flogo II supports observing values in an executing program along with control-flow states, although only in real time, without support for whole-program re-execution. [Edwards04] goes a step further by having the IDE continuously re-execute examples in the form of an “example-enlightened” editor. YinYang borrows from both in its probing and tracing experience.

I started my work on live programming a few years ago [McDirmid07]; check out a short video of the resulting programming environment in use:


This language, called SuperGlue [McDirmid06], was inexpressive in the sense that it was very declarative; at the time it was not clear how live programming could be done for an imperative language like, say, Python. Also, while the experience was dazzling, it did not seem very useful, so I gave up for a while. However, watching Bret Victor's talk [Victor11] and reading [Victor12] reinvigorated my motivation to work in this field; see [McDirmid13] for a more detailed description of the work presented in this essay. Recently, Chris Granger and company have been doing wonderful things in the area of interactivity and programming with LightTable, a product that should be ready for use soon! On the other hand, live programming is a research topic with many open questions that should keep us busy for a while.