Abstract. The combination of monads and effects leads to a clean programming paradigm that is easy to reason about. Monadic programming alone, however, can be cumbersome, as it requires explicit lifting and binding. In this paper, we combine monads and effects within a single programming paradigm: we use monads to define the semantics of effect types, and then use the effects to program with those monads. In particular, we implemented an extension of the effect type system of Koka [20] with user-defined effects. We use a type-directed translation to automatically lift such effectful programs into monadic programs by inserting bind and unit operations. As such, these user-defined effects do not just introduce a new effect type, but also enable full monadic abstraction and let us "take control of the semicolon" in a typed and structured manner. We give examples of various abstractions like ambiguous computations and parsers. All examples have been implemented in the Koka language, and we describe various implementation issues and optimization mechanisms.
Monads (proposed by Moggi and others [24, 35]) are used in programming languages to wrap effectful computations, but they can be cumbersome to program with as they require explicit lifting and binding. In this paper, we combine monads and effect typing (proposed by Gifford and others [11, 32]) within a single programming paradigm: we use monads to define the semantics of effect types, and then use the effect types to program with those monads.
We implemented these ideas as an extension of the effect type system of Koka [20] – a strict, JavaScript-like, strongly typed programming language that automatically infers the type and effect of functions. Koka has a type inference system for a set of standard effects like divergence, exceptions, heap operations, input/output, etc. Here we extend the effect system with monadic user-defined effects, where we can define our own effect in terms of any monad. As an example, we define an amb effect for ambiguous computations [17]. In particular, we would like to have ambiguous operations that return one of many potential values, but in the end get a list of all possible outcomes of the ambiguous computation. Using our new effect declaration we can define the semantics of the amb effect in terms of a concrete list monad:
effect amb〈a〉 = list〈a〉 {
function unit( x ) { [x] }
function bind( xs, f ) { xs.concatMap(f) }
}
where unit and bind are the usual list-monad definitions. Using the above effect definition we can write functions that have the ambiguous amb effect. For example, we can write a flip primitive that returns either true or false, and use it to compute the truth table of xor:
function flip : () → amb bool
function xor() : amb bool {
val p = flip()
val q = flip()
(p||q) && not(p&&q) // p,q : bool
}
Note how the result of flip is just typed as bool (even though amb computations internally use a list monad of all possible results). Furthermore, unlike languages like Haskell, we do not need to explicitly lift expressions into a monad, or explicitly bind computations using do notation.
Translation. Koka uses an automatic type-directed translation that translates a program with user-defined effect types into a corresponding monadic program. Internally, the previous example gets translated into:
function xor() : list〈bool〉 {
bind( flip(), fun(p) {
bind( flip(), fun(q) {
unit( (p||q) && not(p&&q) )
})})}
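To make the translation concrete, here is a minimal JavaScript sketch of this amb translation (Koka itself compiles to JavaScript; the names unit, bind, flip, and xor mirror the code above, but this is an illustrative sketch, not Koka's actual output):

```javascript
// List monad for amb: unit wraps a value, bind is concatMap (flatMap in JS).
const unit = (x) => [x];
const bind = (xs, f) => xs.flatMap(f);

// flip returns both possible booleans at once.
const flip = () => [false, true];

// The translated xor: each `val p = flip()` becomes a bind whose
// continuation is the rest of the function body.
function xor() {
  return bind(flip(), (p) =>
    bind(flip(), (q) =>
      unit((p || q) && !(p && q))));
}
```

Evaluating xor() yields the truth table [false, true, true, false], one entry per combination of the ambiguous results.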
Here we see how the unit
and bind
of the effect declaration
are used: bind
is inserted whenever a monadic value is returned
and passed the current continuation at that point.
The capture of the continuation at every bind
makes monadic effects
very expressive. For example, the amb
effect can cause
subsequent statements to be executed multiple times, i.e. once for every
possible result. This is somewhat dual to the builtin exception effect
which can cause subsequent statements to not be executed at all, i.e.
when an exception is thrown. As such, this kind of expressiveness
effectively lets us take "control of the semicolon".
The marriage of effects and monads. Translating effectful to monadic code is not new. Wadler and Thiemann [36] show that any effectful computation can be transposed to a corresponding monad. If τ* is the call-by-value type translation of τ, and T_σ is the corresponding monad of an effect σ, they show how an effectful function of type τ1 →σ τ2 corresponds to a pure function with a monadic type τ1* → T_σ τ2*. In this article, we translate any effectful function of type τ1 → 〈σ|e〉 τ2 to the function τ1 → e T_σ τ2. This is almost equivalent, except for the e parameter, which represents arbitrary built-in effects like divergence or heap operations. If we assume e to be empty, i.e. a pure function as in Wadler and Thiemann's work, then we have an exact match! As we shall see, due to the non-monadic effects e as an extra parameter, our monads are effectively indexed monads or polymonads [14] instead of regular monads.
Our contributions are summarized as follows:
Using the correspondence between monads and effects [36], we propose a novel system where you define the semantics of an effect in terms of a first-class monadic value, but you use the monad through a first-class effect type. We build on the existing Koka type system [20] to incorporate monadic effects with full polymorphic and higher-order effect inference.
We propose (§ 3) a sound type-directed monadic translation that transforms a program with effect types into one with corresponding monadic types. This translation builds on our earlier work on monadic programming in ML [31] and automatically lifts and binds computations.
In contrast to programming with monads directly (as in Haskell), programming with monadic effects integrates seamlessly with built-in effects: there is no need for families of functions like map and mapM, or other special monadic syntax, as we further explain in § 2.2.1.
Types tell us about the behavior of functions. For example, the ML type of a function tells us that the function is well defined on inputs of its argument type and returns values of its result type. But that is only one part of the story: the ML type tells us nothing about all other behaviors, i.e. whether it accesses the file system, throws exceptions, or never returns a result at all.
Koka is a strict programming language with a type system that tracks effects. The type of a function in Koka is of the form a → e b, signifying a function that takes an argument of type a, returns a result of type b, and may have a side effect e. We can leave out the effect and write a → b as a shorthand for a total function without any side effect: a → total b. A key observation in Moggi's early work on monads [24] was that values and computations should be assigned different types. Koka applies that principle: effect types only occur on function types, and any other type, like int, truly designates an evaluated value that cannot have any effect^{1}.
In contrast to many other effect systems, the effect types are not just labels that are propagated: they truly describe the semantics of each function. As such, it is essential that the basic effects include exceptions (exn) and divergence (div). The deep connection between the effect types and the semantics leads to strong reasoning principles. For example, Koka's soundness theorem [20] implies that if the final program does not have an exn effect, then its execution never results in an exception (and similarly for divergence and state).
Exceptions in Koka can be raised using the primitive error
function:
error : string → exn a
The type shows that error
takes a string as an argument and may
raise an exception. It returns a value of any type! This is
clearly not possible in a strongly typed parametric language like Koka,
so we can infer from this type signature that error
always raises an
exception. Of course, effects are properly propagated so the function
wrong
will be inferred to have the exn
type too:
function wrong() : exn int { error("wrong"); 42 }
Exceptions can be detected at runtime (unlike divergence) so we can
discharge exceptions using the catch
function:
function catch( action : () → exn a,
handler : exception → a) : a
To catch exceptions we provide two arguments: an action
that may throw
an exception and an exception handler
. If action()
throws an
exception the handler
is invoked, otherwise the result of the
action()
is returned. In both cases catch
has a total
effect: it
always returns a value of type a
. For example, function pure
always
returns an int
:
function pure() : int { catch( wrong, fun(err){ 0 } ) }
In reality, the type of catch
is more polymorphic: instead of just
handling actions that can at most raise an exception, it accepts
actions with any effect that includes exn
:
function catch( action : () → 〈exn|e〉 a,
handler : exception → e a ) : e a
The type variable e
applies to any effect. The type expression
〈exn|e〉
stands for the effect row that extends the effect e
with the
effect constant exn
. Effectively, this type captures that given an
action that can potentially raise an exception, and perhaps has other
effects e
, catch
will handle that exception but not influence any of
the other effects. In particular, the handler
has at most effect e
.
For example, the result effect of:
catch( wrong, fun(err) { print(err); 0 } )
is console
since the handler uses print
. Similarly, if the handler
itself raises an exception, the result of catch
will include
the exn
effect:
catch( wrong, fun(err) { error("oops") } )
Apart from exceptions, Koka supplies more built-in effects: we already mentioned div that models divergence; there is also io to model interaction with input/output, ndet to model non-determinism, heap operations through alloc, read, and write, and the list goes on. For all built-in effects, Koka supplies primitive operators that create (e.g. error, random, print) and sometimes discharge (e.g. catch, timeout, or runST) the effect.
The main contribution of this paper is how we extend Koka so that the user can define her own effects, by specifying the type and meaning of new effects and defining primitive operations on them.
In the introduction we saw how one can define and use the ambiguous amb effect with flip and xor operations. We now discuss the definition and translation in more detail. The amb effect is defined using an effect declaration:
effect amb〈a〉 = list〈a〉 {
function unit( x : a ) : list〈a〉 { [x] }
function bind( xs : list〈a〉, f : a → e list〈b〉 ) : e list〈b〉 {
xs.concatMap(f)
}
}
Defining the amb effect amounts to defining the standard list monad, which can be further simplified by removing the optional type annotations. Given the above definition, a new effect type amb is introduced, and we know: (1) how to represent amb〈a〉 values, namely as a list〈a〉, and (2) its monadic operations unit and bind.
Moreover, with the above definition Koka automatically generates the
to_amb
and from_amb
primitives, and a monadic type alias:
function to_amb ( xs : list〈a〉 ) : amb a
function from_amb ( action: () → amb a ) : list〈a〉
alias M_amb〈a〉 = list〈a〉
that allow us to go from monadic
values to effect types and vice versa. These are basically typed versions
of the reify
and reflect
methods of Filinski's monadic
embedding [8].
We use the above primitives to define the flip function that creates the ambiguous effect, and the function main that evaluates the effectful computation:
function flip() : amb bool { to_amb( [False,True] ) }
function main() : console () { print( from_amb(xor) ) }
When we evaluate main we get a list of all possible output values: [False,True,True,False]. One can extend this mechanism to, for example, return a histogram of the results, or to general probabilistic results [17, 31].
Later we discuss the more interesting effect of parsers (§ 2.3), but before that, let's discuss in more detail how we do a monadic translation of effects.
Koka uses a type-directed translation to internally translate effectful to monadic code. As shown in the introduction, the xor function is translated as:
function xor() : list〈bool〉 {
bind( flip(), fun(p) {
bind( flip(), fun(q) {
unit( (p||q) && not(p&&q) )
})})}
In particular, bind
is inserted at every point where a
monadic value is returned, and passed the current continuation at that
point. Since flip
has an ambiguous result, our type-directed
translation binds its result to a function that takes p
as an argument
and similarly for q
. Finally, the last line returns a pure boolean
value, but xor
's result type is ambiguous. We use unit
to lift the
pure value to the ambiguous monad. We note that in Koka's actual
translation, xor
is translated more efficiently using a single map
instead of a unit
and bind
.
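The map-based optimization can be sketched in JavaScript as follows (an illustrative sketch, assuming the same list-monad encoding as before, not Koka's actual output): the trailing bind/unit pair collapses into a single map, avoiding the intermediate singleton lists that unit would allocate.

```javascript
const bind = (xs, f) => xs.flatMap(f);
const flip = () => [false, true];

// Optimized translation of xor: the innermost bind(…, q => unit(…))
// becomes a plain map over the list monad.
function xorOptimized() {
  return bind(flip(), (p) =>
    flip().map((q) => (p || q) && !(p && q)));
}
```

The result is the same truth table, [false, true, true, false], but the last statement no longer allocates a singleton list per branch.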
The translation to monadic code is quite subtle and relies crucially
on type information provided by type inference. In particular,
the intermediate core language is explicitly typed à la System F
(§ 3.1). This way, we compute effects precisely and
determine where bind
and unit
get inserted
(§ 3.3). Moreover, we rely on the user to ensure
that the unit
and bind
operations satisfy the monad
laws [35], i.e. that unit
is a left and right identity
for bind
, and that bind
is associative. This is usually the case
though; in particular because the effect typing discipline ensures
that both unit
and bind
are total and cannot have any side effect
(which makes the translation semantically robust against rewrites).
One of the crucial features of Koka is effect polymorphism. Consider the
function map:
function map(xs : list〈a〉, f : (a) → e b) : e list〈b〉 {
match(xs) {
Nil → Nil
Cons(y, ys) → Cons( f(y), map(ys,f) )
}
}
The function map
takes as input a function f
with some effect e
.
Since it calls f
, map
can itself produce
the effect e
, for any effect e
. This means that we can use
such existing abstractions on user defined effects too:
function xor() {
val [p,q] = map( [1,2], fun(_) { flip() } )
(p||q) && not(p&&q)
}


Unfortunately, this leads to trouble when doing a type directed
translation: since the function passed to map
has a monadic effect, we
need to bind
the call f(y)
inside the map
function! Moreover,
since we can apply map
to any monadic effect, we need to
dynamically call the right bind
function.
The remedy is to pass Haskelllike dictionaries or monad interfaces to
effect polymorphic functions. In our case, a dictionary is a structure
that wraps the monadic operators bind
and unit
. The dictionaries are
transparent to the user and are automatically generated and inserted.
During the translation, every effect polymorphic function takes a
dictionary as an additional first argument. Figure 1 shows
how the map
function gets translated.
Now that internally every effect polymorphic function gets an extra
dictionary argument, we need to ensure the corresponding dictionary is
supplied at every callsite. Once again, dictionary instantiation is
typedirected and builds upon Koka's explicitly typed intermediate core
language. Whenever a polymorphic effect function is instantiated with a
specific effect, the type directed translation automatically inserts the
corresponding dictionary argument. Figure 1 shows this
in action when we call map
inside the xor
function.
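A hedged JavaScript sketch of this dictionary-passing translation (the names dictAmb, dictId, and the list-returning helper flip are illustrative; the real translation is performed on Koka's typed core):

```javascript
// Monad dictionaries wrap unit and bind for a specific effect.
const dictAmb = { unit: (x) => [x], bind: (xs, f) => xs.flatMap(f) };
const dictId  = { unit: (x) => x,   bind: (x, f) => f(x) };  // identity monad

// Translated map: the call f(y) is bound through the dictionary, and
// the recursive call threads the same dictionary along.
function map(d, xs, f) {
  if (xs.length === 0) return d.unit([]);
  return d.bind(f(xs[0]), (y) =>
    d.bind(map(d, xs.slice(1), f), (ys) =>
      d.unit([y, ...ys])));
}

const flip = () => [false, true];
```

Here map(dictAmb, [1, 2], _ => flip()) returns all four assignments [[false,false], [false,true], [true,false], [true,true]], while map(dictId, [1, 2], x => x * x) behaves like the ordinary map and returns [1, 4].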
We can still use map
with code that has a nonmonadic effect and in
that case the translation will use the dictionary of the primitive
identity monad, e.g. map( dict_id, [1,2], sqr )
.
A naïve translation is not very efficient though: always using the
monadic version of map
introduces a performance penalty to all code,
even code that doesn't use any monadic effect. As shown in
§ 4.1, we avoid this by careful translation. For every
effect polymorphic function, we generate two versions: one that takes a
monad dictionary, and another that has no monadic translation at all.
When instantiating map
we use the efficient non-monadic version unless there is a monadic effect. This way the performance of code with non-monadic effects is unchanged.
Being able to reuse any previous abstractions when using monadic effects is very powerful. If we add user-defined effects to a function, only the type of the function changes. Contrast this to Haskell: when inserting a monad, we not only need to do a non-trivial conversion of the syntax to do notation, but we also need to define and use monadic counterparts of standard functions, like mapM for map.
Koka allows combining user-defined effects. Consider a user-defined behaviour effect, which represents computations whose value varies with time, as in functional reactive programs [7]. We encode the beh effect as a function from time to a. Since a behaviour is a function, it may have effects itself: it can diverge or throw exceptions, for example. This means that we need to parameterize the beh effect with two type parameters: one for the value a and one for the effect e:
effect beh〈e, a〉 = time → e a { ... }
With the above definition, Koka automatically creates a type alias
for the behavioral monad and the respective unit
and bind
operators:
alias M_beh〈e, a〉 = time → e a;
ub : a → e M_beh〈e, a〉;
bb : (M_beh〈e, a〉, a → e M_beh〈e, b〉) → e M_beh〈e, b〉
As with the amb
effect, the user can define primitives that,
for example, return the temperature and humidity over time:
temp : () → beh int;
hum : () → beh int;
We use these primitives to define a function go_out that states that one goes out when the temperature is above 70°F and the humidity below 80%. Koka automatically translates this effectful function to its monadic version.
Next, we want to add ambiguity to the above function. Following Swamy et al. [31], we combine the ambiguous and behavioral effects by tupling them:
effect 〈amb, beh〉〈e, a〉 = time → e list〈a〉 { ... }
and Koka creates the appropriate monadic operators
alias M_ab〈e, a〉 = time → e list〈a〉;
uab : a → e M_ab〈e, a〉;
bab : (M_ab〈e, a〉, a → e M_ab〈e, b〉) → e M_ab〈e, b〉;
Then, we define morphisms to lift from a single effect to the joined effect:
morphism amb 〈amb, beh〉{ fun(xs){ fun(t){ xs }} }
morphism beh 〈amb, beh〉{ fun(b){ fun(t){ [b(t)] }} }
With the above morphism definitions, Koka automatically derives internal morphism functions:
a2ab :: M_amb 〈e, a〉 → e M_〈amb, beh〉〈e, a〉;
b2ab :: M_beh 〈e, a〉 → e M_〈amb, beh〉〈e, a〉;
and uses them to automatically translate our modified go_out function that combines the two user-defined effects.
This technique for combining monads by tupling is taken from [31]. But, as further discussed in § 5, our current work, though highly inspired by that work, crucially differs from [31] in that the use of effect polymorphism (instead of the effect subtyping used previously) makes types much simpler.
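The tupled monad and its morphisms can be sketched in JavaScript (uab, bab, a2ab, and b2ab follow the names above; the temp signal and the go_out logic are hypothetical stand-ins for illustration):

```javascript
// M_ab⟨a⟩ = time → list⟨a⟩: unit and bind of the tupled ⟨amb,beh⟩ monad.
const uab = (x) => (t) => [x];
const bab = (m, f) => (t) => m(t).flatMap((v) => f(v)(t));

// Morphisms lifting each single effect into the combination.
const a2ab = (xs) => (t) => xs;        // amb: list⟨a⟩ → M_ab
const b2ab = (b)  => (t) => [b(t)];    // beh: (time → a) → M_ab

// A hypothetical behaviour and an ambiguous choice, combined:
const temp = (t) => 70 + t;            // temperature rises with time
const goOut = bab(b2ab(temp), (tv) =>
  bab(a2ab([false, true]), (lazy) =>
    uab(tv > 70 && !lazy)));
```

Evaluating goOut(5) samples the behaviour at time 5 (temperature 75) and returns one boolean per ambiguous branch: [true, false].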
There are various language design aspects with regard to morphism
declarations – due to space restrictions we highlight the most
important ones and defer to [31] and [14]
for a more in-depth discussion. First of all, since effect rows are
equivalent up to reordering of labels, we can only declare one combined
monad for a specific set of userdefined effects. For example, we can
combine 〈amb,beh〉
in only one of the two possible ways (within
the scope of a module). Moreover, the compiler rejects duplicate
definitions. Finally, if we assume that the morphism laws hold,
the compiler could derive morphisms from the existing ones, i.e.
morphisms from ma
to mb
, and mb
to mc
can be combined to give
rise to a morphism from ma
to mc
.
Currently, in our system we assume that the user provides all required morphisms explicitly,
but we plan to implement automatic morphism derivation.
User-defined effects, like amb, interact with built-in effects like state, divergence, and exceptions. The formal semantics of Koka [20] are unchanged in our system, and we define the semantics of the user-defined effects simply as a monadic transformation. As such, if we viewed the effects as a stack of monad transformers, the user-defined effect would come last, with all built-in effects transforming it, i.e. something like div〈st〈exn〈amb〈a〉〉〉〉.
These semantics still require careful compilation; for example, it is
important when doing the internal monadic translation to properly capture
local variables in the continuation functions passed to bind
.
Here is an example of a more subtle interaction: if we use mutable variables in the ambiguity monad, we may observe that computations run multiple times:
function strange() : amb bool {
var i := 0
val p = flip()
val q = flip()
i := i+1
if (i ≥ 4) then True else (p||q) && not(p&&q)
}
In this example, we define and increment the mutable variable i
. The
function strange
itself does not have a stateful effect (st〈h〉
)
because the mutability is not observable from outside and can be
discharged automatically through Koka's higherranked type
system [18, 20]. However, executing
run(strange)
results in [False,True,True,True]
where inside the body
of strange
we can observe that some statements are executed multiple
times. This shows the importance of strong typing: in an IDE one would
see that the flip()
invocations have an amb
effect that causes the
following statements to potentially execute more than once. This is
similar for exceptions, where statements following invocations of
functions that may raise exceptions, may not execute at all.
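The interaction can be replayed with the list-monad sketch from before (illustrative JavaScript, not Koka's output): the continuations passed to bind close over the local mutable variable i, so the increment runs once per ambiguous branch.

```javascript
const unit = (x) => [x];
const bind = (xs, f) => xs.flatMap(f);
const flip = () => [false, true];

function strange() {
  let i = 0;  // local mutable variable, shared by all four branches
  return bind(flip(), (p) =>
    bind(flip(), (q) => {
      i = i + 1;  // executed once for each (p,q) combination
      return unit(i >= 4 ? true : (p || q) && !(p && q));
    }));
}
```

Here strange() returns [false, true, true, true]: the last branch observes i === 4 even though i was declared (and initialized) only once.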
Under the monadic semantics, the interaction with built-in effects is more or less what one would expect, with one exception: the exception effect does not play nicely with certain user-defined effects, due to the (expected) lexical scoping of catch. Exceptions interact with amb as expected, but this is not the case in the context of the parser effect, as we further discuss in § 2.3.1.
We conclude the overview with a more advanced example in the form of monadic
parsers. A parser can be defined as a function that consumes the input
string and returns a list of (all possible) pairs of parsed tokens and
the remaining input: string → list〈(a,string)〉
. This representation
is quite standard but many other designs are
possible [15, 21].
Since a parser (like the behaviour effect) is a function, it may have effects itself: parsers can diverge or throw exceptions, for example. This means that we need to parameterize the parser effect with two type parameters (instead of one):
effect parser〈e,a〉 = string → e list〈(a,string)〉 {
function unit( x ) { return fun(s) { [(x,s)] } }
function bind( p, f ) {
return fun(s) {
p(s).concatMap( fun(r) { f(r.fst)(r.snd) } )
}}
}
Given the above definition, Koka automatically derives the conversion functions:
function to_parser( p : string → e list〈(a,string)〉 )
: 〈parser|e〉 a
function from_parser( action : () → 〈parser|e〉 a )
: e (string → e list〈(a,string)〉)
which can be used by the parserlibrary developer to build primitive
parsing operators as shown in Figure 2: parse
that takes
a parsing computation and an input string and runs the parser;
succeed(x)
that returns its argument x
, without consuming the input;
satisfy(p)
that parses the string iff it satisfies p
; and
choice(p_{1}, p_{2})
that chooses between two parsers p_{1}
or p_{2}
.
Note how the effect e
in from_parser
occurs both as the effect of the
function, but also in the returned parser function. Essentially this is
because we cannot distinguish at the type level whether an effect occurs
when constructing the parser (i.e. before the first bind
), or whether
it occurs when running the parser.
function parse( p : () → 〈parser|e〉 a, input : string
) : e list〈(a,string)〉 {
from_parser(p)(input)
}
function succeed( x : a ) : parser a {
to_parser( fun(input) { [(x,input)] } )
}
function satisfy( pred : (string) → maybe〈(a,string)〉 ) {
to_parser( fun(input) {
match(pred(input)) {
Just((x,rest)) → [(x,rest)]
Nothing → []
}} )
}
function choice( p_{1} : () → 〈parser|e〉 a,
p_{2} : () → 〈parser|e〉 a ) : 〈parser|e〉 a {
to_parser( fun(input) {
match (parse(p_{1},input)) {
Nil → parse(p_{2},input)
res → res
}} )
}
Having set up the parser effect and its primitives, we can combine them to construct other parsers. For example, many(p) is a parser that applies p zero or more times. A digit can be parsed using satisfy with an isDigit predicate. Combining these two, many(digit) gives a list of parsed digits.
function main(input : string) : div list〈(int,string)〉 {
parse(integer, input)
}
function integer() : 〈parser,div〉 int {
val ds = many(digit)
ds.foldl(0,fun(i,d) { i*10 + d })
}
function digit() : parser int { satisfy( ... ) }
function many( p : () → 〈parser,div|e〉 a )
: 〈parser,div|e〉 list〈a〉 {
choice { Cons(p(),many(p)) } { succeed(Nil) }
}
Running main("12a")
now results in [(12,"a")]
. Note also how in the
integer
function we combine parser results
(many(digit)
) with pure library functions (foldl
).
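The parser effect above can be sketched as plain JavaScript combinators (an illustrative, slightly simplified sketch: parsers are passed around directly rather than as thunks, and pairs are two-element arrays):

```javascript
// A parser is a function string → list of [value, rest] pairs.
const unitP = (x) => (s) => [[x, s]];
const bindP = (p, f) => (s) => p(s).flatMap(([x, rest]) => f(x)(rest));

// satisfy: pred returns [value, rest] on success, null on failure.
const satisfy = (pred) => (s) => {
  const r = pred(s);
  return r === null ? [] : [r];
};
// choice: try p1; fall back to p2 only if p1 produced no parse.
const choice = (p1, p2) => (s) => {
  const r = p1(s);
  return r.length === 0 ? p2(s) : r;
};

// digit, many, and integer, mirroring the Koka versions:
const digit = satisfy((s) =>
  s.length > 0 && s[0] >= "0" && s[0] <= "9"
    ? [Number(s[0]), s.slice(1)]
    : null);
const many = (p) => choice(
  bindP(p, (x) => bindP(many(p), (xs) => unitP([x, ...xs]))),
  unitP([]));
const integer = bindP(many(digit), (ds) =>
  unitP(ds.reduce((i, d) => i * 10 + d, 0)));
```

Here integer("12a") returns [[12, "a"]], matching the result of main in the text.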
Because the parser monad is defined as a function, exception handling does not work as expected. Take for example the following parser, which may raise an exception:
function division() : 〈parser,exn〉 int {
val i = integer(); keyword("/"); val j = integer()
if (j==0) then error("divide by zero") else i/j
}
Suppose now that we catch errors on parsers, as in
the following safe
version of our parser:
function safe() : parser int { catch( division, fun(err) { 0 } ) }
If catch
is implemented naïvely
the behaviour of safe
would not be the expected one.
In particular, if catch
just wraps a native trycatch
block, then the
exceptions raised inside division
will not be caught: after the monadic
translation, division
would return immediately with a parser function as,
only invoking the parser function would actually raise the exception (i.e. when
the parser is run using parse
). Effectively, the lexical scoping
expectation of the catch
would be broken.
Our (primitive) catch
implementation takes particular care to work
across monadic effects too. Since catch
is polymorphic in the effect,
the type directed translation will actually call the specific monadic
version of catch
and pass a dictionary as a first argument. The
primitive monadic catch
is implemented in pseudocode as:
function catch_monadic( d : dict〈e〉, action, handler ) {
catch( { d.bind_catch( action, handler ) }, handler)
}
Besides catching regular exceptions raised when executing action()
, it
uses the special bind_catch
method on the dictionary that allows any
user-defined effect to participate in exception handling. This is
essential for most effects that are implemented as functions. For the
parser, we implement it as:
effect parser〈e,a〉 = string → e list〈(a,string)〉 {
...
function bind_catch( p, handler ) {
fun(s) { catch( { p(s) }, fun(err) { handler(err)(s) }) }
}
}
With this implementation in place, the parser effect participates fully
in exception handling and the safe
parser works as expected, where any
exception raised in division
is handled by our handler, i.e. the
expression parse(safe,"1/0")
evaluates to 0
.
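The difference between the naive and the monadic catch can be seen in a small JavaScript sketch (failing stands in for division; the real implementation works through Koka's dictionaries):

```javascript
const unitP = (x) => (s) => [[x, s]];

// bind_catch for the parser monad: install the handler around *running*
// the parser, p(s), not around constructing it.
const bindCatchP = (p, handler) => (s) => {
  try { return p(s); } catch (err) { return handler(err)(s); }
};

// A parser that raises only when it is run (stand-in for division):
const failing = (s) => { throw "divide by zero"; };

// Naive catch: constructing `failing` raises nothing, so the native
// try/catch never fires and the exception escapes when the parser runs.
function naiveSafe() {
  try { return failing; } catch (err) { return unitP(0); }
}

// Monadic catch: the handler participates in the parser effect.
const safe = bindCatchP(failing, (err) => unitP(0));
```

Running safe("1/0") yields [[0, "1/0"]], while the parser returned by naiveSafe() still throws when it is eventually run.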
Here is the type of bind_catch
and its default implementation:
function bind_catch( action : () → 〈exn|e〉 m〈〈exn|e〉,a〉,
handler : exception → e m〈e,a〉
) : e m〈e,a〉
{ catch(action,handler) }
In the above type we write m
for the particular monadic type on which
the effect is defined. A nice property of this type signature and default
implementation is that the Koka type inferencer requires you to define bind_catch only when needed for your particular monad. For example, the default works as is for the amb effect, since its monad disregards the e parameter, but the default is correctly rejected by the type checker for the parser, since the signature of catch requires m〈〈exn|e〉,a〉 to be unified with m〈e,a〉.
In this section we formalize the type-directed translation using an explicitly typed effect calculus. First, we present its syntax (§ 3.1) and typing rules (§ 3.2). Then, we formalize our translation (§ 3.3) from effectful to monadic code. Finally, we prove soundness (§ 3.4) by proving type preservation of the translation.
[Figure 3: Syntax of the explicitly typed effect calculus. It lists the grammar of expressions; types (type variables and type constructor applications); kinds (for values, effects, effect constants, user effects, and type constructors); type schemes; type constants (the unit and bool types, functions, the empty effect, effect extension, the partial and divergent effects, user effects, and the lifting of user effects to the effect universe); and syntactic sugar for effects, effect variables, closed effects, single effects, and user effects.]
Figure 3 defines the syntax of expressions and types of our calculus: a polymorphic, explicitly typed calculus, namely System F [12, 28] extended with effect types.
Expressions. The expressions include typed variables, typed constants, abstraction, application, value bindings, if combinators, type application, and type abstraction. Each value variable is annotated with its type and each type variable with its kind. Finally, each abstraction is annotated with its result effect, which is necessary to check effect types.
In strict languages one distinguishes between values and expressions to restrict type abstraction and application to value expressions. For simplicity, here we omit this separation and rely on the Koka type checker to guarantee that we only generalize over expressions with a total effect (hence there is no need for the `value restriction' [20]).
Types and type schemes. Types consist of explicitly kinded type variables and applications of constant type constructors of the appropriate kind. We do not provide special syntax for function types, as they can be modeled by a function type constructor that, unlike the usual one, explicitly accounts for the effect produced by the function. Finally, types can be quantified over type variables to yield type schemes.
Kinds. Well-formedness of types is guaranteed by a kind system, and we annotate each type with its kind. Apart from the kind of value types and the kind of type constructors, we have kinds for effect rows, effect constants, and user-defined effects. We omit the kind annotation when it is immediately apparent or not relevant. For clarity, we use distinct metavariables for regular type variables and for effect type variables, and we reserve a metavariable for effects, i.e. types of effect-row kind.
Effects. Effects are types. Effect types are defined as a row of effect labels. Such an effect row is either empty, a polymorphic effect variable, or an extension of an effect row with an effect constant. Effect constants are either built-in Koka effects, i.e. anything that is interesting to our language, like exceptions (exn) and divergence (div), or lifted user-defined monadic effects like the ambiguous effect amb. Note that for an effect row to be well-formed, a user-defined effect must be lifted to the effect-constant kind. For simplicity, in the rest of this section we omit this explicit lifting and write the user effect directly where a label of effect-constant kind is expected.
Finally, Figure 3 includes the definitions of type constants and the syntactic sugar required to simplify the rest of this section.
Type rules. Figure 4 describes the type rules of our calculus, where the typing judgment assigns a type and an effect to an expression. All the rules are equivalent to the System F rules, except for rule (Lam), where the effect on the function type is drawn from the effect annotation on the abstraction. The calculus is explicitly typed, in that variables are annotated with their type and functions with their effect; hence, type checking does not require an environment. By construction, the Koka type inference rules always produce well-formed terms, and soundness follows from the soundness of Koka as described in [20]. In what follows, we say an expression is well-typed if there exists a typing derivation for it with some effect.
In this subsection we present how Koka preprocesses the effect and morphism declarations provided by the user. Figure 5 summarizes the rules that the Koka compiler follows to desugar the user definitions into the core calculus. After desugaring, Koka proceeds to effect and type inference as presented in previous work [19, 20].
The Identity Effect. Before we look at the general type inference rule for effect declarations, we start with a concrete example, namely the identity effect uid:
effect uid〈e,a〉 = a {
function unit(x) { x }
function bind(x,f) { f(x) }
}
From the above effect definition, Koka initially generates a type alias that isolates the first line of the definition and relates the effect name with its monadic representation.
Then, Koka checks well-formedness of the effect definition by (type) checking that unit and bind are the appropriate monadic operators for the declared monadic representation.
Given the definitions of unit and bind, Koka automatically constructs the primitives required by the rest of the program to safely manipulate the identity effect:
Dictionaries. The first three values are user-visible, but the final dictionary value is only used internally during the monadic translation. The type of the effect dictionary is a structure that contains the monadic operators unit and bind of the effect. It may also include the monadic map, which is otherwise automatically derived from unit and bind, and a method to interact with primitive exceptions. Thus, we define the dictionary structure as a type that is polymorphic over the particular monad, represented as a type variable:
With this we can type the dictionary value.
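To make the dictionary structure concrete, here is a hypothetical JavaScript sketch of such a record holding unit and bind, with map derived from them; the names makeDict and ambDict are ours, not Koka's:

```javascript
// Hypothetical sketch of an effect dictionary: a record holding the
// monadic operators; map can be derived from unit and bind.
function makeDict(unit, bind) {
  return {
    unit: unit,
    bind: bind,
    // derived map: bind the computation and re-wrap the mapped result
    map: function (m, f) {
      return bind(m, function (x) { return unit(f(x)); });
    }
  };
}

// Example: a dictionary for the ambiguity (list) monad.
const ambDict = makeDict(
  function (x) { return [x]; },
  function (m, f) { return m.flatMap(f); }
);
```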
General user-defined effects. Figure 5 generalizes the previous concrete example to any user-defined effect declaration. The judgment:
states that under a kind and type environment, the effect declaration results in a new type environment that is extended with the types and primitives implied by the declaration. As shown in Figure 5, we first check well-formedness of the effect types, and then check that the unit and bind operations have the proper types. Finally, the environment is extended with the corresponding types and values.
As a morphism example, we assume the existence of the two user-defined effects from the Overview (§ 2), the ambiguous amb and the behaviour beh. Moreover, we assume that the user has appropriately defined their joined effect 〈amb, beh〉. From these three user effect definitions, Koka's desugarer yields three aliases, say:
Then, the user can define morphisms that go from amb or beh to the combined effect 〈amb, beh〉:
morphism amb 〈amb, beh〉 {
fun(xs : list〈a〉) : time → e list〈a〉{
fun(t){xs}
}
}
morphism beh 〈amb, beh〉 {
fun(x : time → e a) : time → e list〈a〉{
fun(t){[x(t)]}
}
}
From the above definitions, Koka will generate two morphism functions:
The above morphisms are internal Koka functions that will be used at the translation phase to appropriately lift the monadic computations.
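For intuition, the two generated morphisms can be sketched in JavaScript (a hypothetical rendering, assuming the combined effect 〈amb, beh〉 is represented as a function from a time to a list of results):

```javascript
// Hypothetical sketch of the two generated morphisms, where the combined
// effect <amb,beh> is a function from time to a list of results.
function amb_to_ambbeh(xs) {        // amb: list<a>  ->  time -> list<a>
  return function (t) { return xs; };   // ignore the time argument
}
function beh_to_ambbeh(x) {         // beh: time -> a  ->  time -> list<a>
  return function (t) { return [x(t)]; };  // singleton list of results
}
```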
General user-defined morphisms. Figure 5 generalizes the previous concrete example to any morphism declaration. The judgment:
states that under a kind and type environment, the morphism declaration from one effect row to another results in a new type environment that is extended with the morphism from the source effect to the target effect, provided the given expression has the appropriate morphism type. The first premise ensures that the source is always a subeffect of the target effect.
Next, we define the typedirected monadic translation that takes an effect expression to the monadic expression .
Computed effects. Our translation needs two effects: the maximum (inferred) effect and the minimum (computed) effect. After type inference, every function body has one unified effect that consists of the unification of all the effects in that function. Our translation computes bottom-up the minimal user-defined portion of each separate subexpression, where the computed effect should always be contained in the inferred effect. Specifically, we define computed effects as effect types with the following grammar:
Thus, computed effects can be a row of user-effect labels (including the empty row) or an effect variable. Note that because user effects are always constants, and according to the equivalence rules for rows [20], we consider computed effect rows equal up to reordering. For example, 〈amb, beh〉 is equal to 〈beh, amb〉.
As shown by the grammar, the computed effects are restricted in that a row of monadic effects cannot end with a polymorphic tail. This limits the functions that a user can write: no user-defined effects can be in a row that has a polymorphic tail. We believe that this restriction is not too severe in practice since it only restricts functions that are higher-order over user-defined effects where the function parameter is open-ended in the side effects it can have. This is quite unusual and easy to circumvent by stating the allowed effects explicitly. The advantage of adding this restriction is great though: we completely circumvent the need to add constraints to the type language and can simplify the translation significantly compared to earlier work [31].
We convert a regular effect type to a computed effect, and dually, we apply an erasure operator to an effect to remove the user-defined effects:
Note that this conversion is partial: when it is passed a type that combines a user-defined effect with a polymorphic tail, it fails and the compiler raises an error that the program is too polymorphic.
To join computed effects we use the operator:
Again, this function is partial, but we can show that its usage in the translation over a well-typed Koka program never leads to an undefined case.
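A hypothetical sketch of this join, representing computed effects as arrays of user-effect labels or an effect-variable name, shows both the union on label rows and the undefined case:

```javascript
// Hypothetical sketch: computed effects are either sorted arrays of
// user-effect labels or a single effect-variable name (prefixed "_").
// Joining label rows takes their union (rows are equal up to reordering);
// mixing a non-empty label row with a distinct variable is undefined.
function joinEffects(e1, e2) {
  const isVar = (e) => typeof e === "string" && e.startsWith("_");
  if (isVar(e1) || isVar(e2)) {
    if (e1 === e2) return e1;                      // same variable: fine
    if (isVar(e1) && e2.length === 0) return e1;   // join with empty row
    if (isVar(e2) && e1.length === 0) return e2;
    throw new Error("undefined join: program too polymorphic");
  }
  return [...new Set([...e1, ...e2])].sort();      // order-insensitive union
}
```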
Type translation. The type operator translates effectful types to monadic types.
For function types, the effect is split into the built-in effect portion and the user-defined portion. The effect of the translated type is only the built-in portion, while the result is monadically wrapped according to the user-defined portion using the wrapping operator.
The operation derives a monadic result type and effect. For polymorphic effect types, it cannot be computed until instantiation. We therefore keep this type unevaluated until instantiation time. As such, it is really a dependent type (or rather, a type parametrized over types). In our case, this is a benign extension since the calculus is explicitly typed. After instantiation, the type argument is no longer polymorphic, and thus a concrete type is returned.
The translation uses another dependent type to get the type of a polymorphic dictionary (see Figure 7):
Note that the type translation function reveals how the to_eff and from_eff functions are internally implemented. If we apply the type translation to their signatures, we see that both become identity functions, i.e. we can implement to_eff simply as the identity, and similarly for from_eff.
Monadic Abstractions. Figure 6 defines two syntactic abstractions that we use to lift and bind effect computations.
The lift abstraction lifts the expression from the source to the target computed effect. If the computed effects are the same, the expression is returned unchanged. If the effects are different and the source effect is empty, the lifting is performed via a call to the unit field of the dictionary of the target effect. Otherwise, the lifting is performed via the appropriate morphism. Note that the monadic and morphism operators are effect polymorphic; thus lifting is also parametric in an effect that is used to instantiate their effect variable. Furthermore, lifting fails if the required morphism is not defined.
The bind abstraction binds the expression to the variable that appears in the body. The two expressions have their own computed (minimum) effects, and the maximum effect of the binding is the combined effect. If the bound expression does not have any computed effect, the binding is simply a value binding; otherwise both expressions are lifted to the combined effect and the binding is performed via a call to the bind field of the dictionary of the target effect.
As an optimization, when the body of the binding is pure, our system uses the monadic map instead of lifting with unit and then using bind. As before, the combined effect is used to instantiate the effect variable of the monadic operators. This optimization is similar to the elimination of unnecessary “administrative” redexes, which customary CPS-transform algorithms go to great lengths to avoid [6, 29].
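The effect of this optimization can be sketched for the list monad: mapping directly avoids the administrative unit/bind redex (the names here are illustrative, not Koka's generated code):

```javascript
// Illustrative sketch of the bind optimization for the list monad.
const unit = (x) => [x];
const bind = (m, f) => m.flatMap(f);
const map  = (m, f) => m.map(f);   // the monadic map

// Unoptimized: lift the pure body with unit, then bind.
const slow = bind([1, 2, 3], (x) => unit(x + 1));
// Optimized: a single map avoids the administrative unit/bind redex.
const fast = map([1, 2, 3], (x) => x + 1);
// Both yield [2, 3, 4].
```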
We use all the above operators to define the translation relation as shown in Figure 7, where the maximum effect is inherited and the computed effect is synthesized.
Values. Values have no effect, and compute the empty effect. Rules (Con) and (Var) are simple: they only translate the type of the expression and leave the expression otherwise unchanged. Rule (Lam) states that, when translating, the type of the parameter is also translated. Moreover, the annotated effect dictates the maximum effect in the translation of the body. Finally, we lift the body of the function from its computed minimum effect to that maximum effect.
Type Operations. Type abstraction and application preserve the computed effect of the wrapped expression, and the Koka type system guarantees that type abstraction only happens over total expressions. In (TLamE) we abstract over an effect variable, thus we add an extra value argument, namely, the dictionary of the effect that instantiates it. Symmetrically, the rule (TAppE) that translates application of an effect applies the dictionary of that effect. Note that if the computed effect is a set of user-defined effects, then the rule directly applies the appropriate dictionary, i.e. the dictionary value that Koka created from the effect definition. If the computed effect is an effect variable, then the rule applies the appropriate variable dictionary, i.e. the variable abstracted by a rule (TLamE) lower in the translation tree. By the way we defined computed effects, the final case is for the computed effect to be the empty effect, and then the identity dictionary is applied. This is because in the computed-effects world the total effect is the identity effect. In our rules we used the identity effect as it is more intuitive.
Application. The rule (App) translates the application. The minimal computed effect of the application is the union of the computed effects of the function, the argument, and the computed effect of the body of the function. The maximum effect of the function could be used instead, but this would lead to an unoptimized translation, since every application would be unnecessarily lifted to its maximum effect. For example, if we wrote:
choose(id([False,True]))
then the unified effect for the id application would be amb and we would unnecessarily pass an amb dictionary to id and bind the result.
As an optimization, we compute the minimal effect as presented in Figure 8. In this example, we can apply (OptTApp) and use a fully pure invocation of the subexpression. As it turns out, in practice this optimization is very important and saves much unnecessary binding, lifting, and passing of dictionaries. Since a row-based effect system like Koka's uses simple unification, all subexpressions get unified with the final type. With this optimization we basically recover some of the subtyping, but we can do it separately from the initial type inference. This is important in practice as we can now clearly separate the two complex phases and keep using simple unification during type inference.
Finally, the rule (Val) translates a value binding by monadically binding the bound expression to its variable in the body. Similarly, the rule (If) translates a conditional by first binding the guard to a fresh variable, since it may have user-defined effects, and then lifting both branches to the computed effect, which is the union of the computed effects of the guard and the two branches.
From previous work on type inference for Koka [20] we have that the resulting explicitly typed Koka is well-typed, i.e.
Lemma 1.
(Explicit Koka is well-typed)
If then .
Here, the relation is the type inference relation defined in [20] where the source term gets type with effect and a corresponding explicitly typed term . The new part in this paper is that our translation preserves types according to the type translation:
Theorem 1.
(Type Preservation)
If and , then .
Proof. In Appendix A we give a proof of a more general theorem (General Type Preservation): if and , then . Theorem 1 follows as a direct implication for a computed effect.□
This property is strong since Koka has explicit effect types, i.e. it is not possible to obtain a typed translation trivially, and as such it gives high confidence in the faithfulness of the translation. This is related to the types of bind and unit, for example, which are both guaranteed to be total functions (since they are polymorphic in the effect).
We implemented monadic userdefined effects in Koka and the implementation is available at koka.codeplex.com. Koka's compiler is implemented in Haskell and it transforms Koka source code to JavaScript:
The compiler takes as input Koka source code as specified in [20] extended with effect and morphism declarations .
Next, it performs type inference and transforms the source code into Koka's intermediate representation, which is very similar to the calculus of Figure 3.
Then, it applies the translation rules of Figure 7, i.e. it uses the inferred types and effects to apply our effect to monadic translation.
Finally, the monadic intermediate Koka is translated to JavaScript.
The goal of our implementation is to build a sound, complete, and efficient translation with minimal runtime overhead. We get soundness by § 3.4, and in § 4.2 we discuss that the translation is complete, modulo the monomorphism restriction. Moreover (§ 4.1), our translation is optimized to reduce any compile- and run-time overhead as far as possible. We conclude this section with a quantitative evaluation (§ 4.3) of our translation.
We optimized the translation rules of Figure 7 with three main optimization rules: only translate when necessary, generate multiple code paths for effect polymorphic functions, and use monadic laws to optimize bind structures.
Selective Translation. Looking at the existing Koka source code, we observed that most functions are user-defined effect free. A function is user-defined effect free when (1) it does not generate user-defined effects, (2) it is not effect polymorphic (as any abstract effect can be instantiated with a user-defined one), and (3) all the functions that it calls or defines are user-defined effect free.
A userdefined effect free function is translated to itself. Thus our first optimization is to skip translation of such functions. This optimization is crucial in code that does not make heavy use of userdefined effects. As it turns out, 229 out of 287 Koka library functions are not translated!
Two versions of effect polymorphic functions. The translation rule (TAppE) is quite inefficient when the applied effect is not user-defined: the identity dictionary is applied and used only to perform identity monadic operations. User-defined effect free functions are the common case in Koka code, and as such they should not get polluted with identity dictionaries.
As an optimization, when translating an effect polymorphic function we create two versions of the function: the monadic version, where the effect variables can be instantiated with user-defined effects, and thus the insertion of monadic operators (unit and bind) and the addition of the monadic dictionary is required; and the non-monadic version, where the effect variables cannot be instantiated with user-defined effects, and thus the monadic dictionary is not required.
To soundly perform this optimization we use an environment of the effect variables that cannot be instantiated with user-defined effects, which serves as a proof that the insertion of monadic operators is not required.
                      before translation   after translation   percentage increase
lines                 2678                 3248                21.28 %
bytes                 89038                121668              36.65 %
Non-monadic versions are still translated, but using the effect environment that constrains their effect variables to non-user effects. Translation is still required as these functions may use or produce user-defined effects. In practice, however, the translation of non-monadic versions of functions turns out to be unnecessary and is removed by our previous optimization. It turns out that in Koka's library there are 58 polymorphic functions for which we created double versions; none of the non-monadic versions requires translation.
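As an illustrative sketch (with hypothetical names), the two generated versions of an effect polymorphic function might look as follows in JavaScript, where only the monadic version threads a dictionary:

```javascript
// Hypothetical sketch: an effect polymorphic application compiled to two
// versions, a monadic one taking a dictionary and a plain one without.
function apply_monadic(dict, f, x) {
  return dict.bind(dict.unit(x), f);   // threads the monadic operators
}
function apply_plain(f, x) {
  return f(x);                         // no dictionary needed
}

// With the ambiguity (list) dictionary the monadic version is used:
const ambDict = { unit: (x) => [x], bind: (m, f) => m.flatMap(f) };
```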
Monadic Laws. As a final optimization we used the monad laws to produce readable and optimized JavaScript code. Concretely, we applied the following three equivalences to remove redundant monadic operations:
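The three monad laws used as rewrite rules can be checked concretely, e.g. for the list monad (an illustrative sketch, not compiler output):

```javascript
// Illustrative check of the three monad laws for the list monad.
const unit = (x) => [x];
const bind = (m, f) => m.flatMap(f);

const f = (x) => [x, x + 1];
const g = (x) => [x * 2];
const m = [1, 2];

const eq = (a, b) => JSON.stringify(a) === JSON.stringify(b);
// Left identity:  bind(unit(x), f)    == f(x)
const leftIdentity  = eq(bind(unit(1), f), f(1));
// Right identity: bind(m, unit)       == m
const rightIdentity = eq(bind(m, unit), m);
// Associativity:  bind(bind(m,f), g)  == bind(m, x => bind(f(x), g))
const associativity = eq(bind(bind(m, f), g),
                         bind(m, (x) => bind(f(x), g)));
```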
The Monomorphism Restriction restricts value definitions to be effect monomorphic. Consider the following program
function poly(x : a, g : (a) → e b) : e b
val mr = if (expensive() ≥ 0) then poly else poly
How often is expensive() executed? The user can naturally assume that the call is done once, at the initialization of mr. But, since mr is effect polymorphic (due to poly), our translation inserts a dictionary argument; thus the call is executed as many times as mr is referenced. To avoid this situation the Koka compiler rejects such value definitions, similar to the monomorphism restriction of Haskell. Aside from this restriction, our translation is complete, i.e., we accept and translate all programs that plain Koka accepts.
Finally, we give a quantitative evaluation of our approach that allows us to conclude that monadic user-defined effects can be used to produce high quality code without significantly affecting the compile time or running time of a program.
program   compile time   trans. time   trans. %   lines
amb       107 ms         2.795 ms      2.60 %     67
parser    89 ms          2.582 ms      2.88 %     72
async     117 ms         3.155 ms      2.67 %     166
behamb    276 ms         3.009 ms      1.09 %     101
core      6057 ms        16.424 ms     0.27 %     2465
In Figure 10 we present the static metrics and the file size of our five benchmarks: (1) amb manipulates boolean formulas using the ambiguous effect, (2) parser parses words and integers from an input file using the parser effect, (3) async interactively processes the user's input using the asynchronous effect, (4) behamb combines the ambiguous and the behaviour effects, and (5) core is Koka's core library, which we translate so that all library functions can be used in effectful code.
On these benchmarks we count the file size in lines, the total compilation time, and the translation time, and we compute the percentage of the compilation time that is spent on translation. The results were collected on an Intel Core i5 machine.
Static Time. As Figure 10 suggests, the compile time spent in translation is low (less than 3% of compilation time). More importantly, the fraction of time spent in translation is minor in code that does not use monadic effects, mostly due to our optimizations (§ 4.1).
Run Time. To check the impact of our translation on run time we created “monadic” versions of our examples, i.e. a version of amb that uses lists instead of the amb effect and a version of parser that uses a parser data type. We observed that our monadic translation does not have any runtime cost, mostly because it optimizes away redundant calls (§ 4.1).
We conclude that our proposed abstraction provides quality code with no runtime and low statictime overhead.
Many effect typing disciplines have been proposed that study how to delimit the scope of effects. Early work is by Gifford and Lucassen [11, 22] which was later extended by Talpin [33] and others [25, 32]. These systems are closely related since they describe polymorphic effect systems and use type constraints to give principal types. The system described by Nielson et al. [25] also requires the effects to form a complete lattice with meets and joins. Wadler and Thiemann [36] show the close connection between monads [24, 35] and the effect typing disciplines.
Java contains a simple effect system where each method is labeled with the exceptions it might raise [13]. A system for finding uncaught exceptions was developed for ML by Pessaux et al. [26]. A more powerful system for tracking effects was developed by Benton [4] who also studies the semantics of such effect systems [5]. Recent work on effects in Scala [30] shows how restricted polymorphic effect types can be used to track effects for many programs in practice.
Marino et al. created a generic type-and-effect system [23]. This system uses privilege checking to describe analytical effect systems. Their system is very general and can express many properties but has no semantics on its own. Banados et al. [1] layer a gradual type system on top of this framework. This may prove useful for annotating existing code bases where a fully static type system may prove too conservative.
Our current work relies heavily on a type directed monadic translation. This was also described in the context of ML by Swamy et al. [31], where we also showed how to combine multiple monads using monad morphisms. However, Koka uses rowpolymorphism to do the typing, while [31] uses subtyping. A problem with subtyping is that the inferred types can be too complicated to understand. For example, in an ηrobust version, the type inferred for function composition in [31] is:
which is much harder to read than the type inferred in Koka:
where no constraints are present.
A similar approach to [31] is used by Rompf et al. [29] to implement first-class delimited continuations in Scala, which is essentially done by giving a monadic translation. Similar to our approach, this is also a selective transformation; i.e. only functions that need it get the monadic translation. Both of the previous works are typed approaches where the monad is apparent in the type.
Early work by Filinski [8, 9] showed how one can embed any monad in any strict language that has mutable state in combination with first-class continuations (i.e. callcc). This work is untyped in the sense that the monad or effect is not apparent in the type. In later work Filinski [10] proposes a typed calculus where monads are used to give semantics to effects. This proposal has many similarities with our current work, but does not explore effect inference or polymorphism, both of which are features crucial for a usable effect system.
Algebraic effect handlers, described by Plotkin et al. [27], are not based on monads, but on an algebraic interpretation of effects. Even though monads are more general, algebraic effects are still interesting as they compose more easily. Bauer and Pretnar describe a practical programming model with algebraic effects [2] and a type checking system [3]. Even though this approach is quite different from the monadic approach that we take, the end result is quite similar. In particular, the idea of handlers to discharge effects appears in our work in the form of the from primitives induced by an effect declaration.
Using the correspondence between monads and effects [36], we have shown how one can define the semantics of an effect in terms of a first-class monadic value, while programming with the monad through a first-class effect type. We provide a prototype implementation that builds on top of Koka's type and effect inference system.
In “The essence of functional programming” [35], Wadler remarks that “by examining where monads are used in the types of programs, one determines in effect where impure features are used. In this sense, the use of monads is similar to the use of effect systems”. We take the opposite view: by examining the effect types one can determine where monads are to be used. In Wadler's sense of the word, the essence of effectful programming is monads.
We prove a generalization of the Type Preservation theorem of the paper, where the computed effect is not restricted to the empty effect. Our main theorem then follows trivially.
Definition 1.
Since ,
we can rewrite dictionary and morphism types using , as:
Theorem 1.
(Lifting)
If ,
then .
Proof. Let
If , then but by assumption .
If then . By the type of we have that the call to unit is well typed and that the result has the appropriate type, i.e., .
Otherwise, . By the type of we have that the call is well typed and that the result has the appropriate type, i.e., .
Theorem 2.
(Binding)
If and
,
then .
Proof. Let
If , then and and by rule (TVal) .
If , then and then . By the type of we have that the call to map is well typed and that the result has the appropriate type, i.e., .
Otherwise, with and . By theorem 1 and the type of we have that the call is well typed and that the result has the appropriate type, i.e., .
Theorem 3.
(General Type Preservation)
If and , then .
Proof. By induction on the structure of the derivations, checking at each rule that the results are well-typed.□
(Con): Let . Assume and . Then . By (TCon) , but since , we have
(Var): Let . Assume and . Then . By (TVar) , but since , we have
(Lam): Let then . By rule (TLam) there exists so that and . By inversion of the same rule (1). By inversion of Lam we have (2). By (1), (2) and the IH we have (3). By (3) and Theorem 1 we have that . By this and rule (TLam) we get . But, by the definition of we have . Thus, we have .
(TLam): Let with . Then, by assumption and rule (TTLam) there exists so that and by inversion (1). So, . Also, by assumption and rule (TLam) , and by inversion (2). So, . By (1), (2), IH and since the computed effect is empty we have . By that and rule (TTLam) we get , equivalently .
(TLamE): Let . Then, by assumption and rule (TTLam) there exists so that and by inversion (1). So, . Also, by assumption and rule (TLam) , and by inversion (2). So, . By (1), (2), IH and since the computed effect is empty we have . By that and rule (TTLam) we get , equivalently or .
(TApp): Let with . Then, by rule inversion of (TApp) (1) and by the same rule . By inversion of the rule (TTApp) there exists a so that and by the same rule , so . By (1), (2) and IH we get , since the computed effect is . By rule (TTApp) , and by Lemma 2 . Finally, since the computed effect is we have . equivalently .
(TAppE): Let . Then, by inversion of the rule (TAppE) (1) and by the same rule . By inversion of the rule (TTApp) there exists a so that and by the same rule , so . By (1), (2) and IH we get , since the computed effect is . By rules (TTApp) and (TApp) , and by Lemma 2 . Finally, since the computed effect is we have . equivalently .
(Val): Let . Then, by inversion of the rule (Val) (1a) and (1b). By the same rule with a computed effect . By inversion of the rule (TVal) (2a) and (2b). By the same rule, , so .
By (1), (2) and IH we get (3a) and (3b) By (3a), (3b) and Theorem 2 we have , equivalently .
(App): Let . By inversion of the rule (TApp) we have (1a) and (1b). By the same rule we get , so . By inversion of rule (App) we have (2a) (2b) (2c) and (2d).
By the same rule we have that . By (1), (2) and IH we have (3a) and (3b).
By the definition of , is the computed effect of the binder , thus, and applying Theorem 2 twice, we have that .
(If): Let . By inversion of the rule (TIf) we have (1a), (1b) and (1c). By the same rule we get . By inversion of rule (If) we have (2a) (2b) (2c) and (2d).
By the same rule we have that . By (1), (2) and IH we get (3a), (3b) and (3c).
Applying Theorem 2 twice, we have that .
Lemma 2.
.
Proof. By induction on the structure of the type scheme .
As another example, we present how user-defined effects can be used to simplify asynchronous programming. Asynchronous code can have big performance benefits, but it is notoriously cumbersome to program with since one has to specify the success continuation (and failure continuation) explicitly. This style of programming is quite widely used in the JavaScript community for XHR calls on the web, or when programming NodeJS [34] servers. Consider the following asynchronous JavaScript code that uses promises [16]. This example is somewhat simple but also in many ways typical of asynchronous code:
function main() {
game().then(null, function(err) {
println("an error happened");
});
}
function game() {
return new Promise().then( function() {
println("what is your name");
readline().then( function(name) {
println("Ah, And who is your mentor?");
readline().then( function(mentor) {
println("Thinking...");
wait(1000).then( function() {
println("Hi " + name + ". Your mentor is " + mentor);
});});});});
};
The asynchronous functions here are readline and wait. Both immediately return a so-called Promise object, on which we call the then method to supply the success continuation: a function that takes as argument the result of readline and uses it in its body. Additionally, then takes a second (optional) argument, the failure continuation. If a failure occurs within a continuation that has no failure continuation, execution skips forward to the next then. In this example, if any error occurs, it is forwarded to the main function, which dies by printing "an error happened".
Programming with callbacks is tricky. Common errors include doing computations outside the continuation, forgetting to supply an error continuation, using try-catch naively (which does not work across continuations), and making mistakes with deeply nested continuations (e.g. the “pyramid of doom”).
In Koka, we can actually define an async effect that allows us to write asynchronous code in a more synchronous style while hiding all the necessary plumbing, yet behaving asynchronously, i.e., having exactly the same behaviour as the previous example. We define the asynchronous effect as a function with two arguments, the success and the failure continuation:
effect async〈e,a〉 = (a → e (), exception → e ()) → e () {
function unit(x) {
return fun(c,ec) { c(x) }
}
function bind(m,f) {
return fun(c,ec) {
m( fun(x) {
val g = catch( { f(x) }, fun(exn) { return fun(_,_) { ec(exn) } } )
g(c,ec)
}, ec )
}
}
function bind_catch(m,h) {
return fun(c,ec) {
catch({
m(c, fun(err) { h(err)(c,ec) })
}, fun(err) { h(err)(c,ec) })
}
}
}
This one is quite complex but that is somewhat expected of course – getting asynchronicity right is complicated. As usual, Koka automatically creates the primitives to_async and from_async. We use these to define other functions: run(action) turns the async computation action into an io computation, and readline is an async wrapper for the primitive readln function that is explicitly called with its success continuation:
function readline() : 〈async,console〉 string {
to_async fun(cont,_econt) { readln(cont) }
}
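To see how the translation threads continuations, here is a hypothetical JavaScript sketch of the async monad's unit and bind (mirroring the catch in the Koka bind), run with immediate continuations:

```javascript
// Hypothetical sketch: a computation of the async monad is a function
// taking a success and a failure continuation; unit succeeds immediately
// and bind sequences two computations.
const unit = (x) => (c, ec) => c(x);
const bind = (m, f) => (c, ec) =>
  m((x) => {
    let g;
    try { g = f(x); }                        // mirror the catch in bind
    catch (exn) { g = (_c, _ec) => _ec(exn); }
    g(c, ec);
  }, ec);

// Running a small chain with immediate (synchronous) continuations:
let out = null;
bind(unit(1), (x) => unit(x + 41))((r) => { out = r; }, (e) => { out = e; });
// out is now 42
```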
Finally, using these two primitives and the async effect, we can now write nice and “vertical”, synchronous-looking code for our initial JavaScript example:
function main() {
run(game)
}
function game() : 〈async,io〉 () {
println("what is your name?")
val name = readline()
println("Ah. And who is your mentor?")
val mentor = readline()
println("Thinking...")
wait(1000)
println("Hi " + name + ". Your mentor is " + mentor)
}
Moreover, just like with other effects, we can use any abstraction as usual, like mapping asynchronous computations over lists, etc. Of course, we need more primitives, like fork, which starts multiple asynchronous computations in parallel and finishes when all are done.
The strong typing is crucial here as it provides good indication that the code is potentially asynchronous! It is generally recognized that knowing about asynchronicity is important, but other effects like exceptions, divergence, or heap mutation, are not always treated with the same care. In Koka, all effects are considered essential to the type of a function and asynchronicity is treated in the same way as any other effect.
Semantically, the asynchronous effect is not quite like our earlier examples, which could be implemented in Koka as is. For example, note the invocation of catch in the definition of bind. Why is that necessary, and what happens if we forget it? The reason is that asynchronicity cannot exist within the basic semantics of Koka (as described in [20]). In particular, we introduce new constants supplied by the runtime environment that behave outside of Koka semantics, like readln. Effectively, every time an asynchronous operation is executed, the program terminates and is revived again when the continuation is executed. This is also why run cannot be implemented in Koka itself but needs special runtime support too – it effectively needs to ‘wait’ until all asynchronous operations are done and then return its result (or throw an exception). Of course, since Koka is implemented on top of JavaScript, all these primitives (and many more) are provided by the host runtime systems (i.e. NodeJS and browsers).
The current prototype of the work in this article is freely available online. It is not yet ready for prime time since some checks are lacking, but by the time of the conference we hope to have it online on http://www.rise4fun.com/koka. Currently, one can build the development branch and play with the examples in this paper: check out the kokamonadic branch from http://koka.codeplex.com, follow the build instructions on that page, and find the examples under test/monadic/esop15, like amb or parser.
The basic Koka compiler compiles to JavaScript and is quite mature at this point. In fact, Koka was used to create a sophisticated online markdown processor called Madoko, which was used to completely write this article! Try it at http://www.madoko.net.
^{1.} In contrast to Haskell, for example, where referring to a value of such a type may still diverge or raise an exception. ↩