In conventional functional programming, we provide all inputs to a function before we can access any outputs. This makes for convenient reasoning about program behavior, but it also restricts expressiveness and potential for concurrency. An intriguing alternative is to make partial-evaluation first-class and accessible in the language, enabling rich forms of function composition and concurrent interactions while preserving functional semantics.

## Trivial Example

As a trivial example, assume we have two functions, `f(in X, out Y, out Z)` and `g(out X, in Y, out Z')`. Here I’m using out-args instead of return values, but we could conventionally type these as `f : X -> (Y,Z)` and `g : Y -> (X,Z)`.

Assuming a suitable syntax and sufficient knowledge about data dependencies, it is feasible to construct a concurrent computation that ties the output of `g` as input to `f` and vice versa. Consider the following:

```
f(in x, out y, out z) {
  y <- x + 3
  z <- y * 6
}
g(out x, in y, out z) {
  z <- (y * 2)
  x <- 4
}
p(out z, out z') {
  vars x,y
  f(x,y,z)
  g(x,y,z')
}
```

Program `p` declares second-class single-assignment variables `x` and `y` (these are not values), then evaluates `f` and `g` concurrently (based on data dependencies). Evaluation of `g` will immediately produce `x=4`. `f` will consume `x` then produce `y=7` and `z=42`, then `g` will consume `y` and produce `z'=14`. The final results, `z` and `z'`, are visible to the caller of `p`.

Although trivial, this example exhibits concurrency: in `p`, neither `f` nor `g` can evaluate independently; each relies on output from the other. Without access to concurrent evaluation, we would be forced to inline `f` and `g` into `p` then reorder assignments.

In general, a compiler can implement correct concurrent computations expressed in terms of concurrent partial evaluations by unraveling the components and reordering assignments. However, this does require access to the implementation details. Alternatively, if separate compilation is required, a compiler could use semaphores and threads, so `f` and `g` would evaluate in separate threads and wait on input variables.
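The thread-based strategy can be sketched in ordinary Python. Here a tiny single-assignment variable is built from `threading.Event` (the `SAVar` class is my own illustration, not part of the proposal), and the statements in `g` are ordered by data dependency so that `x` is produced before `y` is demanded:

```python
import threading

class SAVar:
    """Single-assignment variable: write once; readers block until written."""
    def __init__(self):
        self._set = threading.Event()
        self._value = None

    def write(self, value):
        assert not self._set.is_set(), "single-assignment violated"
        self._value = value
        self._set.set()

    def read(self):
        self._set.wait()  # block until some thread writes
        return self._value

def f(x, y, z):    # f(in x, out y, out z)
    y.write(x.read() + 3)
    z.write(y.read() * 6)

def g(x, y, z2):   # g(out x, in y, out z')
    x.write(4)     # ordered first: y transitively depends on x
    z2.write(y.read() * 2)

def p():
    x, y, z, z2 = SAVar(), SAVar(), SAVar(), SAVar()
    tf = threading.Thread(target=f, args=(x, y, z))
    tg = threading.Thread(target=g, args=(x, y, z2))
    tf.start(); tg.start()
    result = (z.read(), z2.read())
    tf.join(); tg.join()
    return result

print(p())  # (42, 14)
```

Note that if `g`'s body ran in its original textual order (`z'` before `x`), the two threads would deadlock, which is exactly why the compiler must either reorder by data dependency or evaluate assignments concurrently.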

The Oz language uses this idea as a foundation for deterministic concurrency, albeit using much richer first-class unification variables and explicit threads. What I propose is a constrained form of what Oz does, where we structurally and typefully guarantee that output variables are assigned at most once and that all required variables are eventually assigned.

I’m still experimenting with alternative syntax, e.g. based on keyword parameters. I doubt that declared variables will be an efficient way to express these computations, especially after I introduce conditional interactions (below).

## Session Types for Concurrent Interaction

Relying on deep knowledge about data dependencies is terrible for modularity. In the trivial example, a programmer could break program `p` by making a change to `g` that does not affect its interface. Thus, we really need a more precise interface.

Session types can be adapted to this problem. Instead of pi-calculus ‘channels’, we can assume a set of named single-assignment parameters (which may be input or output), each of which is recorded in our type. Thus, for program `g` from our trivial example, we might use a session type of form `x!int . y?int . z!int` to record that `x` is available before `y` is required as an input. We can safely provide inputs earlier than required, so this is a subtype of `y?int . x!int . z!int`.

Conventional functions with named parameters and results can be modeled using the general form `{ a1 : ta1, ..., aN : taN } -> { r1 : tr1, ..., rM : trM }` with names `a1..aN, r1..rM` and types `ta1..taN, tr1..trM`. With session types, this can always be modeled as `a1?ta1 . ... aN?taN . r1!tr1 . ... rM!trM`. This will always be a supertype of interaction with concurrent partial evaluation.

Session types with just sequencing (the `.` separator) are not very precise in representing a data-dependency graph. Input dependencies for an output from a function may generally form a directed acyclic graph. For example, consider a dependency graph with three inputs `a1,a2,a3` and three outputs `r1,r2,r3` where `r1` depends on `a1,a2`, `r2` depends on `a1,a3`, and `r3` depends on `a2,a3`.
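To see why no linear ordering of these six parameters is precise, here is a small Python sketch (the names `deps` and `available` are my own) computing which outputs become demandable as inputs arrive:

```python
# Hypothetical encoding of the dependency graph from the example:
# each output maps to the set of inputs it depends on.
deps = {
    "r1": {"a1", "a2"},
    "r2": {"a1", "a3"},
    "r3": {"a2", "a3"},
}

def available(supplied, deps):
    """Outputs whose input dependencies are all satisfied."""
    return {out for out, needed in deps.items() if needed <= supplied}

print(available({"a1", "a2"}, deps))        # {'r1'}
print(available({"a1", "a3"}, deps))        # {'r2'}
print(available({"a1", "a2", "a3"}, deps))  # {'r1', 'r2', 'r3'}
```

Any sequential session type must pick one total order of inputs and outputs, forcing some output to wait on an input it does not actually need.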

To represent a dependency graph, the best option is a graph data structure. Graphs are rather awkward to represent using sequential text, but there certainly are ways to do so (e.g. using labels). Session types are already very ugly and noisy, IMO. Developing an acceptable syntax is left as an exercise to the language designer.

## Conditional Interactions

A language can easily enforce that all ‘out’ parameters from a function call are always written. But an intriguing possibility exists: we can conditionally decide which output parameters are written or expected based on prior inputs or outputs. This would allow us to model flexible, dynamic ‘sessions’ while preserving a purely functional semantics, and it can be achieved in a type-safe manner.

In session type systems, we distinguish ‘external choice’ `(A & B)` vs ‘internal choice’ `(A + B)`. An external choice is one imposed on us by the environment, while an internal choice is one we impose on the environment. There exists a duality based on perspective (am I the environment?). But roughly, we might model `&` as an input boolean, and `+` as an output boolean, while `A` and `B` are entire session types. Essentially, we’re deciding on entire interactions using a very limited form of ‘dependent type’.

For scalability and extensibility, labeled choices are usually preferred, e.g. `&{ add: ?x:int . ?y:int . !r:int , negate: ?x:int.!r:int }`.

In adaptation to functional programming, we also need a clear identifier for the label. Thus, we may use a representation such as `method&{ add: ?x:int . ?y:int . !r:int , negate: ?x:int.!r:int }`. Then perhaps `case(method)` will allow us to access the labeled choice of interactions like `add` vs `negate`, while `method.add.x` gives us access to the `x` parameter under `add`.

As a reminder, semantically all of these interaction variables are distinct input or output parameters. Logically, `method.add.x` is a distinct parameter from `method.negate.x`. Because of type safety, the compiler could allocate a space for only one integer then share it between these two choices (like a C union). But they’re still logically distinct.

With conditional interactions, functions will usually touch only a small subset of their parameters based on the `case(method)`. The syntax based on single-assignment variables, used in the trivial example, is infeasible. We’ll instead require good support for keyword parameters.

Intriguingly, types for conditional interactions will fully subsume conventional types for object-oriented programming interfaces. They further can represent and enforce richly typed multi-step protocols. With support for conditional interactions during partial evaluation, objects are again the poor man’s closure. Conditional interactions hugely increase expressiveness.

## Session Types as Data Types

It is feasible to understand structured data types – records and variants – as simply ‘session types’ that are purely output (or purely input). Intuitively, this makes a lot of sense: access to data is essentially a ‘session’ without any interaction, with no input requirements. A record is a simple sequence of labeled output data types, and a variant is an output conditional interaction where all cases are pure output data sessions.

## Recursive Session Types

Recursive sessions are quite feasible, via conditional interactions, and would model recursive arbitrary-length interactions. In context of functional programming, recursive sessions will form an arbitrary ‘tree’ of single-assignment input-output parameters.

Recursive data structures can be modeled as recursive pure-output data types, like `type Tree a = +{ leaf , node: val!a . left!Tree a . right!Tree a }`.

## Sequences and Channels

In a prior article, I described some ideas for a statically allocated functional language. General recursive data or session types are mostly incompatible with this goal. However, use of ‘sequences’ can still be a pretty good fit due to stream fusion, the ability to process a sequence in-place.

Sequences can be understood as a subset of recursive session types, where recursion is permitted but the recursive type is permitted to appear at most once in a case. Thus, a tree is not a sequence, but a list is a valid sequence: `type List a = + { end, cons: hd!a.tl!List a }`.

In context of partial evaluation with functional programming, we can begin to output this list before we’ve finished computing it. Another function might then begin to process this list before it has completed. This gives a simple basis for long-lived communications between functional processes, reminiscent of Kahn Process Network channels.
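Python generators give a rough (impure, untyped) analogy for this incremental behavior — the consumer below starts processing elements before the producer has finished, and indeed the producer here never finishes:

```python
import itertools

def producer():
    # Emits an unbounded 'cons' stream; each element becomes
    # available one at a time, before the list is complete.
    n = 0
    while True:
        yield n
        n += 1

def consumer(xs):
    # Begins processing each element as soon as it is produced.
    for x in xs:
        yield x * 2

print(list(itertools.islice(consumer(producer()), 5)))  # [0, 2, 4, 6, 8]
```

The generator analogy is demand-driven rather than concurrent, but the structural point is the same: production and consumption of the list interleave.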

Intriguingly, with session types we aren’t restricted to sequences of pure data. We can very easily model sequences of interactive sessions.

## Interactions as Effects

Effects can generally be understood as interactions between a computation and its environment. That is, early outputs are used to determine or influence future inputs. Between conditional interactions and potential use of sequences as long-lived channels, we can model a lot of effects with this approach.

In particular, potential support for lists whose elements are conditional interactions (rather than pure data) could feasibly cover a richly expressive ‘effects’ model. However, I’m still determining relative expressiveness compared to algebraic effects and handlers.

## Why Functions?

I’ve gone pretty deep into session types. Why not favor a process calculus?

Purely functional programming has some very nice properties: determinism and locality for easy comprehension and testing, we can generally abstract the program without a separate ‘mode’ of computation like templates, and termination is very natural (all outputs are computed). These nice properties are preserved even if we evaluate just functions a little at a time, or evaluate only a subset of the function’s outputs.

Meanwhile, conventional purely functional programming has proven awkward for modeling effects or supporting stream fusion and pipelining. Leveraging the concurrency implicit to precise dataflow analysis and partial evaluation can help cover these weaknesses. These relative benefits would be diminished in the context of a process calculus that already does a pretty good job at concurrency.

Hi David,

Interesting idea. Seems to play well with the current trends with NixOS and Bazel, using merkle trees to memoize a function’s computation.

Thanks for the interest. I’ve fleshed out the article considerably since your comment, so it’s a lot more solid an idea now, and more expressive than I initially thought.