KPNs as an Effects Model

In this article, I motivate a variant of Kahn Process Networks (KPNs) with bounded channels as a better – more efficient, scalable, composable, securable, serializable, comprehensible, and debuggable – approach to modeling effectful application behavior, especially in the context of purely functional languages and conventions today.

Background

In the context of purely functional programming, Haskell pioneered a simple and effective approach to effects by simulating imperative programming: model a data structure that either returns a final value or yields a (request, continuation) pair. By making this data structure opaque, inaccessible to user code, a compiler aware of the structure can heavily optimize and inline requests to achieve acceptable performance. In Haskell, this opaque data structure is called `IO`. (This trivial pattern also goes by obscure and scary names from category theory, but that hasn’t done Haskell any favors.)
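
For concreteness, here is a minimal sketch of that structure. The `App`, `Request`, and `Response` names are assumptions for illustration; GHC’s real `IO` is implemented quite differently, but the yield-a-request idea is the same.

```haskell
-- A hypothetical model of effects as a (request, continuation) pair.
data App a
  = Done a                             -- finished with a final value
  | Yield Request (Response -> App a)  -- one pending request plus its continuation

data Request  = ReadLine | PutLine String  -- illustrative request set
data Response = Line String | Ok

instance Functor App where
  fmap f (Done a)    = Done (f a)
  fmap f (Yield r k) = Yield r (fmap f . k)

instance Applicative App where
  pure = Done
  mf <*> ma = mf >>= \f -> fmap f ma

instance Monad App where
  Done a    >>= f = f a
  Yield r k >>= f = Yield r (\resp -> k resp >>= f)

-- Only the trusted interpreter may inspect the structure.
runApp :: App a -> IO a
runApp (Done a)              = pure a
runApp (Yield ReadLine k)    = getLine >>= runApp . k . Line
runApp (Yield (PutLine s) k) = putStrLn s >> runApp (k Ok)
```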

One fundamental weakness of this design, however, is the singular nature of that `(request, continuation)` pair. Implicitly, this becomes a performance bottleneck: we’re performing only one ‘request’ at a time. Fortunately, we can work around this limitation in a few ways:

  • Asynchronous requests. Rather than awaiting a result immediately, we may continue with a ‘future’ reference for the result that can be used in a later request. The request itself may then be processed in parallel with the ongoing computation.
  • Multi-threading. We can support a request to fork a thread that itself can make more requests. Further, we can support requests for communication between threads, e.g. via channels or shared mutable variables.

And this is essentially what Haskell does. We have our `forkIO` request, and a panoply of communication methods (IORefs, MVars, TVars, channels modeled with them, etc.). Asynchronous futures are frequently implicit in Haskell’s lazy evaluation model.
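
For concreteness, a small GHC Haskell example of those workarounds, using `forkIO` plus an `MVar` as a one-shot future:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  result <- newEmptyMVar             -- shared variable acting as a one-shot future
  _ <- forkIO $                      -- fork a thread to service the 'request'
    putMVar result (sum [1 .. 1000000 :: Int])
  putStrLn "main thread continues concurrently..."
  answer <- takeMVar result          -- block until the forked thread responds
  print answer
```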

Unfortunately, these workarounds come at significant cost. Those futures and variables deeply entangle the application with its environment, which hinders serialization, persistence, and debugging. Multi-threading introduces non-deterministic behavior, which can hinder robust testing or reproduction of error conditions.

A second major weakness of this design regards the opacity of that `(request, continuation)` pair.

Assume two applications with opaque effects – `foo.main()` and `bar.main()`. We could compose these sequentially, or in parallel, or invoke one from within the other. However, we cannot control the interleaving of requests. We cannot intercept requests in a mockup environment for testing, security, or precise embedding. It is not possible to construct wrappers around an application to tune requests and responses for a different environment. It is not possible to cleanly halt an application. Ultimately, this hurts composition, testing, security, portability, and process control.

Again, there are workarounds – e.g. sandboxes, ad-hoc dependency injection, object capability security. But again, there is a cost – this time to those performance benefits we originally received from compiling with opaque effects.

Kahn Process Networks

KPNs are a simple model consisting of processes and channels. A process is limited to two actions: read from a channel, or write to a channel. A channel carries a FIFO queue of messages, and may have at most one reader and at most one writer. Reading from a channel will block if there is no message available.
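
A process can be sketched as a small step type. The `Port` and `Msg` types here are assumptions for illustration:

```haskell
type Port = String
type Msg  = Int   -- assumed message type, for illustration

-- A KPN process: at each step it halts, reads one port, or writes one port.
data Process
  = Halt
  | Read  Port (Msg -> Process)  -- block until a message arrives on the port
  | Write Port Msg Process       -- emit a message, then continue

-- Example: repeatedly read ports "a" and "b", write their sum to "out".
adder :: Process
adder = Read "a" $ \x -> Read "b" $ \y -> Write "out" (x + y) adder
```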

Generally, writing to a KPN channel does not block. But I believe a variant of KPNs where each channel is annotated with a maximum capacity is superior for every practical use case. Bounded channels can be trivially implemented above normal KPNs by adding a reverse channel for acknowledgements to every message channel, then simply waiting for an ack after sending.
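
In terms of the process sketch above, this encoding is just a wrapper around writes (port names are whatever the wiring supplies):

```haskell
-- Bounded write: send on the data port, then wait for an acknowledgement on a
-- paired reverse port. If the reader pre-loads N acks and sends one ack per
-- message it consumes, at most N messages can ever be in flight.
boundedWrite :: Port -> Port -> Msg -> Process -> Process
boundedWrite dataPort ackPort msg rest =
  Write dataPort msg (Read ackPort (\_ack -> rest))
```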

Construction and composition of KPNs is readily achieved in the style of Flow-Based Programming (FBP). Instead of directly naming a channel, a process will read or write a port. Separately, a configuration wires output ports from one process to input ports on another.
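
A wiring configuration might be described by a value like the following sketch (all names are hypothetical):

```haskell
type ProcName = String

-- A wire connects one process's output port to another process's input port,
-- with a maximum number of pending messages.
data Wire = Wire
  { source   :: (ProcName, Port)  -- the writer's output port
  , sink     :: (ProcName, Port)  -- the reader's input port
  , capacity :: Int               -- bounded-channel capacity
  }

-- A network is a collection of named processes plus the wiring between them.
data Network = Network
  { procs :: [(ProcName, Process)]
  , wires :: [Wire]
  }
```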

Aside: KPNs and FBP are similar. Some important differences: FBP has bounded channels by default, allows ad-hoc side effects, and permits a non-deterministic ‘merge’ of multiple writers into a shared wire.

KPNs can be used as an effects model.

Open (aka dangling) output channels can be filled with requests to be read by the KPN’s interpreter. Responses can be written to open input channels. A lightweight, declarative configuration can tell our interpreter from which channels it should read requests, and in each case to which channel it should write responses. This might be informally understood in terms of declaring a subset of channels as ‘effectful’ when wiring up our KPN processes.
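
Continuing the wiring sketch above, declaring the effectful channels might look like this (names hypothetical):

```haskell
-- Open ports serviced by the effects interpreter: it drains requests from the
-- named open output port and pushes each response into the named open input port.
data EffectChannel = EffectChannel
  { requestPort  :: (ProcName, Port)
  , responsePort :: (ProcName, Port)
  }

-- An application is a network plus the subset of open ports treated as effectful.
data Application = Application
  { network :: Network
  , effects :: [EffectChannel]
  }
```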

These ‘effectful’ channels have some nice features:

  • Multiple effectful channels represent concurrent requests.
  • Multiple responses can easily be injected concurrently.
  • An individual effects channel naturally sequences its effects.
  • Requests and responses are buffered and batched up to channel capacity.
  • The internal process model supports parallelism and distribution.

This offers a significant improvement over the singular `(request, continuation)` pair implicit to imperative programming.

Instead of a peephole view of our computation, our effects interpreter may handle many concurrent requests and provide many concurrent responses. Our KPN’s continuation is inherent in the unblocking of processes as the environment drains full request channels or fills empty input channels. Because buffering, parallelism, and concurrency are pervasive, we don’t need workarounds like futures, threads, or shared variables for communication between threads.

By avoiding those workarounds that entangle evaluation with the environment, the resulting system becomes much simpler and easier to serialize and debug.

Note: As an alternative to effects channels, we could augment our process type to support requests on the environment. However, effectful processes can trivially be represented by adding two ports to each process (one to write requests, one to receive responses), wiring an effectful channel between them, and immediately reading a response after each request. Effectful processes would be less expressive for asynchronous behavior, offer less opportunity to intercept and translate requests for a different environment, and could not buffer more than one pending request per process. Effectful channels are by far the superior design.

KPNs in context of Purely Functional Programming

In the context of purely functional programming, KPNs have several more nice properties.

A KPN can easily be described by an immutable value – e.g. a collection of named processes and wires, with a list of messages pending on each wire. Granted, this may prove a fair bit more difficult to represent in statically typed PLs, due to the challenges surrounding heterogeneous collections. I expect clever people can get it done.

Evaluation of a KPN is pure and deterministic. This might be represented as a pure function, effectively `evalKPN : KPN → KPN`. Of course, squeezing the most parallelism and performance from the KPN may require special attention from our runtime or compiler.
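
As a deliberately simplified, single-threaded sketch of such an evaluator, reusing the `Process`, `Port`, and `Msg` types from earlier and collapsing wires into one global namespace:

```haskell
import qualified Data.Map.Strict as M

-- A simplified KPN state: named processes, a FIFO queue per wire, and a
-- capacity bound per wire.
data KPN = KPN
  { kpnProcs  :: M.Map ProcName Process
  , kpnQueues :: M.Map Port [Msg]
  , kpnCap    :: M.Map Port Int
  }

-- Step processes until quiescent. For a well-formed KPN (one reader and one
-- writer per wire) the result does not depend on which ready process we pick.
evalKPN :: KPN -> KPN
evalKPN kpn =
  case tryAll (M.toList (kpnProcs kpn)) of
    Nothing   -> kpn          -- no process can make progress
    Just kpn' -> evalKPN kpn'
  where
    tryAll []                    = Nothing
    tryAll ((name, proc) : more) = maybe (tryAll more) Just (step name proc)

    step name (Read p k)
      | (m : ms) <- M.findWithDefault [] p (kpnQueues kpn)
      = Just kpn { kpnQueues = M.insert p ms (kpnQueues kpn)
                 , kpnProcs  = M.insert name (k m) (kpnProcs kpn) }
    step name (Write p m rest)
      | length (M.findWithDefault [] p (kpnQueues kpn))
          < M.findWithDefault maxBound p (kpnCap kpn)
      = Just kpn { kpnQueues = M.insertWith (flip (++)) p [m] (kpnQueues kpn)
                 , kpnProcs  = M.insert name rest (kpnProcs kpn) }
    step _ _ = Nothing        -- Halt, or blocked on an empty/full wire
```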

Intriguingly, evaluation of a KPN can easily proceed in parallel with further actions upon it. This could be expressed by a variant on the evaluator. Something like: `stepKPN : StepKPN k a → KPN → k (a, KPN)`.

During stepKPN, our KPN is running in the background and evaluating, e.g. using conventional threads and queues. The final returned KPN is fully evaluated. But the `StepKPN` type would represent a program that may read and write channels, and potentially modify the KPN (add or delete processes and wires, adjust wire capacity, etc.). Importantly, every action on the KPN is deterministic. For example, a read will return immediately if possible (if the channel has data), but if the channel is empty we must wait until input becomes available or until all upstream processes block. Besides actions on the KPN, StepKPN might support operations in some surrounding context `k`.
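
One way to sketch the `StepKPN` program type is as a small instruction set over the running network, with `Lift` embedding actions from the surrounding context `k` (constructor names are assumptions):

```haskell
-- A program of deterministic actions against a running KPN, parameterized by a
-- surrounding context `k` (e.g. IO) and a result type `a`.
data StepKPN k a
  = Return a
  | ReadChan  Port (Msg -> StepKPN k a)  -- wait for a message on an open output channel
  | WriteChan Port Msg (StepKPN k a)     -- push a message into an open input channel
  | ModifyKPN (KPN -> KPN) (StepKPN k a) -- add/delete processes and wires, retune capacities
  | Lift (k (StepKPN k a))               -- an action in the surrounding context
```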

Effects can be supported either way.

With `evalKPN` we might use a simple tactic to repeatedly:

  • extract pending effect requests
  • inject all available responses
  • initialize evalKPN in parallel
  • process the extracted requests

This would give us frame-oriented effects, with big batches of effects being processed efficiently between frames. This is easy to reason about, and offers adequate parallelism on a single system. Unfortunately, it doesn’t scale very well to distributed systems. However, we could tweak this to model a partitioned KPN system, each partition running its own frames.
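
A sketch of that frame loop, assuming hypothetical helpers `extractRequests :: KPN -> ([Msg], KPN)` and `injectResponses :: [Msg] -> KPN -> KPN` over the open channels:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- One frame: drain pending requests, inject the previous frame's responses,
-- evaluate the network in parallel with performing this frame's effects.
frameLoop :: (Msg -> IO Msg) -> [Msg] -> KPN -> IO ()
frameLoop handle responses kpn0 = do
  let (requests, kpn1) = extractRequests kpn0            -- hypothetical helper
      kpn2             = injectResponses responses kpn1  -- hypothetical helper
  done <- newEmptyMVar
  _ <- forkIO (putMVar done $! evalKPN kpn2)  -- evaluate in the background
  newResponses <- mapM handle requests        -- process the extracted requests
  kpn3 <- takeMVar done
  frameLoop handle newResponses kpn3
```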

With `stepKPN` we can very precisely extract requests, perform effects, and inject responses in a fine-grained manner. This could work very well because we can tune our strategy for reading and writing of open channels based both on the problem and on meta-knowledge of the KPN’s structure.

A third option is viable: associate effects handlers, e.g. of type `request → IO response`, with each effectful channel. Then run the KPN in context of IO. This could maximize parallelism and scalability, albeit at a cost to deterministic ordering of concurrent effects and reproducible system behavior.

That last option would most generically enable KPNs to replace simple imperative IO as the effects model, since it would effectively have the full power of multi-threading but with advantages of inherent batching and reduced complexity.
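
A sketch of that last option, with one worker thread per effectful channel. The `KPNRuntime` type and its `takeRequest`/`putResponse` operations are assumptions standing in for some shared, thread-safe representation of the running network (e.g. built over STM):

```haskell
import Control.Concurrent (forkIO)
import Control.Monad (forever)

-- Attach a handler to each effectful channel and service them concurrently.
runWithHandlers :: KPNRuntime -> [(Port, Port, Msg -> IO Msg)] -> IO ()
runWithHandlers rt = mapM_ spawn
  where
    spawn (reqPort, respPort, handle) = forkIO . forever $ do
      req  <- takeRequest rt reqPort   -- hypothetical: block until a request is pending
      resp <- handle req               -- perform the effect in IO
      putResponse rt respPort resp     -- hypothetical: unblock the reading process
```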

8 Responses to KPNs as an Effects Model

  1. Ross Angle says:

    Interesting post, but there’s something I don’t get. You’re saying evaluation of a KPN is deterministic, but if a process receives messages on two input ports, what’s the deterministic result of that?

    • Matt says:

      Unless you have some sort of `select`-like primitive that can read from two ports at once, then your process is sequential and has to read from one or the other first.

    • dmbarbour says:

      As Matt implies, there is no select primitive in a KPN, nor any means to test whether input is available. A process can only read from a port or write to one. If the process reads from a port with no pending messages, it will wait. Even if there are messages immediately available on other ports.

      • Ross Angle says:

        Ah, that all makes sense now. When I read the post again, I don’t know where I got the idea that two inputs to a single process would be handled concurrently.

  2. dmbarbour says:

    Some thoughts: It may be interesting to spawn new processes and wire them together, though deterministic naming may require some careful attention. Also, in a system with substructural types, first-class channel endpoints are viable.

  3. Pingback: Reactive Process Networks | Awelon Blue

  4. rajivabraham says:

    1 ) Thanks for sharing this post. When you say ‘frame-oriented’, do you mean https://en.wikipedia.org/wiki/Frame_technology_(software_engineering)

    • dmbarbour says:

      I’m alluding to video frames: a discrete time step in which we take all available network outputs, then push available inputs. Though, with the reactive KPN variant from the later article, we can get frame-like behavior far more precisely.
