Reactive Demand Programming (RDP) is a reactive programming model I developed circa 2010-2012. The essential idea behind RDP is that “effects” should be **commutative** and **continuous**.

What I mean by “**continuous**” is that behavior must be expressed statefully, attached to stateful representations. Continuity makes it easy to observe and directly manipulate system behavior, which is valuable from a human-in-the-loop perspective. Meanwhile, **commutativity** means that the order in which effects are expressed must be irrelevant. This property simplifies reasoning about concurrent behaviors, both within a program and for multi-agent systems.

To achieve commutative and continuous effects, RDP uses a concept of “**demands**”. A demand is a continuous request from a program (or subprogram), which may be observed and fulfilled by other programs (or subprograms). Demands are only observable collectively, as a set, not individually.

Unfortunately, my original expression of RDP was complicated by three factors:

- Expression of RDP programs was arrowized, which is relatively painful to work with compared to monadic programming.
- I wasted effort supporting “instantaneous” reactive loops, i.e. loops in which we can observe effects without logical delay.
- I required idempotence, such that duplicate expression of an effect does not contribute to system behavior.

Monadic expression of RDP programs is not difficult, especially if we remove instantaneous loops. A simplified monadic RDP API:

```
typeclass (Monoid a) ⇒ Bag a   // requires commutative mappend

typeclass (Monad m) ⇒ RDP m where
    type Var m : * → *
    type Dir m : *
    dir : Dir m → String → m (Dir m)
    var : (Bag a, Typeable a) ⇒ Dir m → String → m (Var m a)
    get : Var m a → m a
    put : Var m a → a → m ()
```

Unlike conventional monadic effect loops, an RDP monadic behavior is applied *continuously*, repeating at each time-step. We have a notion of persistent, labeled variables that are available from one time-step to the next. When we *get* a variable, we receive the value from the previous time-step. When we *put*, we contribute our value to the next time-step. All contributions within a time-step are unioned together via the commutative *mappend*.
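To make these step semantics concrete, here is a hypothetical toy interpreter (not part of the API above — it specializes `Var` to `String` labels and bags to lists of `Int`). Reads see the previous step's store; writes accumulate into the next step's store, so the order of puts within a step is irrelevant up to bag union:

```haskell
import qualified Data.Map.Strict as M

-- Hypothetical toy interpreter for one RDP time-step. Variables are
-- labeled by String; each holds a bag of Ints, modeled as a list whose
-- concatenation stands in for the commutative mappend (a real
-- implementation would use a proper multiset).
type Store = M.Map String [Int]

-- Reads observe the previous step's store; writes accumulate into
-- the next step's store.
newtype Step a = Step { runStep :: Store -> Store -> (a, Store) }

instance Functor Step where
  fmap f (Step g) = Step $ \prev next ->
    let (a, next') = g prev next in (f a, next')

instance Applicative Step where
  pure a = Step $ \_ next -> (a, next)
  Step f <*> Step g = Step $ \prev next ->
    let (h, next')  = f prev next
        (a, next'') = g prev next'
    in (h a, next'')

instance Monad Step where
  Step g >>= k = Step $ \prev next ->
    let (a, next') = g prev next
    in runStep (k a) prev next'

-- 'get' observes the prior step; 'put' contributes to the coming one.
get :: String -> Step [Int]
get k = Step $ \prev next -> (M.findWithDefault [] k prev, next)

put :: String -> [Int] -> Step ()
put k v = Step $ \_ next -> ((), M.insertWith (++) k v next)

-- Run one time-step: evaluate the behavior against the previous store,
-- starting from an empty accumulator for the next store.
step :: Step a -> Store -> (a, Store)
step (Step f) prev = f prev M.empty

-- Two puts contribute demands; within a step, their order is moot.
behavior :: Step [Int]
behavior = do
  seen <- get "votes"
  put "votes" [1]
  put "votes" [2]
  pure seen

main :: IO ()
main = do
  let (seen0, s1) = step behavior M.empty
      (seen1, _)  = step behavior s1
  print seen0   -- the first step observes nothing yet
  print seen1   -- the second step sees the prior step's demands
```

Note that `get` never observes a `put` from the same step; removing that instantaneous feedback is exactly what makes a single step a plain function from the previous store to the next.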

By relaxing idempotence – that is, by using bags of demands instead of sets – we can add entries together. This is convenient for modeling votes, priorities, and weighted preferences. We can similarly model negation or suppression, which is convenient for modeling rules or constraints (see Behavioral Programming for examples of how this might be useful).
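For instance, here is a hypothetical encoding (not part of the API above) of weighted votes, using integer addition as the commutative merge, with negative weights acting as suppression:

```haskell
import Data.Map.Strict (Map, fromListWith, toList)

-- Hypothetical demo: demands as (label, weight) pairs. Because (+) is
-- commutative and associative, the tally is independent of the order in
-- which subprograms contribute their demands.
tally :: [(String, Int)] -> Map String Int
tally = fromListWith (+)

main :: IO ()
main = do
  -- two orderings of the same demand bag produce the same tally
  print (tally [("left", 2), ("right", 1), ("left", 1)]
           == tally [("left", 1), ("left", 2), ("right", 1)])
  -- a negative weight suppresses (cancels against) a positive demand
  print (toList (tally [("alarm", 1), ("alarm", -1)]))
```

With sets of demands, the two `("left", …)` entries above would collapse to one; with bags they accumulate, which is what makes voting and weighting expressible.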

Although we do not allocate ‘new’ variables, a notion of *local state* is still valuable for RDP systems. For the API above, I borrow the concept of filesystem-like directories. Local state is achieved by assigning a distinct subdirectory to each distinct module or subprogram, such that subprograms cannot accidentally (or maliciously) interfere with each other. We can feasibly support object-capability patterns involving sharing of vars. Of course, a major goal of RDP is to preserve access to stable views of state for monitoring, extension, debugging, etc. Thus, local state is never hidden from the parent process.
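A minimal sketch of this directory discipline, under the assumption that a `Dir` is just a path prefix and a var label is a full path (both hypothetical simplifications of the abstract `Dir`/`Var` types above):

```haskell
import qualified Data.Map.Strict as M
import Data.List (isPrefixOf)

-- Hypothetical model: a Dir is a path prefix, and a var label is a full
-- path. A subprogram handed only a subdirectory capability cannot even
-- name (and thus cannot touch) variables outside its own subtree.
type Dir   = String
type Store = M.Map String [Int]

subdir :: Dir -> String -> Dir
subdir parent name = parent ++ name ++ "/"

var :: Dir -> String -> String
var d name = d ++ name

-- The parent, holding an enclosing Dir, can still enumerate a child's
-- state for monitoring or debugging: local state is never hidden.
lsR :: Dir -> Store -> [(String, [Int])]
lsR d = M.toList . M.filterWithKey (\k _ -> d `isPrefixOf` k)

main :: IO ()
main = do
  let root  = "/"
      alice = subdir root "alice"
      bob   = subdir root "bob"
      -- both subprograms use the label "count" without interference
      store = M.fromList [ (var alice "count", [1])
                         , (var bob   "count", [5]) ]
  print (lsR alice store)   -- only alice's subtree
  print (lsR root  store)   -- the parent sees everything
```

Passing `alice` rather than `root` to a subprogram is the object-capability move: the reachable state is exactly the subtree named by the `Dir` value it holds.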

*Effects* in an RDP system are modeled simply in terms of shared variables between a subprogram and its environment – which at the top level should be a runtime or operating system. A subprogram can output demands – e.g. to display a window, or to move a robot – into a variable that the environment observes. The environment can provide inputs into a variable that our subprogram observes. (The specific variables involved can easily be hidden behind a record of monadic operations/capabilities.) Of course, providing all possible inputs at all times would be inefficient. But we can leverage flexible patterns that involve registering available services or resources, or using outputs to demand specific inputs.
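As a hypothetical sketch of this pattern (the variable name `env/display` and the interpretation are made up for illustration): a subprogram writes display demands into an agreed-upon output variable, and the environment acts on the collected demand set each step:

```haskell
import qualified Data.Map.Strict as M
import Data.List (nub, sort)

-- Hypothetical sketch: effects as shared variables. Subprograms write
-- demands (here, window titles to display) into an agreed-upon output
-- variable; the environment reads the collected demands each step.
type Store = M.Map String [String]

demand :: String -> String -> Store -> Store
demand v x = M.insertWith (++) v [x]

-- The environment interprets the demand collection, not individual
-- demands. This particular interpretation happens to be idempotent:
-- demanding the same window twice still opens it once.
envDisplay :: Store -> [String]
envDisplay st = sort (nub (M.findWithDefault [] "env/display" st))

main :: IO ()
main = do
  let st = demand "env/display" "robot-status"
         $ demand "env/display" "map-view"
         $ demand "env/display" "robot-status"
         $ M.empty
  -- the environment opens one window per distinct demand
  mapM_ putStrLn (envDisplay st)
```

The key point is that the subprogram never calls an imperative `openWindow`; it continuously declares that a window should exist, and the window closes when the demand is withdrawn.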

In this sense, RDP has some similarities with tuple spaces or blackboard systems, albeit with RDP variables being more fine-grained than blackboards. There is also a similarity with functional relational programming, as described in Out of the Tar Pit by Ben Moseley and Peter Marks.

Implementing RDP efficiently remains a challenge. In general, we could try to optimize evaluation of an RDP subprogram by memoizing its partial evaluation and only re-evaluating sub-computations whose observable results have changed. This might require an explicit memoization combinator, with a comparable result type. Extending the monad with explicit fork-join concurrency (e.g. `fork : m a → m b → m (a * b)` and `fork_ : m () → m b → m b`) may also help in isolating evaluation of subprograms. However, an efficient implementation of a monadic RDP API should at least be simpler than an efficient implementation of arrowized RDP, by virtue of monads being simpler than arrows.
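The memoization combinator might look something like the following sketch (hypothetical; it caches only the most recent input/result pair, which matches the "re-evaluate only when the observable input changed between time-steps" use case):

```haskell
import Data.IORef

-- Hypothetical sketch of the memoization combinator: remember the last
-- observed input and its result, and re-run the (possibly expensive)
-- computation only when the input actually changed. Requires the input
-- to support equality -- the "comparable result type" caveat above.
memo :: Eq a => (a -> IO b) -> IO (a -> IO b)
memo f = do
  cache <- newIORef Nothing
  pure $ \a -> do
    c <- readIORef cache
    case c of
      Just (a', b) | a' == a -> pure b   -- unchanged: reuse the result
      _ -> do
        b <- f a                         -- changed: re-evaluate
        writeIORef cache (Just (a, b))
        pure b

main :: IO ()
main = do
  evals <- newIORef (0 :: Int)
  g <- memo $ \n -> do
         modifyIORef' evals (+ 1)        -- count real evaluations
         pure (sum [1 .. n :: Int])
  r1 <- g 100
  r2 <- g 100                            -- input unchanged: cache hit
  r3 <- g 101
  n  <- readIORef evals
  print (r1, r2, r3, n)                  -- only two evaluations occurred
```

A single-entry cache suffices here because an RDP sub-computation is re-run once per time-step against a slowly changing store; a change-propagation implementation would additionally track which variables each sub-computation observed.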

Aren’t your directories better represented as records? Then entry selection isn’t based on string values, but field labels that can be statically checked (and no longer depend on Typeable). In particular, something like first-class labels would get you quite a bit of safety and performance since directory indexing can be compiled down to integer offsets.

My original version of this article used types as labels. First class labels should also work just fine, and I agree they could offer superior performance and stronger typing. I chose to use String in the article mostly because I didn’t want to diverge from the main point while describing a simplified API.

Right, so then what performance challenges were you describing, given that accessing variables stored in an environment can be made constant-time? You mentioned earlier that every variable is updated continuously, but presumably only the variables that registered changes will trigger changes in dependencies, similar to FRP (which seems to be what you describe in your closing paragraph).

You also describe a number of applications of RDP, but I suggest providing a code example or two to show how you anticipate some of these examples working. For instance, how shared variables might model effects and whether a language based on RDP would support higher-order functions, and if so, how that integrates with effects (a monadic model?).

I only check in on your blog once a year or so since you have lots of great ideas, but I often finish my reading without a concrete idea of how you foresee them working in a real language. This model of RDP has been the clearest expression of your ideas I’ve found, but perhaps a sample eDSL typeclass of a lambda calculus, or a logic language, or something similar would clarify how this is supposed to work as a language a programmer could use to express a solution.

> Effects in RDP system are simply modeled in terms of shared variables between a subprogram and its environment

This comment and the note about ‘tuple spaces’ reminded me of http://syndicate-lang.org which you might find interesting. It implements hierarchical ‘data spaces’ which are like tuple spaces and can hold ‘demands’.

I wonder what you think of this concatenative presentation of delimited continuations[0], wrt expression of monadic effects like RDP?

[0]: http://lambda-the-ultimate.org/node/5554

I’m a big fan of concatenative languages, and I have explored similar representations (although I’m more interested in bi-directional editable projections than one-way syntactic sugar). But I don’t have any opinion on it with respect to RDP in particular, which is more about having commutative operations and a distributed, external state model.