Reactive Demand Programming (RDP) is a reactive programming model I developed circa 2010-2012. The essential idea behind RDP is that “effects” should be commutative and continuous.
What I mean by “continuous” is that behaviors must be expressed statelessly, attached to stateful representations. Continuity makes it easy to observe and directly manipulate system behavior, which is valuable from a human-in-the-loop perspective. Meanwhile, commutativity means that the order in which effects are expressed must be irrelevant. This property simplifies reasoning about concurrent behaviors, both within a program and across multi-agent systems.
To achieve commutative and continuous effects, RDP uses a concept of “demands”. A demand is a continuous request from a program (or subprogram), which may be observed and fulfilled by other programs (or subprograms). Demands are only observable collectively, as a set, not individually.
Unfortunately, my original expression of RDP was complicated by three factors:
- Expression of RDP programs was arrowized, which is relatively painful to work with when compared to monadic programming.
- I wasted effort supporting “instantaneous” reactive loops, i.e. loops where effects can be observed without logical delay.
- I required idempotence, such that duplicate expressions of an effect do not contribute to system behavior.
Monadic expression of RDP programs is not difficult, especially if we remove instantaneous loops. A simplified monadic RDP API:
```
typeclass (Monoid a) ⇒ Bag a  // requires commutative mappend

typeclass (Monad m) ⇒ RDP m where
    type Var m : * → *
    type Dir m : *
    dir : Dir m → String → m (Dir m)
    var : (Bag a, Typeable a) ⇒ Dir m → String → m (Var m a)
    get : Var m a → m a
    put : Var m a → a → m ()
```
Unlike conventional monadic effect loops, an RDP monadic behavior is applied continuously, repeating at each time-step. We have a notion of persistent, labeled variables that are available from one time-step to the next. When we get a variable, we’ll receive the value from the previous time-step. When we put, we contribute our value to the next time-step. All contributions within a time-step are unioned together, via commutative mappend.
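To make these time-step semantics concrete, here is a small Python sketch. The names (`Var`, `step`) are mine, not part of any RDP implementation: `get` observes the previous step’s value, and `put` contributions are merged for the next step via a commutative monoid.

```python
# Hypothetical sketch of RDP time-step semantics: `get` sees the previous
# step's value; `put` contributions are merged commutatively for the next.

class Var:
    def __init__(self, empty, merge):
        self.empty = empty      # monoid identity, e.g. frozenset()
        self.merge = merge      # commutative mappend, e.g. set union
        self.prev = empty       # value observable in the current step
        self.next = empty       # contributions gathered for the next step

    def get(self):
        return self.prev

    def put(self, contribution):
        # order of puts within a step must not matter (commutativity)
        self.next = self.merge(self.next, contribution)

def step(variables, behavior):
    """Run one time-step, then commit contributions for the next step."""
    behavior()
    for v in variables:
        v.prev, v.next = v.next, v.empty

# Two subprograms contribute demands to a shared variable in one step.
demands = Var(frozenset(), frozenset.union)
step([demands], lambda: (demands.put(frozenset({"open window"})),
                         demands.put(frozenset({"move robot"}))))
# In the following step, demands.get() observes both contributions.
```

Note that a demand persists only while it is continuously re-asserted each step; a demand that stops being expressed simply vanishes from the next step, matching the continuous-effects constraint.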
By relaxing idempotence – that is, by using bags of demands instead of sets – we can add entries together. This is convenient for modeling votes, priorities, weighted preferences. We can similarly model negation or suppression, which is convenient for modeling rules or constraints (see Behavioral Programming for examples of how this might be useful).
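As a concrete illustration in Python, `collections.Counter` can serve as the bag, with counter addition as the commutative merge; duplicate demands add up (votes, weights), and negative counts can model suppression. The `bag_merge` name is mine, for illustration only.

```python
# Sketch: relaxing idempotence by using bags (multisets) of demands.
# Duplicate entries add together, so duplicates are meaningful.
from collections import Counter

def bag_merge(a, b):
    c = Counter(a)
    c.update(b)          # adds counts; commutative and associative
    return c

votes = bag_merge(Counter({"red": 2}), Counter({"blue": 1, "red": 1}))
# votes["red"] == 3, votes["blue"] == 1  -- weighted voting

veto = bag_merge(votes, Counter({"blue": -1}))
# veto["blue"] == 0  -- the demand is suppressed by a negative entry
```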
Although we do not allocate ‘new’ variables, a notion of local state is still valuable for RDP systems. For the API above, I borrow the concept of filesystem-like directories. Local state is achieved by assigning a distinct subdirectory to each distinct module or subprogram, such that subprograms cannot accidentally (or maliciously) interfere with each other. We can feasibly support object-capability patterns involving sharing of vars. Of course, a major goal of RDP is to preserve access to stable views of state for monitoring, extension, debugging, etc. Thus, local state is never hidden from the parent process.
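A minimal Python sketch of such directory-scoped state, assuming a flat store keyed by path (the `Dir`, `subdir`, and `listing` names are hypothetical):

```python
# Hypothetical sketch: filesystem-like directories scope local state.
# Distinct subdirectories cannot collide, yet the parent can observe
# everything beneath its own path (local state is never hidden).

class Dir:
    def __init__(self, store=None, path=()):
        self.store = store if store is not None else {}
        self.path = path

    def subdir(self, name):
        # child shares the store but is confined to a longer path prefix
        return Dir(self.store, self.path + (name,))

    def put(self, name, value):
        self.store[self.path + (name,)] = value

    def get(self, name):
        return self.store.get(self.path + (name,))

    def listing(self):
        # parent view: all state under this directory's prefix
        n = len(self.path)
        return {p: v for p, v in self.store.items() if p[:n] == self.path}

root = Dir()
alice, bob = root.subdir("alice"), root.subdir("bob")
alice.put("x", 1)
bob.put("x", 2)        # same name, distinct paths: no interference
# root.listing() observes both entries; alice.listing() only its own.
```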
Effects in an RDP system are modeled simply in terms of variables shared between a subprogram and its environment – which at the top level should be a runtime or operating system. A subprogram can output demands – e.g. to display a window, or to move a robot – into a variable that the environment observes. The environment can provide inputs into a variable that our subprogram observes. (The specific variables involved can easily be hidden behind a record of monadic operations/capabilities.) Of course, providing all possible inputs at all times would be inefficient. But we can leverage flexible patterns that involve registering available services or resources, or using outputs to demand specific inputs.
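A sketch of this pattern in Python (hypothetical names throughout): the subprogram writes demands into an output variable that the environment watches, and reads whatever inputs the environment has provided.

```python
# Hypothetical sketch: effects as shared variables. The subprogram only
# reads and writes variables; the environment interprets demands and
# supplies inputs, e.g. a runtime driving a display and a mouse.

def subprogram(env):
    env["demands"].add("display window")     # demand an effect
    return env["inputs"].get("mouse")        # observe a provided input

env = {"demands": set(), "inputs": {"mouse": (3, 4)}}
pos = subprogram(env)
# The environment now sees "display window" in env["demands"] and can
# act on it; the subprogram observed the mouse position (3, 4).
```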
In this sense, RDP has some similarities with tuple spaces or blackboard systems, albeit with RDP variables being more fine-grained than blackboards. There is also a similarity with functional relational programming, as described in Out of the Tar Pit by Ben Moseley and Peter Marks.
Implementing RDP efficiently remains a challenge. In general, we could try to optimize evaluation of an RDP subprogram by memoizing its partial evaluation and only re-evaluating sub-computations whose observable results have changed. This might require an explicit memoization combinator, with a comparable result type. Extending the monad with explicit fork-join concurrency (e.g. `fork : m a → m b → m (a * b)` and `fork_ : m () → m b → m b`) may also help in isolating evaluation of subprograms. However, an efficient implementation of a monadic RDP API should at least be simpler than an efficient implementation of arrowized RDP, by virtue of monads being simpler than arrows.
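One way such a memoization combinator might look, sketched in Python (hypothetical; a real implementation would key on the specific variables a subcomputation reads):

```python
# Hypothetical sketch of a memoization combinator: re-run a subcomputation
# only when its observed inputs changed, and report whether the result
# changed so downstream computations can be skipped as well.

def memo(f):
    cache = {}
    def run(inputs):                         # inputs must support equality
        if "in" in cache and cache["in"] == inputs:
            return cache["out"], False       # inputs unchanged: skip f entirely
        new = f(inputs)
        changed = "out" not in cache or cache["out"] != new
        cache["in"], cache["out"] = inputs, new
        return new, changed                  # comparable results let downstream
    return run                               # computations skip when unchanged

calls = []
double = memo(lambda x: (calls.append(x), x * 2)[1])
double(3)        # evaluates
double(3)        # skipped: inputs unchanged
double(4)        # re-evaluates
# calls == [3, 4]: the underlying function ran only twice
```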