The disjoin and conjoin behaviors in RDP are the two primitives that connect sums and products:
```haskell
disjoin :: (x :&: (y :|: z)) ~> ((x :&: y) :|: (x :&: z))
conjoin :: ((x :&: y) :|: (x :&: z)) ~> (x :&: (y :|: z))
```
The disjoin behavior means that we can take a pipeline `x` and split it into two parallel paths based on the state of another pipeline (y or z). Conjoin is dual to disjoin: it partially merges a pipeline in order to execute the same behavior on both paths.
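Ignoring signals, durations, and latency, disjoin and conjoin correspond to the two directions of the familiar distributive law between products and sums. A minimal value-level sketch in plain Haskell (the `V` suffix is mine, to distinguish these from the RDP behaviors):

```haskell
-- Value-level analogue of RDP's disjoin: distribute a product over a sum.
disjoinV :: (x, Either y z) -> Either (x, y) (x, z)
disjoinV (x, Left  y) = Left  (x, y)
disjoinV (x, Right z) = Right (x, z)

-- Value-level analogue of conjoin: factor the product back out.
conjoinV :: Either (x, y) (x, z) -> (x, Either y z)
conjoinV (Left  (x, y)) = (x, Left  y)
conjoinV (Right (x, z)) = (x, Right z)
```

These two functions are mutually inverse; in RDP the same law holds, but only up to synchronization and latency between the pipelines.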
A little background: the :&: connective indicates a `product` – i.e. two behaviors or pipelines are active in parallel (equal durations) and may be zipped together into tuples. The :|: connective indicates a `sum` – i.e. only one pipeline is active at any given instant (partitioned durations) based on some earlier decision, and the behaviors might later be merged into one contiguous signal. Both :&: and :|: represent asynchronous values, in that the two sides come from separate pipelines and may have different latencies. `Asynchronous` doesn’t mean `distributed`, but readily allows for it. The zip and merge operators will synchronize and combine these asynchronous values into a single pipeline. Products and sums are duals, with zip opposite split and merge opposite dup:
```haskell
zip   :: (Signal s)   => (s x :&: s y)  ~> s (x,y)
split :: (SigSplit s) => s (Either x y) ~> (s x :|: s y)
merge :: (x :|: x) ~> x
dup   :: x ~> (x :&: x)
```
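A toy denotational sketch may help fix intuitions for these four operators. Here I model a signal as a function from time to `Maybe a`, with `Nothing` meaning inactive; this model (and the `S` suffix names) are mine, and omit the duration, latency, and partition bookkeeping of real RDP signals:

```haskell
import Control.Applicative (liftA2)

type Time  = Double
type Sig a = Time -> Maybe a   -- Nothing = signal inactive at that instant

-- zip: both sides must be active to produce a pair (equal durations).
zipS :: Sig x -> Sig y -> Sig (x, y)
zipS sx sy t = liftA2 (,) (sx t) (sy t)

-- split: partition one signal's activity between two branch signals.
splitS :: Sig (Either x y) -> (Sig x, Sig y)
splitS s = ( \t -> s t >>= either Just (const Nothing)
           , \t -> s t >>= either (const Nothing) Just )

-- merge: combine two signals with disjoint activity into one.
mergeS :: Sig x -> Sig x -> Sig x
mergeS sl sr t = maybe (sr t) Just (sl t)

-- dup: both copies share the full activity of the original.
dupS :: Sig x -> (Sig x, Sig x)
dupS s = (s, s)
```

Note how `splitS` yields two signals that are never simultaneously active, so zipping them always produces `Nothing` – a value-level echo of the claim that products have equal durations while sums have partitioned durations.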
Expressing the equivalent of an if/then/else expression is a three-step process involving split (lifting the boolean into control flow), disjoin (making the environment observable within the choice), and merge (which `closes` the if/then/else branches).
```haskell
type EBool = Either () ()

ifThenElse :: (SigSplit s)
           => (env ~> rhs)
           -> (env ~> rhs)
           -> ((env :&: s EBool) ~> rhs)
ifThenElse onTrue onFalse =
    second split         >>> -- lift boolean into local control flow
    disjoin              >>> -- distribute decision to split environment
    (bfst +++ bfst)      >>> -- eliminate vestigial boolean
    (onTrue +++ onFalse) >>> -- operations on environment
    merge                    -- combine results of true and false paths
```
Depending on `rhs`, the above could be a statement or expression.
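The same shape can be sketched with plain Haskell functions, ignoring signals and durations. This is a sketch of the semantics, not RDP itself: at the value level `EBool` is already a sum, so the `split` step vanishes, and `disjoinV` is the ordinary distributive law defined inline:

```haskell
import Control.Arrow ((>>>), (+++))

type EBool = Either () ()

ifThenElseV :: (env -> rhs) -> (env -> rhs) -> (env, EBool) -> rhs
ifThenElseV onTrue onFalse =
    disjoinV                 -- distribute the environment across the decision
    >>> (fst +++ fst)        -- eliminate the vestigial boolean (the () payload)
    >>> (onTrue +++ onFalse) -- apply each branch to its copy of the environment
    >>> either id id         -- merge :: Either rhs rhs -> rhs
  where
    -- value-level disjoin: (x, Either y z) -> Either (x, y) (x, z)
    disjoinV (x, Left  u) = Left  (x, u)
    disjoinV (x, Right u) = Right (x, u)
```

For example, `ifThenElseV (+1) (*2) (5, Left ())` takes the true branch and yields 6, while `Right ()` yields 10.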
I think this model of ifThenElse is interesting in that it exposes some inefficiencies and compositional weaknesses of if/then/else. Every decision involves synchronization at three places: on opening the decision (split), on accessing the environment (disjoin), and on closing the decision (merge). For rich expression and composition, developers should favor the naked (onTrue +++ onFalse), working with abstract decisions while delaying commitment to their source. The idea is to avoid local decisions and shared environments, pushing both closer to the edges of the application.
These conclusions aren’t unique to RDP. Even down at the CPU level, one might recognize branch prediction concerns as a workaround for the synchronization overheads inherent to if/then/else. There are proposed alternatives even for imperative and functional programming – cf. Multi-return Function Call by Olin Shivers and David Fisher [LtU node].
In practical terms, `disjoin` on (x :&: (y :|: z)) can be implemented by a multi-step process:
- Duplicate the `x` behavior for the LHS and RHS of the :|: connective.
- Mask the `x` behavior on the LHS with the activity of `y`, and mask the `x` behavior on the RHS with the activity of `z`.
- To generate the `control` signals for y and z, I need something like a `drop` behavior (drop :: forall x . x ~> u ()).
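The three steps can be sketched under the same toy model as before: a signal at a single instant is a `Maybe` value (`Nothing` = inactive), and the sum side is a pair of signals with disjoint activity. The names (`Inst`, `dropI`, `maskI`, `disjoinI`) are mine, not RDP primitives:

```haskell
-- A signal observed at a single instant: Nothing means inactive.
type Inst a = Maybe a

-- drop :: forall x . x ~> u ()  -- keep only the activity profile
dropI :: Inst x -> Inst ()
dropI = fmap (const ())

-- mask: x is present only where the control signal is active
maskI :: Inst x -> Inst () -> Inst x
maskI x c = c >> x

-- disjoin: duplicate x, then mask each copy by its branch's activity.
disjoinI :: (Inst x, (Inst y, Inst z)) -> ((Inst x, Inst y), (Inst x, Inst z))
disjoinI (x, (y, z)) =
  ( (maskI x (dropI y), y)   -- x masked by y's activity
  , (maskI x (dropI z), z) ) -- x masked by z's activity
```

If `y` is active and `z` inactive at the instant in question, the left branch receives `x` and the right branch receives nothing, as desired.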
Steps 2 and 3 together imply that there is a global `unit` signal type – `u ()` – for each RDP behavior, such that every signal can be masked by u (), and every signal can reduce to u (). A global unit type implies a global time model, but still allows for a variety of signal types (i.e. continuous time-varying vs. discrete time-varying signals). Unfortunately, representing the existence of this unit type in Haskell is somewhat painful (at least if I want to keep generality, it means enforcing a typeclass on every signal that becomes part of the behavior), and I’m still working on a `comfortable` way to do so, including tweaks to the signals model I described in an earlier article.
Static Sums and Dynamic Behaviors in Reactive Models
I have studied many reactive programming models, in many variations. First-class support for conditional expressions at the reactive layer is a very rare feature. It isn’t really a necessary feature for anything: dynamic behaviors (generate behavior based on observed conditions or history) and use of decisions as part of the value layer (Either a b) can adequately cover any cases where one might use a static set of choices in the reactive model.
While explaining RDP in the past, I have a few times been presented with a question: why bother with sums when you have dynamic behavior?
First, sums help complete the model, making it less dependent upon the underlying `value` layer. Indeed, I could lift the notion of `boolean` values into my model (RBool s = s () :|: s ()), and conceptually (though not pragmatically) could model 32-bit integers as products of 32 RBool choices. I don’t really need any values other than asynchronous ones, so long as I provide some behaviors for `+` and `*` and the like. Sums are, as noted above, the dual to products.
I could wax on about how sums and dynamic behaviors serve nearly orthogonal roles, and about the virtues of duality (sum is yin to product yang), and present analogies to using `case` statements when you could use Church booleans in Haskell.
But here’s the real reason: performance and stability.
- Static sum behaviors provide stability, allowing a single behavior to serve under many common conditions, and thus survive longer in context without need to swap out. Stable behaviors avoid set-up and tear-down costs, and support many optimizations (memoization, caching, partial evaluation or tracing JIT) more effectively than volatile behaviors.
- Dynamic behavior provides specialization, allowing an application to make only relevant choices based on the observable context and accessible resources. Basically, dynamic behavior implies dynamic compilation and linking. Developers can pay for a closer approximation to what they use.
- Dead-code elimination does not easily cross an `eval` expression since it is difficult to anticipate how much of the environment an evaluated expression will observe or influence. But if `sum` types are inputs or outputs from a dynamic behavior, one can at least avoid synchronization and possibly eliminate dynamic code based on dead sinks (i.e. an output that is dropped by the static context).
I envision developers relying on these combined properties, i.e. building towers of reactive staged interpreters and leveraging push-based reactive semantics to more easily identify points of specialization (any interface of slow-changing code with fast-changing code or data).