The ability to anticipate the future – even if only a small, fixed distance – is a valuable tool in reactive demand programming (RDP) that supports a variety of temporal requirements. Importantly, anticipate achieves these benefits without compromising valuable features of RDP such as eventual consistency or network resilience.
anticipate :: (Signal s) => PosDiffTime -> ((s a) ~> (s (Maybe a)))
The ‘Maybe’ on the right-hand side is there because RDP signals have finite durations; you can ‘anticipate’ that a signal will be inactive in the future.
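There is no reference implementation of RDP to call here, so the following is a toy sketch: assume a signal is modeled as a finite list of samples taken at a fixed rate. ‘anticipate’ then becomes a bounded lookahead, with ‘Nothing’ marking instants where the signal is anticipated to be inactive. All names below are invented for illustration.

```haskell
type Time = Double

-- Toy model only: a signal as samples taken every 'step' seconds,
-- active only for the length of the list. 'anticipateSamples step dt'
-- looks dt seconds (i.e. dt/step samples) ahead at each instant.
anticipateSamples :: Time -> Time -> [a] -> [Maybe a]
anticipateSamples step dt xs =
    [ if i + n < length xs then Just (xs !! (i + n)) else Nothing
    | i <- [0 .. length xs - 1] ]
  where n = round (dt / step)  -- samples of lookahead
```

For example, `anticipateSamples 0.1 0.2 "abcd"` looks two samples ahead; the last two instants see ‘Nothing’ because the signal becomes inactive within the lookahead window.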
Where might we use this?
- filtering white noise (smoke, snow, engines, fans) from video or sound
- gesture recognition
- rhythm or temporal pattern detection
- visual target tracking
- route planning and collision avoidance between vehicles
- resource management and scheduling; e.g. pre-load textures or areas
- smooth animation of a robotic arm by peeking at future positions or commands
How would we use this?
anticipate 0.1 &&& id >>> synch >>> afmap foo
If you are unfamiliar with arrow notation, the above behavior reads (in pseudo-Haskell):
- (&&&) input demand is duplicated and split on two paths
- (anticipate 0.1) on the first path, we look ahead 0.1 seconds
- (id) we pass the second path unmodified
- (synch) we combine the two paths back into a common signal
- (afmap foo) combine the status on each path using function ‘foo’
Function ‘foo’ would look like
\(a',a) -> body, and could have any number of uses and meanings. If the input demand is button state, then ‘foo’ could generate button-press events by computing a difference between states. If the input demand describes an action or position for a robotic arm, then looking ahead would allow us to optimize our actions or positions to conserve energy and reduce the stresses of abrupt change. If the input were the position of a cooperating vehicle, we could use the positions and vectors to help avoid collisions.
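As a concrete sketch of the button-press case (the event encoding and names here are invented; note that, per the signature of ‘anticipate’, the looked-ahead side of the pair is a ‘Maybe’):

```haskell
data Edge = Press | Release | NoChange deriving (Eq, Show)

-- A 'foo' for a boolean button signal: compare the anticipated state
-- a' against the current state a to derive an edge event.
foo :: (Maybe Bool, Bool) -> Edge
foo (Just True,  False) = Press     -- currently up, about to be down
foo (Just False, True ) = Release   -- currently down, about to be up
foo _                   = NoChange  -- no change, or signal ending
```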
The ‘synch’ primitive is part of the asynchronous arrows model I’m developing for RDP. I’ll detail it in a later post, but for now (in brief):
synch :: (Signal s) => ((s a) :&: (s b)) ~> (s (a,b))
delay :: PosDiffTime -> (r ~> r)
Basically, it is possible for the ‘delay’ to be different on the first and second paths, and ‘synch’ would normally equalize these delays and combine the responses into a single signal. This is logical synchronization, with real-time consequences, but the implementation can be wait-free. Synchronization is necessary to maintain the duration coupling property of RDP (the duration of a response equals the duration of its demand). I didn’t use ‘delay’ in this example, so ‘synch’ doesn’t need to delay anything; it just needs to zip together the separate signals.
[edit: I've since separated `synch` from `zip`.]
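Putting the pieces together in the toy sample model (again, all names are invented for illustration): since this example uses no ‘delay’, the latencies on the two paths are equal, so ‘synch’ degenerates to a zip of the sample streams, and ‘afmap’ is an ordinary map over samples.

```haskell
type Time = Double

-- Lookahead over the toy sample model (Nothing once the signal ends).
anticipateS :: Time -> Time -> [a] -> [Maybe a]
anticipateS step dt xs =
    [ if i + n < length xs then Just (xs !! (i + n)) else Nothing
    | i <- [0 .. length xs - 1] ]
  where n = round (dt / step)

-- The behavior 'anticipate dt &&& id >>> synch >>> afmap foo':
-- duplicate the input, look ahead on one path, zip the two paths
-- back together, then map foo over the combined samples.
pipeline :: Time -> Time -> ((Maybe a, a) -> b) -> [a] -> [b]
pipeline step dt foo xs = map foo (zip (anticipateS step dt xs) xs)
```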
Where does the anticipated information come from?
Our ability to ‘anticipate’ depends on the RDP model propagating the speculative ‘future’ of each signal before we actually reach it. For something like vehicle motion, this ‘future’ might come from communicating with cooperative vehicles, or from a world-model that fuses sensor data to make good predictions, or very easily a combination of the two. Either way, it is assumed that the further we anticipate, the more fallible our predictions become. The future is not fixed in RDP; rather, a new future is reactively updated and propagated. The idea is that every agent in the entire RDP system will have access to a gracefully degrading, continuously repaired model of its personal future… and will automatically propagate that future in a compositional manner (no explicit developer code is needed).
When we anticipate far into the future, we should expect the stability and determinism of our applications to suffer. For example, if we try to display what a UI will look like 10 seconds from now, we won’t effectively account for user input or network inputs during the intervening 10 seconds. RDP does not prevent you from looking far into the future, since the useful distance and stability will vary heavily by domain (i.e. some plans are stable for one second, others for one day), but diminishing returns should keep developers bounded within the limits of sanity.
In the presence of ‘delay’, we can stabilize our anticipation considerably. For example, if we delay 0.1 seconds then promptly peek 0.1 seconds into the future, we are effectively generating a 0.1 second temporal buffer that we can use for gesture recognition, noise filtering, and the like. This buffer would be just as stable as the input.
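In the toy sample model, that delay-then-anticipate trick amounts to giving each instant a sliding window over input that has already arrived (window size measured in samples rather than seconds here; the name is invented):

```haskell
-- 'delay k' followed by anticipating up to k samples: at each instant
-- we hold the last k+1 samples, all of them already observed, so the
-- window is exactly as stable as the input itself.
temporalBuffer :: Int -> [a] -> [[a]]
temporalBuffer k xs =
    [ take (k + 1) (drop i xs) | i <- [0 .. length xs - (k + 1)] ]
```

Each window could then feed a gesture recognizer or noise filter directly.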
Is this secure?
Yes and no. You’ll never propagate any information that you were not about to send anyway, which is in most cases sufficient for security purposes. However, if you need military-style operations security, where you limit how much an agent is allowed to know until shortly before or during a mission, then the distance to which you can ‘anticipate’ must be controlled. It would not be difficult for an RDP language to introduce an ‘opsec’ primitive to model constraints on how much future information is propagated across the network.
opsec :: PosDiffTime -> (r ~> r)
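A sketch of how such an ‘opsec’ primitive might act on the wire, assuming (purely for illustration) that a signal update carries a stable current value plus a list of speculative future samples:

```haskell
type Time = Double

-- A hypothetical signal update: the current value plus speculative
-- future samples, each tagged with how far ahead of 'now' it lies.
data Update a = Update { current :: a, future :: [(Time, a)] }

-- opsec: truncate speculation beyond the permitted horizon before
-- the update is propagated across a trust boundary.
opsec :: Time -> Update a -> Update a
opsec horizon (Update v fs) = Update v (takeWhile ((<= horizon) . fst) fs)
```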
Why not use a world-model to make predictions?
Chances are, many rich applications will also use a world-model to make predictions. I’ve already mentioned that a world-model is a potential source for anticipated information. A world-model is essentially a stateful agent (or configuration of agents) that integrates predictions with observations, continuously using the latter to correct the former. I believe we’ll find that ‘anticipate’ and world-modeling are very synergistic because (a) we may ‘anticipate’ what our world-model will look like after a few seconds, and (b) relevant predictions from the world-model propagate through the system without sharing access to the model. The latter is especially useful because it avoids coupling to said model, and allows a whole system to benefit from plugging in a world-model even if the other components are unaware of it.
But we don’t want to express world-models inside RDP behaviors.
RDP behaviors are not stateless. As noted earlier, we do get a buffer by combining anticipate and delay. RDP for a distributed system may allow translucent use of a cached response during disruption (e.g. explicitly annotating it as ‘cached(Response)’ for the duration of the disruption).
However, RDP behaviors forbid any use of state that would compromise certain properties: spatial commutativity, spatial idempotence, eventual consistency, resilience. The commutativity and idempotence properties are what make RDP very declarative, allowing RDP behaviors to be rearranged and refactored similar to pure functions. Those are also the basis for powerful optimizations; e.g. since demands are idempotent, we can form ad-hoc content delivery networks to scale when a large flash crowd is making a whole lot of the same demand, rather than forcing the central server to process each demand independently. Eventual consistency and resilience allow developers to reason about the system despite disruption, temporary faults, startup times, arrival order of new agents, et cetera.
This reduces to a fairly simple rule: RDP behaviors can only keep state that they could swiftly regenerate after disruption or restart. Caches and delay buffers fit this constraint. Accumulators, integrals, and finite state machines do not… though arbitrary state is still allowed via resources external to the RDP behavior model. Stateful resources will represent sensors, actuators, UI, databases, world-models, et cetera.
So why should we favor anticipate instead of state?
Because of the principle of least power. Because we can use anticipate to solve many of the same problems as state, without introducing the problems associated with state. Because it’s declarative, convenient, local, and doesn’t require coupling to a world-model. Because it doesn’t risk the network resilience and eventual consistency properties. Because it enhances the effectiveness of world-models that do exist in the system.