Almost every procedural language has out or in/out variables. Languages such as Oz/Mozart do something even more interesting – allowing variables to be declared and assigned in separate processes. This feature is sometimes modeled (at a larger granularity) as futures or promises.
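The declare-here, assign-elsewhere pattern can be sketched in Haskell with an `MVar`, which behaves like a single-assignment promise when written once: one thread declares the variable, another fills it, and the reader blocks until the value arrives. (This is an illustration of the promise/future style, not a model of Oz's dataflow variables specifically.)

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar

main :: IO ()
main = do
  -- declare the variable: empty until some process assigns it
  promise <- newEmptyMVar
  -- a separate thread performs the assignment
  _ <- forkIO $ do
    threadDelay 10000            -- simulate work
    putMVar promise (42 :: Int)
  -- the reader blocks until the value is available
  value <- takeMVar promise
  print value                    -- prints 42
```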
Reactive or dataflow models – pipes, iteratees, arrowized FRP, etc. – generally lack a similar feature. Data flows only one direction through composition, except where we introduce explicit loops.
In common practice, this isn’t necessarily a problem. Many dataflow solutions are readily expressed with data moving in one direction. In the remaining cases – e.g. when we want upstream dataflows to adapt to downstream requirements (smart glue, transcoding, specialization, etc.) – we can leverage features in the hosting environment (such as type inference and typeclasses, or ad-hoc constructors, or a little shared state). It might be awkward. But we can make it work, move on, and forget about it. Most programmers don’t expect their pipelines to be a general purpose programming model.
I am developing a general purpose reactive dataflow language. I am interested in modeling “smart glue”, static negotiations, and otherwise context-adaptive components. I want a solution that is elegant and simple. I want this solution to work both for static information (types, interfaces, preferences) and dynamic information.
Fortunately, Amr Sabry provides an interesting solution: negative and fractional types. Together with more traditional algebraic sums and products, these are collectively rational types. I have no intuition for the negative types, nor a concrete idea where they might be useful, though they do seem elegant. But the fractional types fit the role for information flowing upstream.
In a type such as `(a * 1/b)`, we can understand the `a` type as flowing downstream in the normal direction, while the `b` type flows upstream. We even have natural, uniform u-turn operators:
```
turn_forward :: 1 ~> (1/a * a)
turn_back    :: (1/a * a) ~> 1
```
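One way to build intuition for these operators is a hypothetical concurrent encoding in Haskell (my own illustration, not Awelon's semantics): treat `1/a` as a *consumer* of an `a`. Then `turn_forward` introduces a matched consumer/future pair backed by an `MVar`, and `turn_back` eliminates a pair by feeding the value to the consumer.

```haskell
import Control.Concurrent.MVar

-- Hypothetical encoding: a fractional type 1/a is a place that consumes an a.
type Frac a = a -> IO ()

-- turn_forward :: 1 ~> (1/a * a)
-- introduces a matched pair: a consumer and a future for the value
turnForward :: IO (Frac a, IO a)
turnForward = do
  cell <- newEmptyMVar
  pure (putMVar cell, readMVar cell)

-- turn_back :: (1/a * a) ~> 1
-- eliminates a matched pair by feeding the value to the consumer
turnBack :: (Frac a, a) -> IO ()
turnBack (demand, value) = demand value

main :: IO ()
main = do
  (demand, future) <- turnForward
  turnBack (demand, "hello")
  future >>= putStrLn          -- prints "hello"
```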
We can flip any operation around and apply it backwards to `1/a`, simply by using a couple of u-turns.
```
applyInverted :: ((a~>b) * 1/b) ~> 1/a
applyInverted =
    intro1 [turn_forward] first    -- (1/a * a) * ((a~>b) * 1/b)
    assocl [swap second] first     -- (1/a * b) * 1/b
    assocr [swap turn_back] second -- 1/a * 1
    swap elim1                     -- 1/a
-- (this code is pseudo-Awelon)
```
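Under the same hypothetical consumer encoding of `1/a` (again my own sketch, not the pseudo-Awelon above), `applyInverted` collapses to a one-liner: to turn a demand for `b` into a demand for `a`, run the forward operation and feed its result downstream.

```haskell
-- Hypothetical encoding: a fractional type 1/a is a consumer of an a.
type Frac a = a -> IO ()

-- ((a ~> b) * 1/b) ~> 1/a, with effects modeled in IO
applyInverted :: (a -> IO b) -> Frac b -> Frac a
applyInverted f demandB = \a -> f a >>= demandB

main :: IO ()
main = do
  let demandB :: Frac Int
      demandB = print
      demandA :: Frac String
      demandA = applyInverted (pure . length) demandB
  demandA "upstream"           -- prints 8
```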
Of course, this sort of dataflow does have a potential issue: we can now express ‘data cycles’:
```
UF = turn_forward; UB = turn_back

       <-- 1/a ---
      /           \
 1--UF             UB--1
      \           /
       ---- a --->
```
This isn’t surprising; similar data cycles are a known hazard for promises and futures. But it could be problematic: if our language has side-effects, we would need to observe a concrete `a`, and in a cycle that value may never arrive. It would be nice if well-typed programs didn’t “go wrong” in this manner. Fortunately, it is easy to discover these data cycles with static analysis – assuming a relatively rigid program structure, like most reactive/dataflow pipelines. Unfortunately, I don’t know whether the analysis can be made fully compositional, i.e. such that a shallow type analysis could prevent cycles. (Maybe use relative indexes? This would still form a global constraint/unification problem, but it should be relatively simple and independent of implementation details.)
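A toy version of such a static analysis (purely illustrative, and not Awelon's): model each wire as a node, each "x feeds y" dependency as an edge, and reject any program whose dependency graph contains a cycle. The u-turn example above becomes a two-node cycle between `a` and `1/a`.

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

-- dependency graph: wire name -> wires it depends on
type Graph = M.Map String [String]

-- naive DFS cycle detection; fine for small static dataflow graphs
hasCycle :: Graph -> Bool
hasCycle g = any (go S.empty) (M.keys g)
  where
    go visiting node
      | node `S.member` visiting = True
      | otherwise =
          any (go (S.insert node visiting)) (M.findWithDefault [] node g)

main :: IO ()
main = do
  -- the u-turn cycle: a depends on 1/a, and 1/a depends on a
  print (hasCycle (M.fromList [("a", ["1/a"]), ("1/a", ["a"])]))    -- True
  -- a straight-line pipeline has no cycle
  print (hasCycle (M.fromList [("in", ["mid"]), ("mid", ["out"])])) -- False
```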
Interestingly, these fractional types can also be understood as akin to continuations, formalized in a manner that does not interfere with composition and pipelining, eliminating the need for higher-order behaviors and intermediate state. I can only begin to imagine some of the expressiveness benefits: simple cases like reporting preferred data formats, potentially complicated static negotiations (involving linear types or sealed values), working with concepts similar to continuation-passing down a pipeline, etc.
Awelon will support fractional and negative types [edit: if I can make it happen without hindering static visualizations]. They’re simple, elegant, easy to type, easy to implement, and very, very expressive. Fractional types will greatly augment a tacit language with only composition and higher-order blocks.