Stability and RDP

Stability is a valuable property (or set of properties) for any system. For a quick background, I recommend you peruse the Wikipedia articles: metastability, superstabilization, dynamic algorithm, self-stabilization.

Stability in different dimensions is discussed under a variety of names:

  • Stable functions – a small or continuous change in state or input causes a correspondingly small or continuous change in other state and outputs. For example, a function such as (1/sin(x)) is unstable as x approaches zero, even though it is continuous everywhere it is defined.
  • Dynamic algorithms – the cost of computing a new output is commensurate with the amount of change in input conditions. This suggests doing computations with the ‘patches’, and perhaps memoizing prior calculations for easy reuse.
  • Numeric stability – a shrinking epsilon on error for an incremental computation, or, conversely, a monotonic increase of a ‘precision’ function. We often discuss numeric stability with respect to differential equations or integration – e.g. the Euler method is less stable than Verlet integration (see the sketch just after this list). This notion generalizes to stable algorithms, e.g. a stable ‘sort’ should preserve the relative order of elements that compare equal.
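
To make the integration example concrete, here is a small, self-contained Haskell sketch (nothing RDP-specific; every name in it is invented for this illustration) that integrates a unit harmonic oscillator with explicit Euler and with velocity Verlet. The energy of the Euler trajectory drifts steadily upward, while Verlet keeps it near its initial value – the sense in which Verlet is the more stable integrator.

    type State = (Double, Double)  -- (position, velocity)

    accel :: Double -> Double
    accel x = negate x             -- unit spring: a = -x

    -- Explicit Euler: update position and velocity from the current state only.
    eulerStep :: Double -> State -> State
    eulerStep dt (x, v) = (x + dt * v, v + dt * accel x)

    -- Velocity Verlet: half-step of velocity, full step of position, half-step of velocity.
    verletStep :: Double -> State -> State
    verletStep dt (x, v) =
      let vHalf = v + 0.5 * dt * accel x
          x'    = x + dt * vHalf
          v'    = vHalf + 0.5 * dt * accel x'
      in (x', v')

    -- Total energy of the oscillator; an exact solution keeps it at 0.5.
    energy :: State -> Double
    energy (x, v) = 0.5 * (x * x + v * v)

    main :: IO ()
    main = do
      let dt    = 0.1
          run f = iterate (f dt) (1.0, 0.0) !! 1000
      putStrLn $ "Euler  energy after 1000 steps: " ++ show (energy (run eulerStep))
      putStrLn $ "Verlet energy after 1000 steps: " ++ show (energy (run verletStep))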

I am no expert in the subject of stability. Control theory, differential equations, and dynamical systems take these concepts much further than I have ever studied. The main reason it interests me:

In open live systems, stability is a requirement for scalability.

For example, we cannot create a large-scale, open ‘planning’ system if a small change in Alice’s plans can easily cause massive changes in Charlie’s plans. Nor can we scale if a small change in Alice’s plans requires massive computational overhead to compute a small change in Charlie’s plans. Nor can we scale the planning system if ‘intermediate’ plans are highly erroneous such that we waste a lot of computation constructing invalid plans – i.e. it is important that replanning mean continuous refinement. Every form of stability matters to scalability.

Stability and RDP

Reactive Demand Programming (RDP) is designed for open, live, scalable systems, and therefore accommodates stability:

  1. If an RDP signal has a lot of regular structure, we might leverage memoization to avoid recomputing values that have not changed.
  2. Similarly, we can use caching and difference functions (i.e. computing patches) to reduce propagation and recomputation of signal fragments that have not changed. Patches on signals can be temporal (e.g. ‘earliest time of observable change’) or spatial (e.g. previous signal with some inserts and deletes) or both.
  3. Many computations in RDP can be performed directly on a ‘patch’ or ‘update’ to a signal without keeping intermediate state, which keeps costs commensurate with the size of the update (see the sketch after this list). Unfortunately, not every RDP behavior can be supported this way.
  4. If the language reliably provides optimizations (perhaps with the support of annotations), then developers can systematically leverage them, e.g. by using ‘simplification’ behaviors to make more inputs look the same, or by writing more of the program in a patch-transform style.
  5. Use of asynchronous arrows allows developers to partition subprograms based on various properties – rate of change, patterns of change, physical distribution, relative stability. This gives developers a lot of control over where instability is introduced, and how much of the model will need to be recomputed after a change in conditions.
  6. Use of dynamic behaviors allows developers to model partial evaluation, staging, and compilation, i.e. at the interface between a more stable resource and a less stable one. This can greatly reduce the costs and latencies of propagating updates.
  7. RDP exhibits eventual consistency. Eventual consistency is not the same as stability, but there is a relationship: If the inputs and conditions are stable, the outputs will stabilize too. Eventual consistency thus makes stability easier to reason about across composition.
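
As a concrete (and purely hypothetical) illustration of points 2 and 3, here is a Haskell sketch of a signal update or ‘patch’ that carries only the earliest time of observable change plus the new future of the signal, together with a map-like transform applied directly to the patch. None of these names or types come from an actual RDP implementation; they are assumptions made for this example.

    import Data.Map (Map)
    import qualified Data.Map as Map

    type T = Double                     -- logical time

    -- A piecewise-constant signal, represented by its value at each change time.
    newtype Sig a = Sig (Map T a)

    -- A patch: replace everything at or after 'puTime' with 'puFuture'.
    data Patch a = Patch { puTime :: T, puFuture :: Sig a }

    -- Applying a patch keeps the past (strictly before the update time)
    -- and overlays the new future; the cost is governed by the size of the update.
    applyPatch :: Patch a -> Sig a -> Sig a
    applyPatch (Patch t (Sig future)) (Sig past) =
      Sig (Map.union future (fst (Map.split t past)))

    -- A pure transform can run on the patch alone, without touching the
    -- rest of the signal's history.
    mapPatch :: (a -> b) -> Patch a -> Patch b
    mapPatch f (Patch t (Sig m)) = Patch t (Sig (Map.map f m))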

However, RDP behaviors are semantically stateless, so implicit stability is ultimately limited to whatever can be achieved by clever application of caching, memoization, staging, and delay. Improving stability further requires that we provide stable state models to the developers of RDP applications.

Stable State and RDP

For stable state in RDP, we do not want changes to propagate very far through the model. That is, we want certain aspects of the in-the-large system to have a lot of ‘inertia’. The idea is:

  • A small change in conditions often (but not always) results in zero change in state, and it is easy to compute this, or easy to compute the earliest time when a change in state might occur.
  • When a change in state does occur, it is not directly reversible. This might be modeled as shifting transition conditions, or as some sort of ‘pressure’ or ‘cost’ model that prevents vacillation between states. A classic example would be modeling an air-conditioner: the transition from ‘on-to-off’ has a different temperature threshold than the transition from ‘off-to-on’ (see the sketch after this list).
  • Even a large change in conditions often (but not always) results in a small change in state. Put another way: state tends to be a simplification of conditions. The goal here is that staging and sequencing of state models tends to reduce to zero-change propagation after just a few composition steps. This gives us a sort of ‘inertia’ that protects the system as a whole from the whimsy of any given component.
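
Here is a minimal Haskell sketch of this ‘inertia’ idea, using the air-conditioner example; the 75 and 72 degree thresholds are taken from the term-rewriting example later in this article, and the names are otherwise invented for the illustration. Because the thresholds are asymmetric, most small changes in temperature cause zero change in state, and a transition, once taken, is not undone by an equally small change in the opposite direction.

    data AC = On | Off deriving (Eq, Show)

    -- One step of the state model: the two transitions use different
    -- thresholds (hysteresis), so the state changes only when the input
    -- moves decisively.
    stepAC :: Double -> AC -> AC
    stepAC temp Off | temp > 75 = On
    stepAC temp On  | temp < 72 = Off
    stepAC _    s               = s

    -- Fold a sequence of temperature observations into a state trajectory,
    -- starting from Off.
    runAC :: [Double] -> [AC]
    runAC = scanl (flip stepAC) Off

For example, runAC [74, 76, 74, 73, 74, 71] yields [Off,Off,On,On,On,On,Off]: the fluctuations between the two thresholds never toggle the state.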

Not every state model for RDP needs to be a stable state model, of course, but developers should have easy access to stable state models via their language or programming environment. Here are a few interesting possibilities:

  • Temporal Term Rewrite Systems – the ‘state’ is a term, and the RDP demands describe a time-varying set of transition rules. In the most trivial case, this could model a state machine, such as the air-conditioner: whenever temperature is above 75 degrees, we demand a transition from off-to-on; when temperature falls below 72 degrees, we demand a transition from on-to-off. A transition only occurs if at least one of the rules applies. In case of ambiguity, the transitions may be non-deterministic. Stability is achievable because not every change in demands will cause transitions. We could augment this further by explicitly indicating ‘stable states’ in our rules, i.e. to override lower-priority transitions.
  • Continuous Temporal Concurrent Constraint Programming – RDP demands describe a set of time-varying constraints (e.g. X<5, X>Y+3). The ‘state’ is a solution to this problem – a set of simple discrete or piecewise-continuous algebraic signals that ‘thread’ the variables through their constraints. Stability is achieved because a change in constraints does not necessarily force us to change signal values that still satisfy them. The search for solutions is potentially subject to machine learning, and the model can be augmented with soft constraints at varying priorities.
  • Relational or Tuple-space insert/delete – a specialization of temporal term rewriting that uses explicit insert/delete rules over time on a set of values. We have stability because inserted values remain until deleted, and deleted values remain absent until re-inserted (see the sketch after this list). This would mostly be useful when building state from event-like behaviors (such as keyboards).
  • Generative grammar models are a specialization of constraint programming suitable for arbitrarily large sets and world models. Demands are time-varying grammars. ‘State’ is a set of nondeterministic choices that allow an intersection of grammars to reach at least one valid sentence. Stability is achieved by machine learning of search paths that lead most rapidly to ‘successful’ sentence generation – sort of like computer-generated madlibs.
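
For the insert/delete model above, here is a small Haskell sketch: demands are modeled as insert and delete rules applied over time, and the state is the accumulated set. Most updates leave most of the state untouched, which is where the stability comes from. Again, the names and types are assumptions for illustration, not part of any real RDP interface.

    import Data.Set (Set)
    import qualified Data.Set as Set

    -- A demand on the state: insert a value or delete it.
    data Rule a = Insert a | Delete a

    applyRule :: Ord a => Set a -> Rule a -> Set a
    applyRule s (Insert x) = Set.insert x s
    applyRule s (Delete x) = Set.delete x s

    -- The state at any instant is the fold of all rules demanded so far;
    -- inserted values remain until deleted, and deleted values remain
    -- absent until re-inserted.
    stateOf :: Ord a => [Rule a] -> Set a
    stateOf = foldl applyRule Set.empty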

After I initially conceived of RDP (April 2010), I spent several months (June – September 2010) playing with embedding these notions directly into the RDP model in order to further enhance stability. My answer was nay, nay, nay, nay – RDP benefits far too much from its stateless, resilient, deterministic semantics, and predictable real-time performance properties, and I shall not sacrifice them in order to embed a questionable state model. On October 1, 2010, I decided to ‘commit’ RDP to the functional-behavior semantics – before then, I was imagining that RDP might be parameterized by different ‘substrate’ models. (But, nay.)

However, I still feel these would be very useful state models that the ‘programming environment’ could make accessible as a sort of de-facto standard, sort of like Haskell supports ‘newIORef’ as a standard state model.

I’ll discuss state for RDP in more detail in a later article.


One Response to Stability and RDP

  1. I remember eavesdropping on the generative grammars discussion on LtU, and my interest was piqued when the genetic evolution of Christiansen Grammars was mentioned. I have often thought that the difficulty of evolving s-expression programs was somehow an indication that they are a fragile way of representing certain types of computations that we might care about. Have you found any more worthwhile references on the subject in the interim?
