Stability without State

I understand `state` to be a summary of the past, maintained in the present, to inform the future. State is a powerful tool, and essential in many problems (such as writing this text). But state is often too powerful – and that power corrupts. State enables momentary inconsistencies of the past to propagate into the future, hindering resilience and complicating failure modes.

The future couldn’t last. We nailed it to the past. – Savatage

State is something to avoid whenever feasible.

Fortunately, many problems (planning, configuration, UI or graph layout, dependency management, resource scheduling and control, etc.) don’t call for state. They only need stability – i.e. inertia, so they don’t vacillate between decisions.

The difference is that:

  • For state, the past is meaningful. If we randomize state, we corrupt the program’s whole future. If we reset state, we lose valuable information. If we change the rules guiding state, we introduce inconsistencies or transition problems for the current state.
  • For stability, the past is meaningless. If we randomize or reset conditions in a stable system, or even change the rules, we’ll soon converge on a new stable future. At worst, we do some rework and experience brief turbulence.

To achieve stability without stateful semantics, we must eliminate any formal dependency between the present and the past. Yet, we still need some relationship to the past to have stability – not a causal relationship, but a correlation.

Consider a simple example of control software. If we choose the power-state of an air-conditioner (on | off) based on whether the measured temperature exceeds a threshold (76 degrees Fahrenheit), then minute fluctuations in measured temperature between 76.001 and 75.999 could cause significant changes in the power-state. For stability, we want to dampen the system – tiny changes in measured temperature should not cause rapid power-cycling of the air conditioner. What we want is, say, powering the air conditioning system up at 76 degrees and back down at 73.

This is, of course, trivial to achieve by use of explicit state. But we want to do without.
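
For reference, here’s that trivially stateful version as a Python sketch (thresholds taken from the example above; the names are hypothetical). The stored boolean is exactly the `summary of the past` we want to eliminate:

  # A conventional hysteresis controller. The boolean `self.on` summarizes
  # the past to inform the future -- precisely the state we want to avoid.
  class HysteresisController:
      def __init__(self, on_at=76.0, off_at=73.0):
          self.on = False          # the offending piece of state
          self.on_at = on_at       # power up at or above this temperature
          self.off_at = off_at     # power down at or below this temperature

      def decide(self, temp):
          if temp >= self.on_at:
              self.on = True
          elif temp <= self.off_at:
              self.on = False
          # between the thresholds, the stored past decides the present
          return self.on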

Any deterministic decision made without reference to the past will be volatile in the above sense, i.e. there would need to be a deterministic relationship between the observed temperature 75.999 and whether the air-conditioner is powered up or down. Thus I can conclude: achieving stability without state requires abandoning determinism.

Temperature 75.999 instead admits two states as valid – the air conditioner may be on or off. The present state of the control software is formally non-deterministic. But, informally, we make the actual state stable from instant to instant whenever feasible. With this understanding, we could express a stable air-conditioning example with two declared allowances:

  For temperatures less than 76, the air conditioner may be off.
  For temperatures greater than 73, the air conditioner may be on.

At 74 degrees, if the air conditioner is on, it will typically remain on until the temperature reaches 73 degrees. That’s what stability offers us. But if there is a moment of instability, and we decide the air conditioner should be off, we’ll settle into a new stable system where it remains off until we again reach 76 degrees.

In the absence of `state`, the proposed software is blindly controlling the state of the air-conditioner. There is no query of the current state to decide the next state. This blind control is good enough. We don’t care which state the control software chooses for the air conditioner, so long as it doesn’t vacillate. A volatile pseudo-state and informal stability are simple and sufficient.
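
To sketch what an implementation might look like (Python; the representation of allowances is hypothetical): the previous choice appears only as an optional hint to a stability heuristic, so discarding or randomizing it never yields a wrong answer, merely a possibly different one.

  # Semantically stateless control: each allowance independently admits a
  # power-state at a given temperature.
  allowances = [
      lambda temp: 'off' if temp < 76.0 else None,
      lambda temp: 'on'  if temp > 73.0 else None,
  ]

  def allowed_states(temp):
      states = set()
      for allow in allowances:
          s = allow(temp)
          if s is not None:
              states.add(s)
      return states

  def decide(temp, hint=None):
      # Stability heuristic, outside the semantics: keep the previous
      # choice while it remains allowed. Any member of `valid` is correct.
      valid = allowed_states(temp)
      if hint in valid:
          return hint
      return min(valid) if valid else None   # arbitrary but fixed tie-break

Between 73 and 76 degrees both states are valid, so whatever the previous instant happened to answer tends to persist – stability by correlation with the past, not by causal dependence on it.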

As far as program semantics are concerned, we could power-cycle our air-conditioner every nanosecond between 73 and 76 degrees. But, in practice, we have very few motivations to implement it that way. We have much greater motivation to favor stable decisions (and not only for stability’s sake; efficiency is also affected). And so, in practice, we can program and reason about our systems with a justifiable expectation of stability.

Stateless Stable Models

Constraint programming, answer set programming, satisfiability problems, and similar models seem to achieve the right mix of non-determinism and logically instantaneous semantics. The runtime must reactively tweak a solution in accordance with changing conditions, but indeterminism of the solution allows it to be `stable` across much variation in condition.

I’d like to take advantage of state-of-the-art work on SMT solvers. I believe that easy access to SMT solvers could be a silver bullet for many problems. (I’d need to stabilize the solution, anyway, to fit reactive systems.) For RDP, I also need an idempotent, commutative, extensible model, i.e. where each rule is introduced by one demand.

One feasible approach is to leverage the well-proven constraint-logic programming model. But, rather than entailment rules, I split the extensions into constraints and allowances. For example, consider the system:

  • foo(X,Y) is allowed by X > 2*Y, bar(X,Y)
  • foo(X,Y) is allowed by X < Y/2
  • foo(X,Y) is constrained by X ≤ 100
  • foo(X,Y) is constrained by X ≥ 0

The resulting entailment rule for foo is simply the sum (∨ = or) of allowances times the product (∧ = and) of constraints – in this case: foo(X,Y) :- (((X > 2*Y) ∧ bar(X,Y)) ∨ (X < Y/2)) ∧ (X ≤ 100) ∧ (X ≥ 0). These constraints and allowances are trivially commutative and idempotent, and it is clear that we could `inject` more such rules, which makes the model very extensible.
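
In executable terms, the composition is just an OR over the allowance predicates AND-ed with every constraint predicate. A minimal Python sketch, with bar stubbed out:

  def bar(x, y):      # stub for the bar(X,Y) rule, assumed defined elsewhere
      return True

  allowances  = [lambda x, y: x > 2*y and bar(x, y),
                 lambda x, y: x < y/2]
  constraints = [lambda x, y: x <= 100,
                 lambda x, y: x >= 0]

  def foo(x, y):
      # sum (OR) of allowances times product (AND) of constraints
      return (any(a(x, y) for a in allowances) and
              all(c(x, y) for c in constraints))

Appending a duplicate predicate to either list changes nothing, and order is irrelevant – which is the idempotence and commutativity claimed above.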

Note: the RHS of each allowance or constraint uses only (∧), implicitly via comma. Allowances – choices and decision points – can only be introduced at symbols. This rule exists for two reasons. First, it ensures that allowances are uniformly extensible. Second, it isolates choice to named symbols, which makes representing a sequence of choices much easier (e.g. for recording or learning from them). It might be better to make this even more extreme and require that every constraint simply be a name for a rule (so it is easier to weaken constraints and identify duplicate constraints).

So far I’ve described only a system of constraints and allowances, forming rules with choices.

What happens next is an agent demands a particular symbol, such as `foo` (with no embellishments; for RDP, it must be easy to know when two agents ask for the same thing). The response is a value that meets the constraints – in this case an (X,Y) pair. In an RDP system, the allowances and constraints may change over time. The goal is to provide a stable answer – avoiding, or at least minimizing, changes in the answer, and also avoiding changes in choice (for allowances). The approach to stability is heuristic, not part of the program semantics.

Of course, it is possible that there is no answer to foo, or that we don’t find an answer for a long time. The client agent is also continuously informed of this status, i.e. that there is no answer or it’s undecided.
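
A naive generate-and-test sketch of the responding side (Python; a real implementation would use a proper constraint or SMT solver, and the integer search grid here is an arbitrary assumption):

  from itertools import product

  def solve_foo(allowances, constraints, previous=None):
      # Returns a stable (X, Y) answer, or None to report `no answer yet`.
      def valid(x, y):
          return (any(a(x, y) for a in allowances) and
                  all(c(x, y) for c in constraints))
      # Stability heuristic, outside the semantics: re-test the previous
      # answer first; if the changed rules still admit it, keep it.
      if previous is not None and valid(*previous):
          return previous
      for x, y in product(range(101), repeat=2):
          if valid(x, y):
              return (x, y)
      return None      # continuously reported to the demanding agent

Reusing the lists from the previous sketch, solve_foo(allowances, constraints) yields one valid pair; feeding each answer back as `previous` at the next instant is what keeps the answer stable while rules churn.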

Stability with Optimization and Preference

Seeking an optimum solution is contradictory to stability. For stability, we need to stick with our old choices for a while. Sometimes this means standing by a merely adequate solution. But stability isn’t entirely contradictory to optimization – in particular, if we seek local optima – a local minimum or maximum – we can achieve metastability. It would be convenient to leverage these optima to reach better, preferred solutions when opportunity presents itself.

I only have a rough idea how to do this: for each allowance add some metadata about (abstract) sunk costs (which are always positive) and recurring costs (which might be negative to model gain). The system will seek `better` solutions, where better means a lower total upkeep cost, or even a sentence that pays for itself. The upkeep for a `sunk` cost is zero until it needs to change, but in some environments (where rules are changing rapidly) repeatedly paying sunk costs might be the largest contributor to upkeep. Recurring costs are paid continuously, i.e. the value might represent a cost per second. Preferences would be represented in terms of low recurring costs, and reluctance to change in terms of high sunk costs.

A hiccup is accounting for the cost contribution of constraints. Common cost combiners (+, *) are not idempotent, so we can’t just add constraint costs (unless we’re willing to remove equivalent constraints via symbolic analysis, which I am not). We do want constraint costs to contribute to each decision so we don’t get silly answers that would be impossible to implement. A viable option is `cost of choice + max of costs of constraints` so that if a constraint is repeated twice its costs are only counted once.
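
A sketch of this accounting (Python; the field names and numbers are hypothetical, not part of any design):

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Allowance:
      name: str
      sunk: float        # paid once per switch to this choice; always positive
      recurring: float   # paid continuously; may be negative to model gain

  def upkeep(choice, prev_choice, constraint_costs):
      # Sunk cost is paid only when the decision actually changes.
      sunk = choice.sunk if choice != prev_choice else 0.0
      # Idempotent combiner: a constraint repeated twice counts only once.
      return sunk + choice.recurring + max(constraint_costs, default=0.0)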

Addendum (2012 April 05): the weighted logic language Dyna might offer some neat ideas on how to express logic with values. It would be convenient to replace those commas (foo(X,Y),(X>Y),etc.) with the cost combiners (foo(X,Y)+(X>Y)+etc.). The value `false` then becomes an infinite sunk cost. But I’m not sure how this would work with recurring costs. Might be better to stick with simple weights (w1*foo(X,Y), etc.).

Orthogonal Intelligence

An interesting property is that these `stateless stable models` can transparently be turned into distributed intelligence systems, possessing both knowledge and creativity. It doesn’t take much: heuristically remember what has worked in the past, and try those solutions first. The intelligence principle is similar to a neural network, except that a constraint system provides its own feedback about what works. Keeping these `memories` becomes a source of stability without semantic state. The system is free to forget and learn new successful search patterns.
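
Even a tiny sketch makes the point (Python; representation hypothetical). The memo only biases search order, so forgetting or corrupting it can never change which answers are valid:

  success_weight = {}   # choice name -> decayed success score

  def search_order(choice_names):
      # Try historically successful choices first.
      return sorted(choice_names, key=lambda n: -success_weight.get(n, 0.0))

  def record_success(name, decay=0.9):
      # Exponential decay: the system is free to forget old wins
      # and learn new successful search patterns.
      for k in success_weight:
          success_weight[k] *= decay
      success_weight[name] = success_weight.get(name, 0.0) + 1.0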

Ultimately, this small (but compositional) intelligence provides developers with predictable and stable system behavior, even across patterned variations in rules. We don’t need global analysis of change patterns; we’ll simply adapt to them and learn how to find solutions quickly even in a changing environment. Rules change in predictable ways, and learning captures these changes.

I was surprised to find this valuable feature emerging from a simple approach to effective stability. It actually worried me a little. I wrote `build Skynet` on my list of career goals in high school, but I didn’t actually mean it. Fortunately, secure modularity (the object capability model) can ensure such an implicit distributed intelligence is chained to negotiating the interests of humans.

At the moment, I’m short on ideas on how to model the learning itself. Presumably one would need to keep a highly compressed summary of which search paths have worked in the past – enough to guide a search or offer a useful heuristic weight to a success path, but not so much that it becomes a memory burden. Ideally I want fixed memory per node, or perhaps logarithmic with time. More design work is necessary. I’m wondering about approaches involving exponential decay to combine and abstract similar search paths (e.g. based on similar symbols and values).

Conversely, I could probably turn many machine learning models into stateless stable models. For example, I could adapt a dynamic hierarchical clustering algorithm to provide a stable classification of points.

[Addendum Apr 18]: I have learned that the class of dynamic algorithms for time-varying data is called dynamic anytime algorithms. Many of these seem applicable to creating stateless stable models.


12 Responses to Stability without State

  1. John Shutt says:

    This makes an interesting read, which for me turned into a frustrating read toward the end, as it seemed some subtle disconnect was preventing your key point from coming across clearly to me. The disconnect is apparently over what deep concept you mean to invoke by the word “state”.

    As I understand statefulness (and as I had thought you meant to define it, when I first read the start), what makes the system stateful is that there’s a correlation between the temperature now and the temperature in the past. That is, whether the system is stateful depends on the behavior of the world, not on the behavior of the air conditioner. By the end of the post, though, I was no longer certain you’d meant the word the way I’d thought, because whether the air conditioner controller is deterministic or nondeterministic makes no difference to the statefulness of the system in my sense.

    It does occur to me that the state of the world is somewhat nondeterministic, and introducing some fuzzy logic into the state of the air conditioner avoids an impedance mismatch; but I feel that’s still not quite what you’re getting at.

    • dmbarbour says:

      I said in my first sentence what I mean by `state`: a summary of the past, maintained in the present, to inform the future. This isn’t a mere correlation. It’s a causal relation.

      The world is stateful, of course. The simplest rock, for example, carries a summary of its own past – never a complete record, due to conservation of energy, but every scratch and nick and fingerprint tells a story to those who can read them. And this summary is always maintained: all of physics is integrals and differential equations. State gives stability to the physical world.

      The question is whether our software should be stateful, too.

      For the most part, we do not want our software to model the world. We put complex problems in software because it is easier to manipulate and maintain software than it is to manipulate and maintain a desirable physical reality. And we’d be wise to keep it that way. Stateful semantics place an extra burden on developers to correctly maintain a causal relationship between past and present, present and future.

      Maintaining state is essential to some problems – we can’t edit text without state, for example – but I posit that we could do with orders of magnitude less state if we only used it where essential.

      One way to avoid state in the causal sense is to make software stable by coincidence. To achieve this, much software – including the software that controls the air-conditioner – becomes non-deterministic rather than causal.

      I’ll add a few sentences above to help clarify that control of the air-conditioner is `blind`: there is no formal observation of current state to inform the future state.

  2. While your explicit constraints admit stateless solutions, your implicit constraint of inertia added by the solver requires memory of some sort, even just for correlation, doesn’t it? Not that that is bad in my opinion. I think you are on the right track in separating and composing these different levels of constraints. If some sort of state is needed to achieve a constraint, then so be it.

    Also, to align closer with common English, I think it would help for you to add “only” to your “may” sentences to make it clear that the permission is negated outside the condition. e.g. some variant of this:

    Only for temperatures less than 76 may the air conditioner be off.
    Only for temperatures greater than 73 may the air conditioner be on.

    The “allowed” syntax is also a bit odd, since the allowances are packed into a single disjunction, but I don’t know how to improve it and I don’t quite understand what you’re trying to achieve there.

    Thanks for the LtU link, too. I recently started planning a reactive text editor similar to what Sean McDirmid briefly describes there, so it was neat to see some common thoughts on that.

    • dmbarbour says:

      Sure, some memory would be part of the implementation. But we wouldn’t be presenting stateful semantics to the user of this model. Consequently, the user would not encounter the problems of inconsistent state or getting stuck in a bad state. Persisting the memory wouldn’t be critical, only useful. The models would be naturally partition-tolerant.

      Regarding the word `only`: Allowances are supposed to be additive. So, for example:

      For temp less than 76 the AC may be off
      On Wednesdays, for temp less than 85, the AC may be off

      These two allowances would combine in a useful way. Converting these to the constraint-logic expressions:
      ac(S) ∨= off_state(S),temp(T),T<76
      ac(S) ∨= off_state(S),today(D),weekday(D,wed),temp(T),T<85

      The `∨=` can be read as `is allowed by`.

      If you use the word `only`, you cannot achieve the additive allowances based on different conditions.

    • dmbarbour says:

      I note that your proposed sentence “Only for temperatures less than 76 may the air conditioner be off” actually describes a constraint. Taken pedantically, it doesn’t actually say the air conditioner may be off for temperatures less than 76; it only says that every rule for `air conditioner may be off` is subject to a constraint that the temperature is less than 76 degrees. Basically:

      ac(S) ∧= (off_state(S) ∧ temp(T) ∧ T < 76) ∨ (on_state(S))

      For various reasons of extensibility and annotation, I don’t want to express ∨ conditions in the RHS of my rules so I’d actually need to split this like:

      ac(S) ∧= ac_k(S)
      ac_k(S) ∨= off_state(S),temp(T),T<76
      ac_k(S) ∨= on_state(S)

      Anyhow, basically this would add the ac_k(S) rule to every ac(S) allowance (∨=) rule. It is acting as a constraint.

      It isn’t clear to me what you mean by the `allowed` syntax. But hopefully a few examples have helped clear things up.

  3. I see – you’re focused on weighting constraints as a strategy for better fault tolerance, i.e. deciding which state is more expendable than other state.

    You’re right — in order to maintain your `OR` allowance semantics, “only” would have to be applied to the set of all those allowances, not to each allowance as I did.

    My general point is that you implicitly close the OR at the end of the definition, while English usually assumes an open definition and thus requires explicit closure (using “only” or something similar), which is part of why your terminology reads strangely.

    • I am assuming an open definition – i.e. any number of independent agents may contribute constraints or allowances to the model, and what they contribute can change over time. At any given instant, however, there will be a finite number of contributing agents and a finite specification for the constraint model. I think people will not have trouble understanding that the allowances at any given instant are just what is specified.

      Weighted constraints can be used for fault tolerance, but they can also be used to `balance` between preferred properties, i.e. finding an acceptable compromise. It’s a flexible technique.

  4. So, the definition is always changeable and always assumed to be complete (i.e. all terms have been specified). The completeness is why I was calling it “closed”, while I think the changeableness is why you are calling it “open”.

    You’re right that once people understand how you translate the English, they’ll get used to it, but it’ll be odd for newbies since the English “may” does not usually imply completeness.

    It seems like two functionally equivalent logic circuits could respond differently to adding a new allowance depending upon how they are implemented. Is that right?

    • Note that `may` does not imply completeness in my model, either. That is, taken individually, each sentence is incomplete, open. No single sentence can close the set of allowances. This is important since independent agents should be able to contribute constraints and allowances without stepping (too much) on each other’s toes. I need open sentences for an open system. (Mutability isn’t the reason I called it open, but it is a necessary feature of any open system – agents come, agents go, by their own authority.)

      Stateless stable models are non-deterministic and thus can behave differently despite identical specification and stimuli. Differences may exist in search heuristics, optimizations, memoization or learning, timing anomalies, and a variety of implementation details.

      But “adding a new allowance” should not destabilize the model if it already has a good solution (unless we are using some sort of meta-stable mechanism based on weighted constraints). The idea is that agents can change the allowances much more frequently than the model changes its solution. That contributes to stability.

  5. Let’s consider your original example:

    (A) For temperatures less than 76, the air conditioner may be off.
    (B) For temperatures greater than 73, the air conditioner may be on.

    If the set of allowances is not complete, then that definition asserts (A or B or ?) which doesn’t tell us anything. It is only once we close the `OR` (treat it as a complete set so it is just (A or B)) that we can conclude that at least one of those two statements must be true, which allows us to derive that the AC must be OFF below 73 and ON above 76.

    This is in contrast to asserting an incomplete set of ANDed constraints (A and B and ?) which tells us that A and B are true regardless of any other terms.

    • Each allowance is, individually, incomplete and open. This is why the English I used is appropriate, even in connotation. The sentences say nothing about the completeness of their set. Nor should they. Only an agent outside has the ability to meaningfully judge completeness of the set.

      My inner pedant cries at your use of (A or B or ?). After all, it is the allowed AC states that are OR’d together, not the statements as a whole. But I think you know this well enough, and I can grasp your intended meaning.

      It is only once we close the `OR` (treat it as a complete set so it is just (A or B)) that we can conclude that at least one of those two statements must be true

      Perhaps. But that isn’t a conclusion I care to make. I say “find me one valid solution”, not “prove to me this is the only solution”. I don’t need to worry about those questionable `?` allowances because they cannot be assumed. I don’t even need to worry about B, if I can demonstrate A.
