Declarative State

State models favored for imperative programming, such as mutable references, are unsuitable for declarative programming models including Reactive Demand Programming (RDP). Declarative state is not difficult or complicated, but it is different and might require its own eureka moment.

The general scenario is that we are manipulating state by use of declarations.

As discussed in a previous article, declarations are statements that may hold over a period of time, and may overlap in time. (Contrast events, which hold only for an instant.) Declarations that hold at any given instant must be commutative and idempotent. (Contrast events, which are generally processed in some order and may be counted.)

To help elucidate some of the constraints on declarative state, I break the general scenario into four quadrants on two axes – exclusive vs. concurrent writers, and discrete vs. continuous time.

Exclusive Writer, Discrete Time

By exclusive writer, I mean there is at most one declaration for how to write state. And discrete time means that there is a well-defined `next` time that is distinct from the `current` time.

This is the easy case.

There is no risk of write conflicts. There is no risk of semantic confusion with regards to duplicate writes. The functions for manipulating state are simple – we just declare the next state to be a function of the current state and other observations the writer might have made. Example declarations might be `next = curr + 2` or `next = curr * 3`.

There will also be some default `idle` behavior for state – most likely `next = curr` or `next = 0`; the idle behavior will apply if there is no other declaration. (The auto-zero behavior would have some nice properties with regards to garbage collection.)
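This easy case can be sketched directly in terms of the pseudocode above; `step` and the lambdas below are illustrative names, not part of any real API:

```python
# Exclusive writer, discrete time: at most one declaration per instant.
def step(curr, declaration=None):
    """Compute the next state; with no declaration, idle as next = curr."""
    if declaration is None:
        return curr          # idle behavior: next = curr
    return declaration(curr)

state = 0
state = step(state, lambda c: c + 2)  # next = curr + 2  -> 2
state = step(state, lambda c: c * 3)  # next = curr * 3  -> 6
state = step(state)                   # no declaration: idle -> 6
```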

Concurrent Writers, Discrete Time

Concurrent writers introduce two challenges relative to the exclusive writers model: write declarations conflict, and instability from idempotence.

Write Declarations Conflict

Taking the earlier example, we might see BOTH declarations – `next = curr + 2` and `next = curr * 3` – at the same time. These proposed operations clearly do not commute – i.e. (curr+2)*3 is always 4 points different from (curr*3)+2. So we can’t just apply them in an arbitrary order.

Fortunately, we don’t actually need the operations to commute. For declarative definition, we need the declarations to commute.

Essentially, at any given instant we have a set of declarations gathered from all writers, local and distributed, attempting to influence the shared state resource. This set will ultimately be represented in some ordered manner in physical memory, but the next state must not depend on that ordering, nor on any properties that might have influenced it (such as race conditions, origin, or position in source code).

So long as we obey that restriction, we have a lot of freedom on how to resolve conflicts. A few valid options include:

  1. Report a `write conflict` and perform the idle behavior. Push the burden of avoiding concurrent operations back on the developers.
  2. Choose one operation to win based on some arbitrary condition that is NOT related to ordering in the set. For example, execute both then choose whatever option results in the smaller value.
  3. Apply a true-random selection. This, perhaps counter to intuition, is valid because the cause of the ordering is independent of the declarations. Though, I’d disfavor this option because it unnecessarily introduces indeterminism.
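Option 2 can be sketched as follows; the resolver's result depends only on the set of declarations present, never on their ordering (the names here are illustrative):

```python
def resolve(curr, declarations):
    """Next state from a set of conflicting write declarations.

    Order-independent: execute every candidate and keep the smaller
    resulting value, so memory layout and race conditions cannot matter.
    """
    if not declarations:
        return curr  # idle behavior: next = curr
    return min(d(curr) for d in declarations)

# Both conflicting declarations from the text, at the same instant:
decls = {lambda c: c + 2, lambda c: c * 3}
assert resolve(5, decls) == 7  # min(5 + 2, 5 * 3)
```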

So the write conflicts aren’t avoided. Rather, they’re resolved in some arbitrary manner.

While such resolution is sufficient to respect the declarative definition, the arbitrariness does seem… unsatisfactory, inelegant.

But declarative definition has nothing to say on the elegance of state models. The job of finding `collaborative` state models that effectively support concurrent writers is ultimately up to language and framework designers.

Instability from Idempotence

Consider two independent declarations: `next = curr + 2` and `next = curr + 5`. Additions do commute, so if we applied both the result would be curr+7. So it seems okay to apply both demands in an arbitrary order. This would be even more `obvious` if operations on state were restricted to additions, so we’re just looking at declarations of the form `+2` and `+5` (once we’ve isolated it to a particular state resource).

Since the writers are independent, they do not know exactly what the other writers are doing. The writer of the `+2` declaration might observe that state is actually increasing 7 points each round, but does not know why. There are many possible scenarios resulting in that observable outcome: {+2,+5} or {+2,+1,+4}, or {+2,+3,+4,-2}. Or even {+2,+2,+5} due to idempotence (which eliminates duplicates).

Now for a change: a writer of a +2 declaration changes it to a +3 declaration. What happens? Well, depending on the initial scenario, the increment per round might change by -2, 0, +1, or +3.
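The arithmetic can be checked directly by modeling each instant's additive declarations as a set, so idempotence collapses duplicates. The first four scenarios come from the text; the fifth is my own illustrative addition, to exhibit a change of 0:

```python
# Idempotence collapses duplicate declarations: model them as a set.
def increment(decls):
    return sum(set(decls))  # duplicates count only once

before = [[2, 5], [2, 1, 4], [2, 3, 4, -2], [2, 2, 5], [2, 2, 3, 6, -4]]
assert all(increment(s) == 7 for s in before)  # all observed as "+7 per round"

# One writer changes its +2 declaration to +3:
after = [[3, 5], [3, 1, 4], [3, 3, 4, -2], [3, 2, 5], [3, 2, 3, 6, -4]]
deltas = sorted({increment(s) - 7 for s in after})
assert deltas == [-2, 0, 1, 3]  # one small change, four different outcomes
```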

Due to circumstances outside local control, a small change in the declaration of one writer might cause a wide variety of changes in state. For developers, day to day, this is not fun. It is chaotic, unstable, frustrating.

So, how do we achieve stability?

Well, the problem was caused because the declarations were directly additive. +1 and +3 added to +4. +2 and +2 added to +2, due to idempotence. What we need is a state model that becomes more stable as the concurrent writers are closer to agreement – additive won’t work.

We could try averaging the values. But consider a scenario such as {+1,+1,+1,+5}. We want this to `average` to +2, but it would actually average to +3 since the +1 only counts once (due to idempotence). So, while this is better, it is insufficient to avoid the problem.

It turns out that we simply cannot treat writers the same as declarations.

When we need to count writers – e.g. to model a vote, or a set of independent forces – then we must make it more explicit by providing an identifier for the writer as part of each declaration. For example, instead of {+2,+2,+5} the scenario becomes {(w1,+2),(w2,+2),(w3,+5)} where w1, w2, and w3 are values (perhaps URLs or GUIDs) identifying distinct writers.

Otherwise, we can use simple mechanisms that do not depend on the number of writers. For example, I might favor simply selecting the smallest value, so {+2,+2,+5} reduces to +2.
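Both alternatives can be sketched in a few lines; the writer identifiers are illustrative placeholders:

```python
# With explicit writer identities, duplicate values from distinct writers
# survive idempotence, so counting and averaging become meaningful:
tagged = {("w1", 2), ("w2", 2), ("w3", 5)}
total = sum(v for _, v in tagged)       # 9: each writer counts once
average = total / len(tagged)           # 3: a well-defined vote/average

# Without identities, use a combiner insensitive to the number of writers:
untagged = {2, 2, 5}                    # idempotence collapses to {2, 5}
assert min(untagged) == 2               # {+2,+2,+5} reduces to +2
```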

Again, this sort of arbitrary choice seems inelegant and unsatisfactory. But if we want more elegant options, we’ll need state models where the declarations contributing to the future state are actually idempotent, so that we may combine them easily.

Exclusive Writer, Continuous Time

The discrete time assumption simplified many things – state is constant in each `instant` and there is a clear `next` instant that serves as the trigger for updates. Continuous time introduces new possibilities, such as continuous varying state (useful for physics simulations). But it also introduces many challenges.

Discrete State with Continuous Time

While discrete-varying state models can be used with continuous time, the lack of a `next` instant (or, rather, the infinite abundance of `next` instants) hinders certain expression.

Consider a declaration such as `next = curr + 2`, held for even one millisecond. What would that mean? Do we increment once? Do we increment a billion times because there are a billion picoseconds in a millisecond? Do we increment infinity times? None of those options makes sense.

Without use of `next`, we can still use discrete state in continuous time. However, the writer must describe a fixpoint computation. Conceptually, the declaration is applied an infinite number of times before even one picosecond has passed, and so we must have fixpoint convergence. (Note that idempotent updates qualify trivially as fixpoints.)

We can model many things with fixpoint state models. But it is difficult to prove that the fixpoint of arbitrary functions will converge. And the lack of expressiveness is frustrating. And design patterns to work around problems (e.g. using delayed writes to incrementally modify state) tend to be onerous and unstable.

Fortunately, there is an idiom to regain full expressiveness. Instead of declaring `next = curr + 2`, I might declare `next = curr + 2 with cooldown of 2ms`. The cooldown would prevent the operation from applying an infinite number of times, and essentially specifies its own concept of discrete time.
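One hypothetical reading of the cooldown idiom, sketched below: the declaration supplies its own notion of discrete time, so it cannot apply infinitely often while held. The function and parameter names are my own, purely illustrative:

```python
# Cooldown idiom: apply the declaration, then rest until the cooldown
# expires; repeat for as long as the declaration is held.
def run_with_cooldown(decl, cooldown_ms, held_ms, curr):
    t = 0.0
    while t < held_ms:
        curr = decl(curr)
        t += cooldown_ms  # state rests during the cooldown window
    return curr

# `next = curr + 2 with cooldown of 2ms`, held for 6ms: three applications.
assert run_with_cooldown(lambda c: c + 2, cooldown_ms=2, held_ms=6, curr=0) == 6
```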

The `cooldown` concept leaves a lot to be desired. What if the write declaration changes and the state is still in cooldown? It seems we might easily miss important writes. But there are other concepts similar to cooldown that achieve the same purpose. I’ve explored a time debt approach and timeouts.

Continuous State

Continuous-varying state can potentially have a different value at every instant in time. This can be useful for physics simulations; for example, we could precisely model the path of a bullet. But there are a number of new challenges here.

I have only idly explored this space, as it is a mere curio to my current visions.

But I have determined that there aren’t many simplistic continuous models that actually seem useful. For example, I had considered representing state in terms of an integral of forces, but every practical use-case I’ve developed calls instead for a system of differential equations (to account for friction, air resistance, etc.).

One model I have found reasonably useful is a continuous follow-the-leader or dangling-carrot behavior, suitable for simple continuous animations (e.g. for a user interface, or robotic motion). I’ve been wondering if such a model might be generalized in terms of Bezier control points.
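A discrete approximation of the dangling-carrot behavior can be sketched as exponential pursuit: each small step moves a fixed fraction of the remaining distance toward the target. This is my own simplification, not the Bezier generalization mentioned above, and all names and constants are illustrative:

```python
# Follow-the-leader: chase the target by a fraction of the remaining gap.
def follow(curr, target, rate, dt):
    return curr + (target - curr) * min(1.0, rate * dt)

x = 0.0
for _ in range(50):
    x = follow(x, target=10.0, rate=5.0, dt=0.01)
assert 9.0 < x < 10.0  # smoothly approaches 10 without overshooting
```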

Concurrent Writers in Continuous Time

Concurrent writers again introduce the challenge of combining writes, resolving write conflicts, and ensuring stability.

For discrete-varying state, combining writes makes it even more difficult to validate whether a fixpoint function will converge. Also, while the time-debt idiom seems to work fairly generically to support multiple writers, it took me a while to discover that pattern.

For continuous-varying state, we can expect continuous-varying declarations to control it (essential for composition). The problem of combining write declarations and making useful resolutions means we’ll start to encounter the challenge of detecting zero-crossings – i.e. finding the time at which signal A becomes less than signal B. Unless our continuous signals are of an analytic nature, detection of zero-crossings will generally be imprecise and expensive – i.e. implemented by sampling near suspected collision points.

RDP and Declarative State

RDP falls into that last quadrant. The time model is continuous. Since I cannot create state in RDP, I assume that other writers can also `discover` it, and I also lack freedom to provide an ad-hoc combining function. I must instead `discover` state with an appropriate combiner function already built-in.

The inability for normal developers to introduce ad-hoc combiner functions means I must invent a couple of `standard` state models for RDP – models that RDP languages can be assumed to support, in the same sense that imperative languages are assumed to support mutable references.

I’ve written about a few of my efforts and ideas – reactive state transition, animated reactive state transition, and animated reactive term rewriting, among others.

One decent declarative state model that is so simple it is almost beneath mention is simply mutating a set. This is a common state model for temporal logic approaches (such as Berkeley’s Dedalus and BLOOM), and in practice it is used very much like a tuple space or an SQL table. It works as follows:

  • State is a set of values, initially empty
  • A declaration can add a value
  • A declaration can delete a value
  • In case of conflict, delete wins
  • Update is modeled as add + delete
  • Idle behavior is to maintain the current set.
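The rules above can be sketched in a few lines (a minimal model, not any particular implementation; the `step_set` name is illustrative):

```python
def step_set(state, adds, deletes):
    """One instant of the set-mutation model; delete wins on conflict."""
    return (state | adds) - deletes

s = set()                                      # initially empty
s = step_set(s, adds={"a", "b"}, deletes=set())
s = step_set(s, adds={"c"}, deletes={"a"})     # an update: add + delete
s = step_set(s, adds={"c"}, deletes={"c"})     # conflict: delete wins
assert s == {"b"}
```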

Since add and delete operations are idempotent, and commutative with add and delete on other values, there are very few conflicts. The ONLY conflict is adding and deleting the same value. State of an application can be divided across many such spaces.

Unfortunately, this state model isn’t very effective for a broad class of problems – in particular, incremental updates of values, such as editing a textbox. And there are no obvious and simple tweaks to fix that problem without causing other problems.

More practical for a broad spectrum of applications is the animated reactive term rewriting state model. It is the most promising `de-facto` option for RDP at the moment – unless I figure out how to make RST blazing fast and achieve my vision of reactive databases with reactive indexes and such.


This article is not about the utility of declarative state. I speak only about what it means in practice, and how it compares to imperative state. I would like readers to take two things away from this article:

  1. how declarative state can be achieved, and the relevant constraints (inflexibility) compared to imperative state models
  2. that resolving concurrent write conflicts is rather arbitrary regardless of the imperative vs. declarative distinction; it just expresses itself in different ways (e.g. last update wins vs. an arbitrary resolution rule)
This entry was posted in Language Design, Reactive Demand Programming, State.

3 Responses to Declarative State

  1. shelby says:

    Afaics, spatial declarativity in the context of signals is a red-herring, i.e. imho you may be overclaiming the potential advantages of RDP. This requires a fundamental understanding of what declarative programming really is.

    • dmbarbour says:

      My declarative definition tells me “what declarative programming really is”. But I do not ascribe any particular `goal` or `benefit` to declarative programming. I have not yet, on this blog, addressed the question “Why declarative programming?”. That might make a good future article, though.

      Your interest in declarative programming (as I infer from the linked article) regards precise expression of `intended` semantics, avoiding semantic noise such as caused by unintentional expression of order. Thus you focus on indeterminacy, aliasing, error. Declarative state does nothing to avoid those problems, so you see little advantage to declarative state – nothing to counterbalance the constraints it imposes on developers.

      My interest in declarative expression regards reasoning about a system and impact analysis for proposed changes. Declarative properties have a wide, beneficial impact on refactoring, abstraction, code reuse, optimizations (multicast, duplicate elimination), mirroring (for reliability, latency, or scalability), administration, debugging, maintenance, extension, live programming, federated programming. The benefits I seek are not much compromised by a little semantic noise.

      • shelby says:

        Seems you make a valid point. From the link I provided, events are never spatially declarative because time is not held constant over an instant for all expressions. So you are focused on the benefits of the spatial declarativity (as opposed to temporal declarativity, which neither signals nor events possess) to the extent they are not destroyed by the level of semantic noise.

        It will be interesting to see more research into the tradeoff due to the inflexibility I mentioned (with a research link) in my link, and to see which are the ideal domains for each approach and any models for interop between eventless signals and events, if that turns out to be useful.
