Hierarchical transaction machines (HTXMs) are an interesting idea I’ve been developing recently. HTXMs accomplish my goals of live, resilient, declarative, extensible software systems. Further, HTXMs accomplish these goals in a way that’s easy to explain and easy to integrate with existing services and devices.
First, let’s ignore that ‘hierarchical’ adjective. I’ll get back to it!
A transaction machine (TXM) is essentially: a deterministic transaction, repeated indefinitely.
We assume our system has a set of TXMs. Non-determinism arises due to an arbitrary schedule of TXM executions within the set. Process coordination is implicit – if a transaction is unproductive, then deterministic repetition would still be unproductive, so for efficiency we simply wait for a change to an observed variable. Programmers can leverage this, for example by aborting a stream-processing transaction when the input channel is empty. External agents and devices are implicitly modeled as TXMs interacting asynchronously and atomically through shared variables.
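To make this concrete, here is a minimal, single-threaded sketch of one TXM being repeated until it becomes unproductive. Everything here (`TVar`, `run_until_stable`, the read/write-as-parameters style) is illustrative, not an established API.

```python
class TVar:
    """A shared variable observed and updated by transactions."""
    def __init__(self, value):
        self.value = value

def run_until_stable(txm, max_steps=100):
    """Repeat a deterministic transaction until a run produces no writes.
    At that point repetition would be unproductive, so a real runtime
    would park the TXM and wait for a change to an observed variable."""
    for step in range(max_steps):
        writes = {}
        def read(var):
            return writes.get(var, var.value)   # read-your-writes
        def write(var, value):
            writes[var] = value
        txm(read, write)
        changed = {v: x for v, x in writes.items() if x != v.value}
        if not changed:
            return step                          # quiescent
        for var, value in changed.items():       # commit (single-threaded,
            var.value = value                    # so trivially atomic)
    return max_steps

# Example: a stream-processing TXM that drains an input channel.
inbox = TVar([3, 4])
total = TVar(0)

def summer(read, write):
    queue = read(inbox)
    if not queue:
        return                                   # input empty: do nothing
    write(inbox, queue[1:])
    write(total, read(total) + queue[0])

run_until_stable(summer)
print(total.value, inbox.value)  # 7 []
```

A real runtime would of course interleave many TXMs and detect read-write conflicts; this sketch only shows the repeat-until-unproductive loop for a single machine.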
TXMs have many nice properties:
TXMs support liveness: We can implicitly read our TXM behavior definition at the start of each transaction. This enables us to modify system behavior atomically. This can be leveraged for process control, safe runtime upgrades, and developing application models around live-coding.
TXMs have robust process coordination: There is no need for polling, locks, signals, or interrupts. There is no risk of deadlock or dropped signals. We can wait on arbitrary conditions without specific condition variables. A process scheduler can ensure fairness or prioritize transactions that touch HCI variables.
TXMs are robustly securable: we can define TXMs as operating on an abstract environment provided as a parameter, without static variables. This makes it easy to sandbox the environment, or to simulate it for testing purposes. We can safely work with distrusted TXM definitions by shallowly wrapping them.
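As a sketch of that shallow wrapping, assuming the read/write-as-parameters style, we can interpose a whitelist check on every variable access. `sandbox` and the variable names are illustrative; a real runtime would also buffer writes so that an aborted transaction commits nothing.

```python
def sandbox(txm, allowed):
    """Restrict a distrusted TXM to a whitelisted set of variable names."""
    def wrapped(read, write):
        def safe_read(var):
            if var not in allowed:
                raise PermissionError(f"read of {var!r} outside sandbox")
            return read(var)
        def safe_write(var, value):
            if var not in allowed:
                raise PermissionError(f"write to {var!r} outside sandbox")
            write(var, value)
        return txm(safe_read, safe_write)
    return wrapped

# A distrusted TXM that tries to peek at a variable it doesn't own.
def sneaky(read, write):
    write("log", "hello")
    read("secret")

env = {"log": None, "secret": 42}
guarded = sandbox(sneaky, allowed={"log"})
try:
    guarded(env.__getitem__, env.__setitem__)
except PermissionError as e:
    print("blocked:", e)
```

The same wrapping technique supports simulation for testing: hand the TXM a mock environment instead of a restricted one.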
TXMs are openly extensible: TXMs are anonymous and operate on a shared environment, like a multi-agent blackboard system. It’s easy to add new features to a system by introducing TXMs that observe or tweak some variables also used by other TXMs. It’s equally easy to remove features.
TXMs are intrinsically idempotent: introducing more than one copy of a TXM won’t affect a system’s logical behavior. This allows us to reason declaratively about whether or not a system has a particular behavior or extension. A set of TXMs is also commutative, but that’s not unusual for concurrency models.
TXMs are fault tolerant and resilient: via hierarchical transactions, we can robustly support strong exception safety with fallbacks for graceful degradation. In a set of TXMs, even if some operate in a failed or degraded mode, the others can operate normally. After the cause for failure is fixed, a TXM will swiftly recover to optimal operating conditions.
TXMs are suitable for real-time programming: it is not difficult to develop a scheduler that guarantees time slots for hard real-time mission-critical TXMs. We can prioritize transactions responsible for HCI to ensure low-latency reactivity. Compared to threads, TXMs give us very nicely bounded operations for scheduling.
TXMs are easily debuggable: It’s easy to capture and simulate the variable environment observed by a transaction for error reports and regression testing. If transactions return a value on error or commit, we can easily record the history of such values and render it in a task manager or debugger. Debuggers can be supported as normal extensions to the system, allowing ad-hoc views and direct manipulations, rather than hacked in via special compiler modes.
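Capturing the read set is enough to reproduce a deterministic transaction offline. A sketch, again assuming the read/write-as-parameters style; `record` and `replay` are illustrative names, not a proposed API.

```python
def record(txm, read, write):
    """Run a TXM while logging every read, so the run can be replayed."""
    log = {}
    def logged_read(var):
        log[var] = read(var)
        return log[var]
    txm(logged_read, write)
    return log

def replay(txm, log):
    """Re-run a TXM against a captured read log, collecting its writes."""
    writes = {}
    txm(log.__getitem__, writes.__setitem__)
    return writes

env = {"x": 2, "y": 0}
def doubler(read, write):
    write("y", read("x") * 2)

log = record(doubler, env.__getitem__, env.__setitem__)
print(replay(doubler, log))  # {'y': 4} — deterministic given the same reads
```

Because the transaction is deterministic, the captured log is a complete, compact reproduction of the failure environment: attach it to an error report and any machine can replay it.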
TXMs are simple and familiar: Asynchronous effects through variables are an excellent fit for network access, task queues, direct memory access, publish-subscribe, and other common software design patterns. We can leverage conventional object-oriented interfaces or opaque data types to help protect system invariants. It’s easy to explain and understand TXMs, and to integrate them with real-world systems.
Hierarchical TXMs (HTXMs) are an extension to TXMs to support task-parallel concurrency and multi-stage programming while preserving liveness.
The idea for HTXMs is to form a read-fork tree with read-write loops at the leaves. After a change to an observed variable, we might rebuild a branch of the tree, or even the whole thing, thereby preserving liveness. But insofar as we stabilize the tree (via design patterns, caching and buffering, etc.), computation will mostly occur at the leaf nodes.
To support HTXMs is not difficult: we simply add a `fork` operation to our transactional API, which indicates we should attempt a transaction after we commit. Forked transactions can inherit the read assumptions of the parent. Then read-fork operations can be optimized. (This does awkwardly permit expression of mixed read-write-fork, but we could handle that arbitrarily or treat it as a runtime error.)
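A naive sketch of the `fork` extension: each transaction receives a `fork` operation that schedules a child transaction to run after the parent commits. This toy version reruns the whole tree from the root on every call and ignores caching and read-assumption inheritance; all names are illustrative.

```python
def run_tree(root, env):
    """Evaluate an HTXM tree once: run each node, then its forked children."""
    pending = [root]
    while pending:
        txm = pending.pop()
        forks = []
        txm(env.__getitem__, env.__setitem__, forks.append)
        pending.extend(forks)        # children run after the parent commits

# Example: a read-fork root that dispatches on a mode variable; the leaf
# is the only node that writes.
def leaf_counter(read, write, fork):
    write("count", read("count") + 1)

def root(read, write, fork):
    if read("mode") == "on":
        fork(leaf_counter)           # pure read-fork: no writes at this node

env = {"mode": "on", "count": 0}
run_tree(root, env)
print(env["count"])  # 1
```

A production runtime would memoize the read-fork nodes: so long as `mode` is unchanged, only `leaf_counter` needs to repeat.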
Conveniently, a single HTXM can model an entire set of TXMs. We can also hierarchically embed HTXMs.
The many advantages of HTXMs come at a cost: rework. When a transaction aborts after some effort, we have wasted at least part of that effort. The premise for TXMs as a practical model is that the amount of rework can be controlled to an acceptable level. Design patterns can help a lot to focus a system on productive work. We can also develop heuristic schedulers that are sensitive to read-write conflict histories to reduce rework due to concurrency conflicts.
Caveat: TXMs are unsuitable for distributed processing. I assume network access is modeled as an effect. If users wish to model PAXOS or CRDTs, they should do so explicitly. (As an exception to prove the rule, TXMs over RDMA might be acceptable.)