Awelon Progress Report IV

I spent most of March attempting different approaches to improve AO’s performance. This isn’t what I set out to do. At the end of February, I wrote: “My plans for the next month are to start building functions for data structures: maps/records, sequences, streams.” And I did make a little progress in that direction – but not nearly enough to pretend I met my goals. Performance remains a pain point for me; I really wish to start developing applications within AO – such as an IDE, so I can ‘dogfood’ the development of AO. But that’s simply impractical if the performance of the resulting application is inadequate. I fell to the siren song of performance pursuits.

Early in the month, I finally cleared a few tabs from my browser, reading Laurence Tratt’s excellent article, Fast Enough VMs in Fast Enough Time (highly recommended to anyone developing a PL), which describes how PyPy can construct a JIT compiler from an interpreter. I tried my hand at this for ABC, but I suspect I’ll need a more explicit memory model to meet the constraints of PyPy’s Restricted Python (RPython). I might get back to this later. At the moment, the PyPy interpreter for ABC has approximately the same performance as the aoi interpreter in Haskell.

I had more success with a second attempt:

I’ve implemented a simple AO to Haskell translator. From the AO package, `ao dict2hs` will convert the entire dictionary into one Haskell module, AODict, using a simple name mangling scheme. This relies on an externally provided AOPrelude module for definitions of the ABC primitives. This is a very naive, direct translation, but the AOPrelude can (and does) include GHC rewrite rules. The resulting (profiled) Haskell object file is about 12 megabytes for 1141 words, or about a third of that for non-profiled code. (The raw Haskell code is 242kB, where the AO dictionary is 144kB. Much of this consists of documentation (`doc.`) words.) The only tricky bit was ensuring AO can use Haskell’s TCO, which I eventually achieved by specializing the `$c` sequence. This development comes with a few new utilities for running AO tests and executing AO programs (`aoTest` and `aoExec`).
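For a flavor of the approach, here is a hypothetical, self-contained sketch in the same spirit; the actual `ao dict2hs` mangling scheme and the real AOPrelude differ in their details:

module AODictSketch where

-- the whole AO environment as a single Haskell value
data V = N Rational | P V V | U deriving (Show)

-- stand-in for an AOPrelude primitive: add the top two numbers
prim_add :: V -> V
prim_add (P (N a) (P (N b) e)) = P (N (a + b)) e
prim_add v = error ("add: unexpected input: " ++ show v)

-- number literals push onto the environment
lit :: Rational -> V -> V
lit = P . N

-- an AO definition such as `increment = 1 .add` mangles to a top-level
-- function; AO's left-to-right juxtaposition is reversed composition
word_increment :: V -> V
word_increment = prim_add . lit 1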

The performance of this translation seems pretty good for “straight-line” code. For example, `aoTest` and `ao test` run the same tests, but the former runs two orders of magnitude faster (0.012s vs. 1.38s). I was very excited by these numbers. Sadly, they didn’t generalize. Upon implementing a simple loop (`@bench.repeat100k 0 [4 .add] 100000 repeat 400000 assertEQ`) I discovered a speedup factor closer to 6 (from 2.42s to 0.40s). Further, I suspect that the improvement will prove very marginal for deep metaprogramming (where partial evaluation and JIT become more essential).

Despite the limits of this technique, it may be enough to begin dogfooding further development of AO.

So, that’s my plan this April. I’ll need to develop the `aoExec` powerblock a fair bit to perform useful side-effects, but that should not take long. I will need to test the `{&async}` annotation, which I implemented in the AOPrelude. I’m hoping, within the next month or two, to develop a few simple web applications mostly within AO, and eventually shift development of AO from filesystem + gedit into a persistent, wiki-like service. As AO’s performance further improves, more domains should open up to it.


Awelon Progress Report III

Another month gone, so quickly. What the heck did I do? It’s difficult to recall, but my git history is helping.

It seems my major accomplishments this month were:

  • implemented automatic unit testing for the AO dictionary (runs all `test.` words)
  • implemented an automatic type analysis based on partial evaluation
  • implemented a few list processing functions (e.g. nub, partition, sort, map, each)
  • simplified and optimized fixpoint combinator (15 ABC prims, from 80)
  • cleaned up the conditional behavior words (e.g. `if` and `if_`)
  • partially developed an incremental processes model (based on `µP.[a→(P*b)]`)

Not too bad, but I can’t say I’m entirely satisfied. I had an aha-moment idea for direct compilation of ABC into a Haskell monad (to improve bootstrap performance) but that ran into a wall. After picking up the pieces and removing the debris, all I had left of that effort was the typechecker. I also ended up removing adverbs, after experimenting with the idea and finding I needed more specialization for context and category (e.g. lists vs. streams vs. automata).

Ah, six steps forward, two steps back. That’s progress. :-/

My plans for the next month are to start building functions for data structures: maps/records, sequences, streams. And to further fill out the set for lists. I hope to get far enough along that I can start modeling grammars in April. At the moment, I have 844 words in AO, of which slightly more than half (440) are documentation (`doc.`), tests (`test.`), or assertions (`id.` or `eqv.`). I don’t know how many words I’ll need to bootstrap an ABC or AO compiler – quite a few, I imagine, but hopefully fewer than 10k. Regardless, I need to pick up the pace.

More subjectively, I quite enjoy programming in my new language.

The REPL plus automatic tests mix is very effective. I’d still like some automatic visualization and continuous feedback tools, and I very strongly look forward to rendering colors/styles instead of suffixes/prefixes for large words. But I’m not desperate for these features. AO has been delightful in its simplicity, security, safety, and flexibility. The first-class nature of the environment gives me a much greater feeling of ‘control’ than I’ve felt in other languages. Going back to Haskell is starting to feel a little painful.

My main pain point for AO right now is the awful interpreted performance. I’ll be a lot happier with AO when I can compete with other languages for performance.


Substructural Types in ABC

Awelon Bytecode (ABC) has two operators, `k` and `f`, to support modeling of substructural constraints. In ABC, `k` will mark a block no-drop (relevant), and `f` will mark it no-copy (affine). If both marks are applied, the block is effectively linear. These properties can be enforced dynamically if necessary (i.e. halt the program upon attempted violation), but the intention is that they should (as much as possible) be checked statically.

A ‘block’ is simply a first-class function in ABC. Though, unlike first-class functions in many languages, a block in ABC always has a serializable form as a finite sequence of ABC – e.g. `[rwrzw+l]` will add two numbers on the current stack of AO’s standard environment. The primary operations on blocks are to compose two blocks, and to apply a block to a value.

o :: (y→z) * ((x→y) * e) → (x→z) * e -- compose (concatenate blocks)
$ :: (x→y) * (x * e) → (y * e)  -- apply block to just `x`; hides `e`
k :: (x→y) * e → (x→y)' * e -- mark block no-drop (relevant)
f :: (x→y) * e → (x→y)' * e -- mark block no-copy (affine)
^ :: (Copyable x) ⇒ x * e → x * (x * e) -- copy
% :: (Droppable x) ⇒ x * e → e -- drop

Sadly, I don’t have a good pseudo-type-descriptor notation for affine and relevant attributes at the moment. Suggestions are welcome.

Anyhow, the `no-copy` property simply means that the copy operator `^` may not be applied to the value. And the `no-drop` property means that `%` cannot be applied. If we mark a first-class addition function relevant, it would serialize simply as `[rwrzw+l]k`. When composing blocks, the composite inherits the no-drop and no-copy properties from the operands, i.e. if we compose affine with relevant the result is linear. The only way to remove a relevant or linear block from the environment is to apply it using `$`.
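To make the dynamic enforcement concrete, here is a minimal sketch in Haskell, my own modelling rather than the ABC implementation:

-- blocks carry their substructural marks; copy and drop check them
data Block a b = Block
  { run    :: a -> b
  , noCopy :: Bool  -- affine: the copy operator ^ must not apply
  , noDrop :: Bool  -- relevant: the drop operator % must not apply
  }

-- composition inherits both marks: affine composed with relevant is linear
o :: Block b c -> Block a b -> Block a c
o g f = Block (run g . run f) (noCopy g || noCopy f) (noDrop g || noDrop f)

-- dynamic enforcement: halt the program upon attempted violation
copyB :: Block a b -> (Block a b, Block a b)
copyB b | noCopy b  = error "copy applied to a no-copy (affine) block"
        | otherwise = (b, b)

dropB :: Block a b -> ()
dropB b | noDrop b  = error "drop applied to a no-drop (relevant) block"
        | otherwise = ()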

Why Substructural Types?

Substructural properties are very expressive for enforcing various kinds of structured behavior independently of a structured syntax. This is a useful separation of concerns, which leads to simpler code and reasoning.

By ‘structured behavior’ I mean for example enforcing that some operation occurs ‘at least once’ or ‘at most once’ or ‘exactly once’ or ‘before/after/between some other operations’. A typical example might be enforcing that a resource (e.g. an open file handle) is released exactly once and not used after release. But we can look at much richer examples, e.g. protocols, games, trade models, or workflows.

Conventional languages have a relatively rigid structured syntax. Fortunately, clever developers have consistently been able to leverage this structure to enforce, or at least discipline, higher level structured behaviors. For example, if we want to enforce that a file is released, we might develop a `withFile` function of (pseudo-Haskell) form:

withFile :: (MonadIO m) ⇒ (FileHandle → m a) → (FilePath → m a)

And this `withFile` function will acquire the file resource, apply the given function, then release the file resource. We could also develop a variation with MonadException to properly bracket the action for a monad that models exceptions.
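A minimal sketch of such a `withFile`, specialized to IO and using the standard `bracket` from Control.Exception (named `withFile'` here to avoid clashing with System.IO):

import Control.Exception (bracket)
import System.IO (Handle, IOMode(ReadMode), hClose, openFile)

-- acquire the handle, apply the action, release exactly once, even if
-- the action throws an exception
withFile' :: FilePath -> (Handle -> IO a) -> IO a
withFile' path action = bracket (openFile path ReadMode) hClose action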

Unfortunately, the `withFile` behavior is still not entirely safe! In `withFile action "foo.txt"`, our `action` might fork another thread to process the handle, or install a callback or return a closure or exception that contains the handle. In these cases, we might accidentally use the file handle even after the file has been released. The `withFile` pattern requires some discipline to use correctly, and implementation knowledge of the `action`, both of which tax our limited mental budgets.

So what can we do?

Well, one option is to develop a syntax – or EDSL (e.g. a monad or category) – that can control aliasing and enforce safe use of the file handle and similar `(Acquire Use* Release)` pattern resources. My intuition is that this wouldn’t be especially difficult for a single resource at a time – e.g. executing `action` within some variation of the StateT or ReaderT monad transformer should work well enough. Leveraging algebraic effects (or heterogeneous lists, or dependent types) we might effectively mix operations on multiple resources.

Developing an EDSL or structured syntax takes its own form of discipline, but it only needs to happen once per resource-pattern. It’s a decent approach if you wish to pursue it in a closed system. However, it can be difficult to integrate such an EDSL with independently developed frameworks.

Substructural types offer a simple – almost simplistic – alternative that greatly reduces the need for structure in the syntax. If we want an `(Acquire Use* Release)` pattern, we can almost model this directly. Example API:

type File (affine)
type OpenFile (linear)
type Command (regular)
acquire :: File → OpenFile
use :: OpenFile → Command → (OpenFile * Result)
release :: OpenFile → File

With this API, we can easily reason that an open file cannot be forgotten, that it is either closed or accessible. More importantly, our reasoning is context-free, beyond assuming that our context understands and enforces substructural types. We aren’t constrained structurally – e.g. we can easily use an OpenFile together with callbacks and coroutines or mashup with other ad-hoc resources, so long as the protocols are compatible. Additionally, we can reason that a file is not used by more than one thread, assuming we constrain how the `File` type is acquired in the first place (e.g. via a FileSystem object).
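For comparison, GHC Haskell's LinearTypes extension can approximate this API; a minimal sketch, with illustrative stand-in constructors and bodies (note that `%1 ->` enforces exactly-once use, a slightly stronger discipline than affine):

{-# LANGUAGE LinearTypes #-}

newtype File     = File Int        -- affine in spirit: use at most once
newtype OpenFile = OpenFile Int    -- linear: must be released exactly once
newtype Command  = Command String  -- regular data: copy and drop freely
newtype Result   = Result String

acquire :: File %1 -> OpenFile
acquire (File n) = OpenFile n

use :: OpenFile %1 -> Command -> (OpenFile, Result)
use (OpenFile n) (Command c) = (OpenFile n, Result ("ran: " ++ c))

release :: OpenFile %1 -> File
release (OpenFile n) = File n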

In ABC, objects such as files would be modeled using blocks, to which we can pass messages/commands. When serialized, an open file might look like: `[{openFile:42}]kf`. Text within {} braces is called ‘capability text’ and cannot be forged from within ABC. In open systems, capability text would generally be encrypted or include an HMAC to ensure security.

Files have a relatively simple, albeit common, life cycle pattern.

But much richer workflows, protocols, even games are possible. Substructural types combined with basic structured types and value sealing (or ADTs) are capable of enforcing any behavior protocol a context-free grammar can describe. If we also have dependent types, we can address context-sensitive behavior patterns. Of course, substructural types don’t hinder modeling of monads, categories, EDSLs, etc., which still serve a useful role for staged computing.

Substructure for Concurrency

ABC is a non-strict, implicitly concurrent language/bytecode. Relevantly, if you apply some ABC of the form:

-- apply two functions in parallel! (cf. '***' in Control.Arrow)
rzwz$wzwr$wl :: ((x1→y1)*(x2→y2)) * ((x1*x2) * e)  → ((y1*y2)*e)

Then we can compute y1 and y2 independently and in parallel, even if the functions are effectful. This property is also called causal commutativity – i.e. two operations commute if they don’t have a direct data dependency. Substructural types in ABC serve a significant role in ensuring deterministic behavior, by partitioning resources so that parallel subprograms never operate on the same resource. A rich resource partitioning model can be expressed using substructural types.
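In Haskell terms, causal commutativity is what licenses a `par` spark here; a rough sketch using the parallel package:

import Control.Parallel (par, pseq)

-- no data dependency between y1 and y2, so the two computations
-- commute and may be evaluated in parallel
parPair :: (x1 -> y1) -> (x2 -> y2) -> (x1, x2) -> (y1, y2)
parPair f g (x1, x2) = y1 `par` (y2 `pseq` (y1, y2))
  where y1 = f x1
        y2 = g x2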

This isn’t a new idea. Other languages with similar properties include ParaSail, Clean, and Rust, and the languages that influenced them.

ABC also requires a related property for any effects model, which I call spatial idempotence: if the same function is applied twice to the same argument, it must return the same result and have no additional effect. Mathematical idempotence: ∀f . f f = f. Spatial idempotence: ∀f . dup [f] apply swap [f] apply = [f] apply dup. Together, spatial idempotence and causal commutativity offer ABC many of the equational reasoning benefits of pure code. (Basically, I get everything but dead-code elimination: I cannot eliminate a potentially effectful application simply because I ignore the result.) However, in practice, it is not easy to enforce spatial idempotence for arbitrary side-effects. Substructural types also play a role here: the spatial idempotence constraint cannot be violated (and is thus satisfied) if we forbid ‘dup’ for effects on otherwise problematic resources. Substructural types are, hence, essential if ABC is to work with conventional imperative effects.

(ABC is designed primarily for a new paradigm, RDP, for which spatial idempotence and causal commutativity are far more natural. Requiring these properties universally – even for programs in the imperative paradigm – is a boon for independent optimizers and analysis tools.)

Linear Structure for Simplified Memory Management

Memory is also a resource. Some people have pursued linear types to help control use of memory resources, e.g. to eliminate need for garbage collection without creating an undue burden of detail and discipline on the developer. Henry Baker’s paper on Lively Linear Lisp comes to mind, as does the Cyclone language with reification of memory regions.

Simplified memory management is NOT a significant motivation for ABC’s linear structure. But, as an additional benefit, it isn’t bad. :)

If ABC’s copy operator `^` is implemented using a deep-copy under the hood, then we really can eliminate garbage, and ABC’s primitive functions can be implemented using mutators. If copy is instead implemented using an aliased reference, then we’ll still need garbage collection and more conventional allocation of persistent data structures. I’m interested in exploring mixed/hybrid approaches based on global program analysis, and also the possibility of using separate heaps whenever I parallelize large computations.

ABC (and the thin human language layered above it, AO) tend to encourage linear programming idioms, minimizing use of copy and drop and favoring moves and data shufflers. That said, it isn’t especially difficult to statically optimize copy and drop, e.g. by eliminating dead copies.

As an aside: structural typing lends itself more readily than nominative typing to substructural constraints. If we have generic nominative types like `struct Point[a] { x : a, y : a, z : a }`, it becomes difficult to pick it apart to operate on just `x` or just the `x y` pair. Structural types allow us to pick apart objects, operate on the pieces, and reassemble them, maintaining substructural properties at every intermediate step – i.e. correctness by construction instead of by analysis. Similar can be said for parameters to functions, and why tacit programming tends to serve well for substructural types.


Awelon Progress Report II

In the last month, I’ve had only three significant accomplishments.

I feel my biggest accomplishment was implementing the fixpoint function, a Z combinator, which becomes the basis for expressing recursive loops. The code for the Z combinator is not difficult – just `[swap dup bind swap compose] bind dup apply` in AO(1). But I found it plenty intimidating! It paralyzed me for days. I’m very glad to be past it. In any case, using the fixpoint function is relatively easy: `[foo] fixpoint` evaluates to `[[foo] fixpoint foo]`.

Shortly after implementing fixpoint, I implemented a few loops.

I quickly ran into rather pitiful performance problems with my REPL/interpreter. A simple loop such as `0 [3 +] 10000 repeat` – i.e. a loop that adds 3 a total of 10k times to an initial value of 0 – was taking 6 seconds. Further, a larger loop – around 150k steps – would bust the default Haskell stack.
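For intuition, a rough Haskell analogue of the fixpoint-plus-repeat combination; the names here are mine, not the AO dictionary's:

-- Z-combinator-style fixpoint: `[foo] fixpoint` behaves like
-- `[[foo] fixpoint foo]`, i.e. the block receives its own fixpoint
fixpoint :: ((a -> b) -> (a -> b)) -> (a -> b)
fixpoint f = f (fixpoint f)

-- the loop `0 [3 +] 10000 repeat` in this rendering:
repeatN :: Int -> (a -> a) -> a -> a
repeatN n0 f = fixpoint step n0
  where step self n x
          | n <= 0    = x
          | otherwise = self (n - 1) (f x)

-- repeatN 10000 (+ 3) 0 evaluates to 30000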

So my second accomplishment was performance related.

The same loop will now run in 1.3 seconds if I leave stack-frame debugging on, or 0.3 seconds if I disable stack frames. This is still slow, but 20x less slow than before. I’m a little upset with this – I was aiming for a 100x improvement. But it should be enough for bootstrapping. I also implemented a relatively trivial tail-call optimization, i.e. such that blocks ending in `$c]` will not consume a stack frame. (This usually corresponds to ending a block action with `inline`, whose ABC definition is `rvr$c`.) I’ve run loops of 10 million steps in about five minutes.

My third accomplishment is an experimental development of inflection with adverbs (also discussed here).

This idea, in a simple example, is that I can map a word `foo` to every element in a list by writing something like `foo\*`. Or to every element in a list-of-lists (e.g. a matrix) using `foo\**`. This is as opposed to explicitly quoting the word – e.g. `[[foo] map] map` – which tends to hinder refactoring. Developers can define `\*` or any other inflection as words in the AO dictionary.

I don’t know whether I’ll keep inflections. I would like to see how they fare across a few significant projects. My intuition is that a small handful of inflections can greatly reduce noise in AO code.

(1) the Z combinator (fixpoint function) in ABC (weakly simplified):

[rwrwzwlw^zlw'l[l]wrzwowrzowrzol]
wrwzwlw'l[l]wrzwowrzo^w
vrwvvrrvrrz$wlcllccwlcl

Addendum 6 February: I’ve been searching for simpler, faster fixpoint implementations. I’ve found a few!

AO: [dup bind] swap compose dup bind (pre-compose)
ABC: [r^zlw'l[l]wrzwowrzol]wrzo^zlw'l[l]wrzwowrzol

AO: [%^'wol] %rwrzo %^'wol (specialize dup-bind)
ABC: [^'wol]wrzo^'wol

These respectively offer ~10% and ~17% performance benefit in the test loops I’ve been running in the interpreter, suggesting the bottleneck is now the loop body. That’s a good thing. :) The simpler fixpoint operation should also be easier to recognize for dedicated optimizations, later on.


Awelon Project Progress Report

The Awelon project is now underway. In this early phase, Awelon consists of just two parts: Awelon Bytecode (ABC) and the Awelon Object (AO) language. October and November were spent hammering out these design documents. Actual coding began early December.

As of this morning, I have a simple AO-to-ABC compiler, an ABC interpreter, and an AO interactive REPL (with tab completion!), all written in Haskell.

That should be enough to begin bootstrapping. In January, I expect to write more AO code than Haskell. My forward agenda includes typechecking, automatic testing, compiling ABC to native code, and implementation of a wiki-based integrated development environment (UI via web app).


Semi-Static Structure

Lisp, Scheme, and a few other languages represent most structure using pairs of values. In general, these languages are very dynamic: the structure of any value can be introspected at runtime using queries of the form “are you a number? are you a pair?” For many use-cases, this dynamism is welcome. However, it can also hinder static analysis for safety or performance. To recover performance, dynamic languages often implement a second structural concept, such as arrays. Sadly, having both pair and vector concepts seems redundant and complicates generic code.

Recently, during development of Awelon Bytecode (ABC), I found a simple, elegant solution: unit type as a barrier to introspection. By doping a data structure with a few unit values, developers can crystallize structures that must have a fixed or statically known number of elements.

This concept came about when contemplating the nature of unit type `1` and void type `0`. My thoughts at the time were:

Unit is supposedly the identity type for a product. Ideally, `(a * 1)` has no more information than `a`. Similarly, void is the identity type for a sum, i.e. `a + 0` has no more information than `a`. However, if I allow introspection, then it is clear that `1` and `(1*1)` and `((1*1)*1)`, etc. could be distinguished, in which case the structure would carry information. Is there any way to have structure without information? Hmm…

Ultimately, I only need to avoid runtime information. I can typefully forbid introspection of unit-type, such that asking whether `(1 * 1)` is a pair will return the affirmative, but asking whether `1` is a pair is a type error. Similarly, unit value may only be compared with unit value, and is always equal. Essentially, there is no way to compute the number or placement of unit values at runtime… without statically knowing the answer.

Lisp and Scheme lists, then, would be modeled by terminating them with a number (perhaps 0). But if we want to model a fixed-width vector or fixed-size matrix in a manner the optimizer can easily recognize, we could terminate those with unit values. Our ability to introspect structure ultimately depends on knowing where the unit values might or might not be placed.
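A minimal sketch of this introspection barrier, modelled in Haskell (my own rendering of the idea):

-- dynamic values: numbers, pairs, and unit
data V = N Double | P V V | U

-- runtime introspection: numbers and pairs answer, unit is a type error
isPair :: V -> Bool
isPair (P _ _) = True
isPair (N _)   = False
isPair U       = error "type error: unit does not admit introspection"

-- a Lisp-style list terminates in a number, so its length is computable
-- at runtime; a vector terminated in U instead halts with the type
-- error, so its size must be statically known
listLen :: V -> Int
listLen (P _ tl) = 1 + listLen tl
listLen (N _)    = 0
listLen U        = error "type error: unit does not admit introspection"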


UI as an Act of Programming

I’ve been thinking quite a bit, recently, about the relationship between UI and programming. A programming language is certainly a user interface – but usually a poor one from a human factors standpoint. And a UI can be considered a PL – a way for a human to communicate to a computer… but is usually a poor one with respect to modularity, composition, safety, extensibility, and other properties.

What I would like to see is a unification of the two. To me, this means: We should be able to take all the adjectives describing a great PL, and apply them to our UI. We should be able to take the adjectives describing a great UI, and apply them to our PL. Manipulating the UI is an act of programming. Programming is an act of UI manipulation.

Conal Elliott’s concept of Tangible Functional Programming seems related to what I want. Bret Victor’s Drawing Dynamic Visualizations seems related to what I want.

But neither of those is general purpose.

I need something that works for controlling robots, developing video games, reinventing browsers and web services, interacting with a 3D printer, implementing an operating system or a device driver. I want support for open systems. Different problem domains can have different domain-specific languages – which should correspond to domain-specific UIs. However, these DSLs must integrate smoothly, and with a common paradigm for basic communication and data manipulation. Can we do this?

In August, I developed a simple but powerful idea: a user-model – hands, navigation – represented within the static definition of the program. This, coupled with the notion of ‘rendering’ the environment type, has me thinking UI/PL unification is feasible. The user-model is not a new idea. Neither even is the idea of integrating the user-model with the program. It has been done before: e.g. in a MOO, or in ToonTalk.

The only ‘new’ idea is taking this seriously, i.e. with an intention to support open systems, down-to-the-metal static optimizations, a basis for UI and mashups, and general purpose programming. I think the different mindset would have a significant impact on the design.

Most serious PLs treat the programmer as a non-entity – ‘above’ the program with pretensions of omniscience and omnipotence. Even graphical PLs do this. As a consequence, there is no direct interaction with higher level types, software components, dataflows, composition. There is no semantic ‘clipboard’ with which a programmer can wire variables together. Instead, interactions are expressed indirectly. We look under-the-hood, investigate how components work and from where they gather their data; we perhaps replicate some logic or namespace management. The barrier between syntactic and semantic manipulation is painful, but is not a crippling issue for “dead programming” languages.

But UIs are live. They have live data, live values, and there is a great deal of logic underlying the acquisition and computation of those values. They have live buttons and sliders, which may correspond to capabilities controlling a robot. In many senses, peeking ‘under-the-hood’ for UIs should correspond to reflection in a PL – an alarming security risk and abstraction violation, not something we should need to depend upon. Instead, we should be able to treat time-varying data, buttons, sliders as semantic objects – signals, parameters, functions, capabilities.

Users can navigate, grab, copy, create, integrate. Users can construct tools that do so at higher levels of abstraction. The syntax becomes implicit – the gestures and manipulations, though perhaps corresponding to a stream of words in a concatenative language.

To unify UI and PL, we need our programmers to be part of our PL, just as our users are part of our UI. We simply need to do this while taking the language and UI design seriously.


Staging is Simpler than Type Checking

I’m taking an unusual approach to types with Awelon, one that I hope will work better than traditional approaches. Instead of “type checking”, every Awelon program represents a compile-time program. If this compile-time program completes without errors, then the runtime behavior will be type safe.

The runtime behavior of Awelon is described by an arrow representing a reactive demand programming behavior. If you aren’t familiar with arrows, the TLDR concept of arrows is: “Oh, someone formalized box-and-wire programming. Meh.” Arrows are more expressive than simple pipelines since we can branch dataflows and recombine them, and arrows are quite useful for many problems. But, as a general purpose programming model, arrows tend to have the same weakness of box-and-wire paradigms: they become very unwieldy for certain real-world problems or at large scales. It’s all those little exceptions and messy real-world problems – configuration tweaks, singleton hub-nodes, and shotgun policies – that make a mess of the grand pipe-dream.
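For readers unfamiliar with arrows, the branch-and-recombine capability looks like this with Haskell's Control.Arrow (a small illustrative example):

import Control.Arrow (Arrow, arr, (&&&), (>>>))

-- branch one dataflow into two arrows, then recombine the results
avgOf :: Arrow a => a Double Double -> a Double Double -> a Double Double
avgOf f g = (f &&& g) >>> arr (\(x, y) -> (x + y) / 2)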

Fortunately, compile-time metaprogramming can enable these box-and-wire paradigms (minus the boxes and wires) to scale much larger. The more painful non-local wiring logic can be automated, and hub nodes are easily integrated without messy wire crossings. Configuration details can be represented as scoped environment variables, enabling different configurations for different subprograms.

What Awelon does is use its compile-time metaprogram as an implicit typecheck.

Awelon’s compile-time behavior is similar to (but completely different from) FORTH, Factor, and other concatenative languages. Developers do manipulate a stack of objects: they can add text and numbers or blocks of code, and operate on them in all the expected ways. If developers wish, they can use a number and iterate a function that many times. But there are also ‘runtime’ typed values – e.g. future signals representing time-varying mouse position. Developers can operate on future signals, e.g. by specifying how they will be handled, or applying functions to combine or transform them. But the results of these operations are not visible to developers.

In addition to simple numbers, text, simple blocks, and runtime values, Awelon supports more exotic types, mostly at compile-time. There is the uniqueness source. There can be unique values constructed from this source – e.g. sealer/unsealer pairs that operate at compile-time, or GUIDs. There will be affine, relevant, and linear types to help model requirements and responsibilities. I am considering fractional and negative types to model promises or futures (if I can figure out how to model them without hindering visualization; maybe as a growing set of type-constraints).

A rather interesting property of RDP and Awelon is that runtime values have spatial-temporal information: every runtime signal has a partition (a logical location) and a latency value (a relative time that the signal becomes available). To combine two runtime signal values – e.g. to test mouse position against a window’s box size and location – requires that the signals have the same time and space coordinates.

These partitions can also be used as a basis for security. Some partitions provide ambient authority. Some support capability-security by generally being confined but where some sealed behaviors can reach into the parent ambient partition. Some partitions may truly be confined, to enforce isolation if we want it. (Awelon will not do ‘sandboxes’, i.e. partitions with a limited subset of ambient authority. Those are just horrible.) An interesting feature I’m considering is latency-constraints in the type model: e.g. capabilities that expire if they’re not used within so many milliseconds. This could be useful for controlling how distributed systems communicate and how code is moved.

Awelon is very richly typed. Anyhow, Awelon’s “type checker” is simply the execution of this compile-time program. The compile-time is picky about types, is relatively simplistic, and doesn’t perform any coercions. But, if no errors are raised, and if the compile-time program terminates, the result will be a safe runtime behavior. This is safety by construction.

Fortunately, for the sanity of developers, Awelon does support type introspection, so it is possible to create programs that adapt to their inputs. With careful use of fractional types, it may also be possible to adapt to output types.

Unlike FORTH and most concatenative languages, Awelon’s compile time is effect-free and has a very rich ‘environment’ type. There is a stack. But Awelon also provides a whole set of named stacks that provide scoped configuration variables, singletons or hubs, compile-time databases, separate tasks or workflows, and so on. Awelon can easily represent ad-hoc complex structures – e.g. multi-media documents, diagrams, DSLs, ADTs, objects, zippers, lenses, and traversals – in its compile-time behavior. (Since it’s all in the compile-time, it is also readily visualized or rendered by an IDE.)

Runtime Flexibility and Types

Another issue with arrows is that they’re inflexible at runtime. There are ways to regain flexibility, such as:

  • collection-oriented behaviors, i.e. applying a single arrow to each element in a runtime vector or set
  • dynamic behaviors, i.e. providing an arrow as a runtime input to specify the runtime behavior of a static arrow

I wonder, in practice, whether these are truly needed. A great deal of flexibility can be achieved using static structures – e.g. using four tiling layers and a fixed set of tiles, SNES developers create very dynamic user interfaces. Further, RDP is carefully designed for runtime update: when we need more ‘flexibility’ it would often be easy to achieve it by extending the program code.

OTOH, RDP is also designed so that dynamic behaviors are updated just the same as their host application in a staged, metacircular manner. I see no reason to waste that. For Awelon, I plan to support both collection-oriented and dynamic behaviors. (Static collection-oriented behaviors need no extra support, so primitives only address the dynamic vectors or sets.) Dynamic behaviors in Awelon are second-class – very constrained on their input and output types. But even with those limits, they are potentially quite useful.

One feature Awelon currently lacks is the ability to prove that a dynamic block will have the right type for a dynamic behavior without actually executing the associated compile-time code. We have safety-by-construction, but we don’t have safety-before-construction that can be validated without basically repeating the construction. I have a few ideas, but I am still thinking about how to best address this.


Source-Stable Uniqueness

Live programming, continuous deployment, orthogonal persistence, debugging – a quality these have in common is that we want state to remain stable across external interference, i.e. so we don’t lose important work. One mechanism to achieve this is to externalize state, e.g. keep it in a database, or carefully save and restore in a manner that won’t break across version changes. But many developers dislike external state: in many cases it is nice to know we have exclusive control over state, or that we can hide implementation details (like how state is organized) from other subprograms. Fortunately, linear or uniqueness types are simple, powerful tools to regain exclusivity and implementation hiding.

But uniqueness and linear types don’t automatically promise stability across code changes. A naive application of uniqueness types might simply assign each ‘new’ state a unique, incremental name (1, 2, 3, 4, etc.). But this assignment would not be stable to changes that inject stateful code, or rearrange code, or change how many names a subprogram uses. We need to do better.

The simplest basis for stable names I know is the filesystem-tree. So that is what I use.

The uniqueness source, U!, models a directory structure. Developers can fork a new uniqueness source – a new subdirectory – but it must be explicitly named. Further, upon doing so, that name cannot be reused by the parent directory.

fork :: (Text * U!) ~> ((U! * U!') | NameAlreadyUsedError)
new :: ((Text * Model) * U!) ~> ((State of Model * U!') | NameAlreadyUsedError)

The forked subdirectory structure ensures that subprograms do not interfere with one another, essentially ‘chrooting’ each subprogram. The Text naming each subdirectory provides the stability – i.e. each subprogram is uniquely named in source, and unless those names change the program will be stable to reorganization or injecting new subprograms.
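A minimal sketch of such a source modelled in Haskell, my own rendering (Awelon would additionally enforce the affine/linear discipline on U! in its type system):

import qualified Data.Set as Set

-- a uniqueness source: its path in the tree, plus the child names
-- already spent at this node
data U = U { path :: [String], used :: Set.Set String }

fork :: String -> U -> Either String (U, U)
fork name u
  | name `Set.member` used u = Left ("NameAlreadyUsedError: " ++ name)
  | otherwise =
      Right ( U (path u ++ [name]) Set.empty          -- fresh child source
            , u { used = Set.insert name (used u) } ) -- parent, name now spent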

Note: It would not be difficult to represent fork/new in the type system, i.e. to statically enforce uniqueness. Haskell, C++, C#, all have expressive enough type systems to do it… but we really need linear or affine types for U!.

There are a couple of remaining issues.

First, “upgrades” to the state models: implementation hiding doesn’t help if we get chained to the past. Fortunately, this can be addressed if our `Model` for state can explicitly contain information on how to update/convert from prior state models, and each model includes a version identifier.

Second, extension, introspection, and observation: it is sometimes useful to bypass implementation-hiding in a secure way, i.e. so that the larger subprogram can extend subprograms in ways the original developers did not foresee. This might be achieved by introducing weak identifiers to reach into subprograms. These might be observer-only, or allow constrained interference based on the state model.

forkWeak :: (Text * U!) ~> (UW * U!)

With forkWeak one would not be able to interfere with parent or sibling processes because one doesn’t have the parent’s uniqueness source. But one could extend, audit, etc. the subprograms, and debugging would have a more formal explanation within the language.

In Awelon, I will use a uniqueness source, primarily in a static manner, to uniquely bind external state to subprograms at compile-time. I plan to support the sort of extensibility I mentioned here. Awelon programmers will thus achieve all the convenience of ‘local state’ (such as potential to embed state directly in a live programming ‘canvas’) without any of the poison.


Multi-Stack Environment part 2

Having experimented with the multi-stack environment, I’m now contemplating the elimination of anonymous stacks (stepLeft, stepRight, newStackRight, remStackRight) and instead switching to a system that only has the current stack, one hand, and a list of named stacks.

The motivations for this:

  • A simple stack/hand combo, with “roll” (of current stack) and “juggle” (of hand), seems very expressive and convenient for the cases that are ‘near’ enough for short-term memory of relative structure. Anonymous stacks don’t introduce much utility in-the-small.
  • The idea of multiple stacks for multiple tasks is great! But skipping between tasks isn’t something I often do in the short-term, and I don’t want to remember relative positions long-term. “Named stacks for named tasks” seems like a useful policy.
  • In the few cases I do want to bounce between tasks, it is usually at integration points. But I can model promises/futures via fractional types; I can “make promises” to one task and “fulfill promises” in another, like creating a bundle of wires at the destination and running them backwards to the source. This would let me do more integration with less navigation.
  • Named stacks make it easy to carry named extensions, such as “AOPadvice” (list of advice for explicit join-points) or “keyring” (list of sealer/unsealer pairs). Further, with named extensions, it should be easier to explicitly forward some useful extensions to applied blocks in a standard way.
  • Named stacks are potentially more stable across code changes, extensions, and reorganization, i.e. no risk of a new anonymous stack breaking relative offsets.

If I make this change, the new environment will likely become: `(s * (h * (ns * lns)))`.

Here `s` is the current stack. `h` is the hand (also a stack). `ns` is just a string, the name of the current stack. And `lns` is a list of (name*stack) pairs. By keeping `ns` in the environment I can more easily switch stacks without explicitly naming the current one. (I can also rename the current stack as a trivial operation.) Most of my blockless data-plumbing code will need to change, but that isn’t a significant problem – much of it will be lighter weight due to the simpler environment.

In addition to navigating stacks, developers could directly load/store/loadCopy/etc. on a stack by name. Named stacks can easily serve a similar role as variables – a place to dump objects that you won’t get back to for a while. It should even be possible to have multiple stacks with a given name in `lns`, using a shadowing model (i.e. always grab first reference to a given stack name). That could be useful for temporary names. (I can also provide a library of functions to statically assert that a stack name exists, does not exist, or is unique.)
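A sketch of this environment and the first-match (shadowing) stack switch, in Haskell terms with field names of my own choosing:

type Name = String

-- the environment (s * (h * (ns * lns)))
data Env a = Env
  { cur   :: [a]            -- s: the current stack
  , hand  :: [a]            -- h: the hand, also a stack
  , nm    :: Name           -- ns: name of the current stack
  , named :: [(Name, [a])]  -- lns: list of (name * stack) pairs
  }

-- switch stacks by name: store the current stack under its name, then
-- grab the first matching stack (first match wins, so duplicates shadow)
switchTo :: Name -> Env a -> Env a
switchTo n (Env s h ns lns) = Env s' h n ((ns, s) : lns')
  where
    (s', lns') = takeFirst n lns
    takeFirst _ [] = ([], [])  -- no such stack: start a fresh, empty one
    takeFirst k ((k', v) : rest)
      | k == k'   = (v, rest)
      | otherwise = let (v', rest') = takeFirst k rest
                    in (v', (k', v) : rest')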

An IDE could still render the named multi-stack environment, though exactly how it organizes the named stacks would be an open question. (There is no ‘obvious’ render like for a list of anonymous stacks.) Given a named stack, it might be useful to highlight all the words that operate on it.
