In the months since grasping the concept of stateless stable models, I regularly find scenarios where I think, “A stateless stable model would be useful here.” Unfortunately, I’ve not gotten around to implementing these models. (I still haven’t. I might find time within a year, though.)
While my interest in game development has waned since my undergrad years, I still have an occasional vision and urge to develop one… and, while considering possible demonstration programs for RDP, a game became the leading candidate – perhaps a metroidvania puzzle-adventure with watercolor-styled art? Well, I’m not actually ready to start anything yet. But it got me thinking about how to construct art assets for the game in an RDP (i.e. highly declarative) manner. And I thought, “A stateless stable model would be useful here.”
In particular, I was imagining a stateless stable model based on a variation of constraint logic. Constraint logic programming is simple and expressive. Like any decent programming model, it is able to represent both specific scenarios and reusable abstractions. Constraint logics flexibly support both parametric abstraction (using parameters to configure or control the result) and contextual abstraction (adapting to the environment, in this case via constraints). Contextual abstraction allows an artist to manipulate properties they cannot control parametrically, i.e. by constraining the shape of the result. Use of soft constraints (e.g. via a weighted logic) can easily model preferences and defaults, and has the added benefit of producing a system robust against error (e.g. as in natural language processing). Constraint logic can support a flexible balance of procedural generation and artist specification.
The cost of power and simplicity is often performance. The worst-case bounds on performance for constraint logic systems tend to be exponential, and it is difficult to control performance with a simple discipline. Fortunately, the actual performance in practice is often quite good – assuming a decent implementation. For example, SMT solvers have proven adept at finding results. And, in an interactive context – like art development – we at least have the opportunity to interrupt and inspect a search if it seems to be taking much longer than expected.
Many artists would love direct access to the logic, and would thrive with it (cf. POV-Ray, context free art). But I imagine many other artists will favor tools to help specify the constraints with a visual interface and gestures – cf. Geometer’s Sketchpad, the Teddy drawing program, recursive drawing. I imagine the ability to mix and match front-ends would be favorable. This article doesn’t make any assumptions about the artist-facing development tools.
If all I had to say about this subject was “A stateless stable model would be useful here”, I wouldn’t have bothered writing this article. The idea to use constraint logic to specify art is by no means original – I’ve seen mentions of the possibility in many different books and papers. Unfortunately, it seems that nobody has successfully taken this idea and turned it into something artists and game developers can broadly and effectively use. (I’d love to be proven wrong, if you can point me to an application!)
When I began considering downstream developers and artists – i.e. modularization, reuse, editing, extraction, refactoring, and composition of art assets – I realized that stateless stable constraint logic is far more promising than I had initially anticipated…
I will get back to that point. First, I’ll provide a quick refresher on the stateless stable models and the reactive variant of constraint logic I am considering:
Refresher on Stateless Stable Models
A stateless stable model is stateful in implementation, but is semantically non-deterministic and stateless.
The idea is that we can make some non-deterministic decisions – e.g. to solve a constraint system, or plan a path – then stubbornly hold onto those decisions as much as we can. Ultimately, changes in constraints or the environment may force us to change our decisions. In those cases, we favor small changes, then stubbornly hold onto the new decisions. By throwing away decisions when they are no longer valid, stateless stable models achieve agility and resilience. By stubbornly avoiding unnecessary change, stateless stable models have a predictable inertia allowing them to readily be anticipated, cached, mirrored, and scaled.
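The “stubborn decision” rule above can be sketched in a few lines. This is a minimal illustration, not any established API – `stable_choose`, its parameters, and the example domain are all my own names:

```python
# Sketch of the "stubborn decision" rule: reuse the previous decision while
# it remains valid; otherwise prefer the valid candidate closest to it.

def stable_choose(candidates, is_valid, previous=None, distance=None):
    """Pick a valid candidate, favoring stability over optimality."""
    if previous is not None and is_valid(previous):
        return previous  # stubbornly hold onto the prior decision
    valid = [c for c in candidates if is_valid(c)]
    if not valid:
        raise ValueError("no valid decision")
    if previous is None or distance is None:
        return valid[0]
    return min(valid, key=lambda c: distance(c, previous))  # smallest change

# Example: pick a value in [0, 10); the constraint later tightens.
decision = stable_choose(range(10), lambda x: x >= 3)            # 3
decision = stable_choose(range(10), lambda x: x >= 3, decision)  # still 3
decision = stable_choose(range(10), lambda x: x >= 5, decision,
                         lambda a, b: abs(a - b))                # nearest fix: 5
```

Note that the final call moves only as far as the tightened constraint forces it to – the inertia described above.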
There are many potential applications of stateless stable models – e.g. in UI layout, planning, service discovery and linking, flexible joints and configurations between rigid components, and metaprogramming. Stateless stable models do not eliminate the need for state models, but clever use of stateless stable components can allow developers to approach much closer to the essential state of the problem domain.
Stateless stable models are highly synergistic with machine learning. The two mix wonderfully because machine learning within a non-deterministic decision space will not impact program correctness. We can leverage machine learning to improve the quality, stability, and performance of our decisions, but can still be assured the decisions are `correct` (even if not optimal). Conversely, the machine learning will itself contribute to stability: successful learning means we will find similar solutions for similar problems, and thus small changes in the constraint set will tend to result in small changes in the decisions. (Also, stateless stable models are very well suited to the best kind of machine learning: unsupervised learning.)
While the semantics are non-deterministic and stateless, the implementation of a stateless stable model may be deterministic and stateful. A deterministic implementation is favorable for various purposes (testing, validation, robust replication) and for various problem domains – such as stateless stable arts! (Similarly, unsupervised machine learning may be incremental, real-time, bounded-space, and deterministic – e.g. if implemented atop a decaying history.)
An interesting property of stateless stable systems is that they may be explicitly `reset` much like any stateful system. This is achieved by applying a strong constraint to forcibly shape the result, then releasing (or gradually weakening) that constraint. Stability forbids the model from simply snapping back into its original shape unless there are active constraints that would restore the shape. Use of temporary constraints to shape state was a common idiom in the experimental constraint-imperative language called Kaleidoscope.
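The `reset` idiom can be sketched concretely: force the solution into a new shape with a strong temporary constraint, then release it, and stability keeps the new shape. This is an illustrative toy (the `solve` helper and its domain are my own invention), not how Kaleidoscope worked:

```python
# Reset via temporary constraint: a stable solver that prefers its previous
# answer, so releasing a forcing constraint does not snap the state back.

def solve(constraints, candidates, previous=None):
    """Return a candidate satisfying all constraints, preferring `previous`."""
    valid = [c for c in candidates if all(p(c) for p in constraints)]
    if previous in valid:
        return previous  # inertia: keep the old solution while it is valid
    return valid[0]

base = [lambda x: x % 2 == 0]           # the model's permanent constraints
state = solve(base, range(10))          # some even number (here, 0)
reset = base + [lambda x: x == 8]       # strong temporary constraint
state = solve(reset, range(10), state)  # forced to 8
state = solve(base, range(10), state)   # constraint released: stays at 8
```

The last line is the interesting one: with the forcing constraint gone, 0 would again be acceptable, but stability keeps the model at 8.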
Stable Constraint Logics
In the prior article, I described a constraint logic model tuned for open reactive systems. It has the following properties:
- The model, at any given instant, is described as a finite set of constraints and allowances.
- Any solution must choose from the allowances and meet all the constraints.
- Constraints and allowances both apply to predicate symbols.
- The only queries on the model are simple predicate symbols.
- The answer to a query is one value for which the predicate currently holds true.
- The RHS of each constraint or allowance is a conjunction `∧` of constraints.
- Adding new options (`∨`) is achieved only by allowances.
- By default, no solution is allowed, and there are also no constraints.
- The constraints and allowances may be weighted, e.g. with a cost and quality model.
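The properties above can be rendered as a toy in a few lines. The data layout here is my own illustration (and it omits weights and conjunctive right-hand sides): allowances are the only source of options, constraints prune them, and by default nothing is allowed.

```python
# Toy constraint/allowance model: a query on a simple predicate symbol
# returns one value that is allowed and meets every constraint.

allowances = {"color": ["red", "green", "blue"]}   # options for a predicate
constraints = {"color": [lambda c: c != "red"]}    # must all hold

def query(symbol):
    """Answer a query on a predicate symbol: one permitted value, or None."""
    for value in allowances.get(symbol, []):       # default: nothing allowed
        if all(p(value) for p in constraints.get(symbol, [])):
            return value
    return None

answer = query("color")   # "green": allowed, and not excluded by constraints
```

A symbol with no allowances (e.g. `query("shape")`) has no solution, matching the “by default, no solution is allowed” property.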
Queries in this model occur on only simple predicate symbols, but arbitrary queries can be expressed by introducing allowances and constraints on a unique predicate symbol. The limitation to a single symbol in the query ensures the query remains open to extension, and simplifies reasoning about consistency of query results when dealing with opaque types.
A constraint-logic system must support an ontology of types against which constraints may be expressed. Some of this ontology could be built-in primitives, while much might be library-provided (i.e. expressed in terms of the constraint model). Typical primitive examples include integers, tuples, and lists. For visual arts, I would assume an ontology that includes color, material, lighting, textures, splines, surfaces, geometries, and so on. An interesting, metacircular possibility is to support a constraint-logic system as the output of a constraint-logic system – i.e. another set of allowances and constraints that can be extended, constrained, and queried.
The set of constraints and allowances may change over time, and this ability to change introduces stability concerns. Stability may be observed both externally and internally:
- external stability is measured in terms of how long a particular query result holds, and how much it changes
- internal stability is measured in terms of how long a search tactic holds, and how much it changes
Both stability metrics are useful. External stability is the desirable feature for the downstream clients of the stateless stable model. Internal stability is a lot easier to work with, e.g. for machine learning. These two metrics will often coincide, but it is possible to develop constraint logic systems where they diverge pathologically. For best results it will be necessary to guarantee that internal and external stability are tightly coupled, i.e. that there exists an asymptotic relationship between change in the search and change in the query result.
One trivial way to guarantee stability coupling is to output a simplified representation of the search as part of the query result. Naturally, then, a change in the search will change the query result. This works very well for many artistic purposes, since the `searches` often correspond to spatial aspects of the generated results.
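As a hedged sketch of this coupling trick (names and shapes are my own): the query answer carries a digest of the branches the search visited, so any change of search tactic is, by construction, an externally visible change.

```python
# Couple internal and external stability by including a digest of the
# search path in the query result.

def search(options, predicate):
    trace = []                           # branches visited, in order
    for value in options:
        trace.append(value)
        if predicate(value):
            return value, tuple(trace)   # result carries the search digest
    return None, tuple(trace)

result, digest = search([1, 2, 3, 4], lambda x: x > 2)
# result == 3; digest == (1, 2, 3) – a different tactic yields a different digest
```

For spatial art, the analogue of `digest` would be the spatial structure of the generated result itself.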
Another technique, which I find vastly more interesting, is to model improving values. For a spatial model, improving values might be described in terms of levels-of-detail, bounding volumes in a scene-graph, possibly even controlling contrasts and lighting (i.e. stuff you might otherwise have seen from far away). In a sense, the improving values technique corresponds to a gradual type system modeled in the constraint system: you’re representing high-level promises, enabling the constraint model to validate a partial solution, then building other components in the same level-of-detail against those promises so that the search won’t suddenly need to back off. These might be `soft` types and promises – more like guidelines, so long as the artist is aware of toeing them and thus able to make informed decisions.
For now, I shall leave such guarantees to developer discipline – preferably, discipline augmented by a type model or EDSL per problem domain.
With external stability coupled to internal stability, I can focus on internal stability – i.e. reusing as much of a prior search structure as feasible. In this role, I would like to try machine learning: to discover search tactics that consistently lead to high-quality results and perform well (succeed fast, fail fast). I am considering use of a decaying history to track recent searches, with the decay model progressively recording and weighting search patterns that are reused.
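The decaying-history idea can be sketched as follows. The class name, decay factor, and scoring rule are all illustrative assumptions: every recorded reuse reinforces a tactic, while unused tactics exponentially fade.

```python
# Decaying history of search tactics: recently reused tactics stay favored,
# stale ones fade away in bounded space.

class TacticHistory:
    def __init__(self, decay=0.9):
        self.decay = decay
        self.weights = {}                # tactic -> decayed reuse weight

    def record(self, tactic):
        # decay everything, then reinforce the tactic just used
        for t in self.weights:
            self.weights[t] *= self.decay
        self.weights[tactic] = self.weights.get(tactic, 0.0) + 1.0

    def ranked(self):
        """Tactics to try first, most-reinforced first."""
        return sorted(self.weights, key=self.weights.get, reverse=True)

h = TacticHistory()
for t in ["depth-first", "greedy", "greedy", "greedy"]:
    h.record(t)
# "greedy" now outranks "depth-first"
```

Because the update is incremental and deterministic, this kind of learner fits the bounded-space, replayable implementation discussed earlier.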
Stable Constraint Logic Art Assets
The artist ultimately observes the solution to a query. One option to `save` an art asset is just to save this concrete solution. However, that option would severely hinder flexible use of that art asset for downstream developers. Potentially, it may even hinder use of the art asset later in the tool-chain for the same artist. I would discourage this technique.
Instead, an `art asset` in the stateless stable constraint logic model will consist of:
- a set of constraints and allowances
- one query (just a symbol, perhaps `main`)
- the record the artist was using for stability
- a description of external dependencies, e.g. ontologies, libraries, assets.
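The four ingredients above can be sketched as a plain record. The field names are my own; the article specifies only the ingredients, not a layout:

```python
# A stateless stable constraint logic art asset, as a plain record.

from dataclasses import dataclass, field

@dataclass
class ArtAsset:
    constraints: list            # constraints and allowances
    query: str = "main"          # the single query symbol
    stability_record: dict = field(default_factory=dict)   # for deterministic replay
    dependencies: list = field(default_factory=list)       # ontologies, libraries, assets

asset = ArtAsset(constraints=["height < 10"],
                 dependencies=["geometry-ontology"])
```

Saving this record rather than a concrete solution is what keeps the asset tweakable downstream.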
The record must support deterministic reconstruction of the saved asset – i.e. we can restore the exact solution that the artist saved. This is important: artists are rightfully proud of their work!
But art assets expressed as stateless constraint-logic and stability models are not frozen. They retain the same robust flexibility that the artist had when developing. Constraints and allowances can be tweaked, extended, inspected, and refactored. Multiple instances of the asset may be instantiated using parameters and constraints. This makes the art much more usable by downstream developers, who can flexibly use parameters and constraints to fit the asset into a new scenario.
Just one structure can be used for ALL kinds of art assets: landscapes, character models, objects, dialogues, quests, plots, world events, AIs, puzzles, animations, generated music or sound effects. Each kind of art will need dedicated ontology and data types, and it will take time for clever developers to address each domain adequately, including integration (e.g. of animation with character model, or plot with dialog). I suggest Inform 7 as a source for inspiration for some non-visual elements.
The dependency system enables developers to build ontologies or libraries once and reuse them. A linked resource will extend the set of predicates available to the artist. (Some mechanism should be used to avoid namespace pollution and conflict, e.g. listing imported symbols or locally qualified imports.) The dependency system can also address performance concerns: a tool can be designed to recognize certain dependencies (for vector or matrix, pointclouds and polygon meshes, etc.) and provide down-to-the-metal specializations for the more critical operations. (One could potentially do flexible metaprogramming this way, e.g. use constraint logic to build an Agda function, which can then be compiled and applied through the specializations.)
Of course, not every tool will be adequate for developing every asset. Much depends on how the results should be rendered, how the constraints should be manipulated, and so on. But I expect it might be worth developing one big ZUI that can use plugins to interact with arbitrary art asset models.
Interesting Interaction with Machine Learning
During an incremental refinement process, artists would tend to adjust constraints only when they wish to change the immediately observed result. Therefore, every change in constraints would tend to alter the search and solution, and will potentially invalidate some search strategies. However, every part of the search that doesn’t change is tacitly approved. And these approvals will add up to highly favored search tactics. When these tactics are applied to an entirely new art asset, they will apply some of this implicit knowledge about the artist’s style. The quality of stylistic predictions should improve over many art assets, so some artists may prefer to preserve the record of tactics from the development of one asset to the next. Later, one might choose to mix and match constraint-sets with art assets and see “how would another artist have done this”. Of course, the answer will be imperfect, but it would at least be entertaining and possibly quite useful – e.g. when attempting to reuse third-party art assets.
One can also `train` the machine learning, with training constraints – i.e. a bit like training wheels for the machine learning model. Basically, you apply a temporary constraint to control which searches succeed, but you can relax those constraints (giving the system some extra `artistic license`) after the model has begun favoring them.
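A hedged sketch of these “training wheels” (the `pick` helper, scores, and tactic names are hypothetical): a hard training constraint filters which searches succeed while preference accumulates, and the constraint can then be relaxed.

```python
# Training constraints: restrict the search while preferences build up,
# then relax the restriction once the favored tactics dominate.

def pick(tactics, scores, training=None):
    """Choose the best-scoring tactic, optionally restricted by training."""
    pool = [t for t in tactics if training is None or training(t)]
    return max(pool, key=lambda t: scores.get(t, 0.0))

scores = {"loose": 0.0, "tight": 0.0}
train = lambda t: t == "tight"              # temporary training constraint
for _ in range(5):
    choice = pick(["loose", "tight"], scores, train)
    scores[choice] += 1.0                   # reinforce what training allowed
choice = pick(["loose", "tight"], scores)   # relaxed: "tight" is now favored
```

After relaxation the model has `artistic license` to pick anything, but its accumulated scores keep it on the trained path.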
Effective Support for Cooperative Work
The variation of constraint logic upon which this model is based was designed for open, reactive, multi-agent systems – i.e. where any agent can introduce constraints or allowances. This can be leveraged by artists, too, allowing multiple artists to operate on the same model concurrently. (And one might also cooperate with external software agents.) This would be quite useful for operating on a large world. (And I’m a bit curious whether I could apply it to live music generation…)
Heterogeneous Data Models
The world won’t settle on just one ontology per problem domain. There will inevitably be many ontologies, each subtly incompatible. However, if the languages are similar enough, one can create an intermediate art asset to translate another art asset, linking it as a dependency. (This can serve as a convenient alternative to codecs.) An advantage of soft constraint logic is that it can help smooth over the subtle incompatibilities, rather than tripping on them.
Stateless Stable Arts for Game Development
The space explored in a typical video game is much larger than any mural or landscape, and it takes correspondingly more effort to fill that space with interesting detail. This is especially true in 3D games, and is exacerbated further by an interest in “open world” games (which require a world much larger than the space explored by a typical player in one run). World size is, of course, about much more than space and visual clutter. The relevant art assets include quests, dialogs, plots, puzzles, music, sound-effects, animations, AIs, and so on.
Despite state-of-the-art tools and artist teams that outnumber core developers by a factor of ten to twenty, game worlds are often left with large uninteresting areas. Artists often achieve only a small fraction of their vision. Game producers cannot readily afford or coordinate teams much larger than they have. I imagine that, if artist productivity was increased by an order of magnitude, artist teams in game development would not shrink significantly. Artists would simply build larger, richer game worlds.
We need vastly more productive artists, which means better tools and abstractions. There are at least three dimensions in which we can amplify productivity:
- more artistic changes with fewer actions
- precise artistic effects with fewer actions
- effective use of third party art assets
I believe that stable stateless constraint logic models can greatly contribute to the first dimension, weakly contribute to the second, and at least moderately contribute to the third. Between these elements, I would not be surprised to achieve an order of magnitude productivity improvement for artists.
That first dimension – more change with fewer actions – corresponds well to procedural generation, a phrase that describes algorithmic generation of content (not necessarily from a procedural language). This is not a new technique. Procedural generation has gained a reputation for generating repetitive, uninteresting content. It is still used, of course, because repetitive uninteresting content is often preferable to a complete absence of content, especially for the background elements – i.e. it’s a foundation for an artist. However, procedural generation has the potential to be utilized far more effectively than it has been historically. Consider the following weaknesses in how procedural generation is used today:
- Procedural generation is poorly integrated in the artist tool-chains. Each tool uses its own data models, and only a few tools provide procedural generation in the first place. Those tools that do support procedural generation each use ad-hoc, problem-specialized models and languages for it. Art assets created by procedural generation cannot easily be composed, integrated, or further manipulated in that form. (I also suspect a majority of procedural generation tools used in game development are developed for just that game, perhaps excepting some tools for flora and fauna. It seems difficult for artists to gain reusable skills as they might with POV-Ray.)
- Procedural generation utilities are rarely incremental, interactive, and supervised. It is difficult for artists to tweak, tune, extend, and adjust procedurally generated objects on-the-fly. While it is possible to tune some parameters or tweak some procedures and regenerate the whole system, the expense ensures it does not happen often, and it can be difficult to review the changes. Instead of a flexible, gradual progression between unsupervised procedural generation and hand-built manipulations, artists are forced to abruptly shift modes.
- Most approaches to procedural generation rely heavily on pseudo-random generators, which (by nature) lack informational content. Human interest and entertainment relies heavily on surprise – i.e. unanticipated information or detail. The early reliance on random generation might be explained by historical information-storage bottlenecks. But information is no longer such a bottleneck, and there is no reason we cannot generate from gigabytes of information rather than a 32-bit PRNG. Hypothesis: We can improve the quality (entertainment value) of unsupervised procedurally generated content by using an interesting non-random data source – such as Wikipedia. Consider use of text and hyperlink structure to direct generation of content – cities, politics, landscapes, flora and fauna, characters and relationships, puzzles, events, and so on.
The use of data instead of pseudo-random structure is an orthogonal concern to the tool I’m envisioning. However, stable constraint logic art assets will effectively address the first two concerns. Constraint logic is generic enough to support all stages in the toolchain, and the dependency model can address performance concerns. The notion of building art “by hand” is easily expressed by using fully defined parameters and precise constraints. By weakening constraints and leaving parameters unspecified, artists can take advantage of whatever level of procedural generation they desire. The stable model keeps it incremental and near real-time. A related advantage of real-time procedural generation is that one can observe multiple possibilities in parallel, i.e. due to different parameters, and pick between them – making efficient use of the artist’s time.
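The gradient between hand-built and generated content might be sketched like this (the `fill` helper, its fields, and its deterministic stand-in generator are all hypothetical): parameters the artist pins down are used as-is, and unspecified ones are filled deterministically.

```python
# Artist-specified parameters are kept; unspecified parameters are filled
# by a deterministic generator, so regeneration is stable and reviewable.

import hashlib

def fill(spec, fields, seed="asset-1"):
    """Complete `spec`: keep artist-given values, generate the rest."""
    out = dict(spec)
    for name in fields:
        if name not in out:
            digest = hashlib.sha256(f"{seed}:{name}".encode()).digest()
            out[name] = digest[0] / 255.0   # deterministic stand-in value
    return out

tree = fill({"height": 12.0}, ["height", "lean", "foliage"])
# "height" stays 12.0 (hand-built); "lean" and "foliage" are generated
```

Pinning every field is fully hand-built art; pinning none is fully unsupervised generation; everything in between is the gradual progression argued for above.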
The second dimension – precise effects with fewer actions – usually corresponds to specialized tools. Specialized brushes, filters, lenses. Specialized stencils, compasses. Use of spirographs. Etc. This is an area in which state-of-the-art artist toolsets excel – the ever-expanding toolbox, ideally with recomposable tools (scripts, etc.) so the artists can expand it even further. Constraint logics are certainly capable of expressing and composing ad-hoc toolboxes (in the form of libraries), so they can at least keep up in that regard. If stable constraint logic contributes to precision beyond state-of-the-art, I think it will be in two aspects:
- constraint logic makes it trivial to tightly couple parameters of different components, e.g. for sticky edges, or relative sizes
- machine learning of artistic styles might save a few gestures or edits, especially for controlling the middle
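The first of these points – tightly coupled parameters – can be shown in miniature. This is a purely illustrative sketch (`enforce_sticky` and the panel records are my own names), standing in for what a real constraint propagator would maintain automatically:

```python
# "Sticky edges" as a coupling constraint: one panel's x position is pinned
# to the right edge of another, so moving one repositions the other.

def enforce_sticky(left, right):
    """Keep right's x pinned to the right edge of `left`."""
    right = dict(right)
    right["x"] = left["x"] + left["width"]   # the coupling constraint
    return right

left = {"x": 0.0, "width": 4.0}
right = {"x": 99.0, "width": 2.0}
right = enforce_sticky(left, right)          # right["x"] == 4.0
left["x"] = 10.0                             # artist moves the left panel
right = enforce_sticky(left, right)          # right follows: x == 14.0
```

In a real constraint system the artist states the relationship once and every edit propagates, rather than re-running the coupling by hand as here.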
The third dimension for artistic productivity is effective use of third party art assets, which can enable the number of art assets to scale with the number of artists in the world rather than with the number of artists on the team. One might go to Renderosity for 3D models, or to SoundCloud for some sound effects or music. It does seem that certain art assets are underrepresented (where do I go if I want a city full of personalities, relationships, and dialogs?) but that just reflects the directions the game industry has pursued (eye candy, not AI candy). Stable constraint logic art assets can potentially help with reuse in many ways:
- libraries of reusable art will generally be more programmatic – i.e. allowing developers to more easily parameterize and tweak the third party art assets
- potential ability to tune machine-learning aspects for style
- art assets have a uniform structure for all domains, based in the constraint logic model.
- heterogeneous data models within a domain can be addressed with an intermediate or wrapper, rather than updating the program that uses the art
- more artistic domains can be addressed and reused, potentially including “broad world” integrated concerns like weather, traffic, economy, politics, ecology
Stable constraint logic art assets could, of course, be used in domains other than video game development. I can easily imagine typesetting via constraint logic, for example. And there are even applications outside of art – e.g. in object design and CAD. But the crisis I see right now is in game development.