I’ve been thinking quite a bit recently about the relationship between UI and programming. A programming language is certainly a user interface – but usually a poor one, from a human-factors standpoint. And a UI can be considered a PL – a way for a human to communicate with a computer… but usually a poor one with respect to modularity, composition, safety, extensibility, and other properties.
What I would like to see is a unification of the two. To me, this means: We should be able to take all the adjectives describing a great PL, and apply them to our UI. We should be able to take the adjectives describing a great UI, and apply them to our PL. Manipulating the UI is an act of programming. Programming is an act of UI manipulation.
There are partial unifications already – the spreadsheet is a PL that serves as a UI, and the command shell a UI that serves as a PL – but neither of those is general purpose.
I need something that works for controlling robots, developing video games, reinventing browsers and web services, interacting with a 3D printer, implementing an operating system or a device driver. I want support for open systems. Different problem domains can have different domain-specific languages – which should correspond to domain-specific UIs. However, these DSLs must integrate smoothly, sharing a common paradigm for basic communication and data manipulation. Can we do this?
In August, I developed a simple but powerful idea: a user-model – hands, navigation – represented within the static definition of the program. This, coupled with the notion of ‘rendering’ the environment type, has me thinking UI/PL unification is feasible. The user-model is not a new idea. Nor, even, is the idea of integrating the user-model with the program; it has been done before, e.g. in a MOO or in ToonTalk.
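To make that concrete, here is a minimal sketch in Haskell – the names (`Env`, `Gesture`, `grab`, and so on) are hypothetical illustrations of mine, not Awelon definitions. The point is only that the hand and navigation focus are ordinary values in the program’s environment, so gestures are ordinary functions:

```haskell
-- Hypothetical types: the user-model (a hand plus a navigation focus)
-- is plain data inside the program's environment, so every user
-- gesture is just a function from environment to environment.

data Object = Leaf String | Pair Object Object
  deriving Show

data Env = Env
  { hand      :: [Object]  -- items the user is currently holding
  , focus     :: [Int]     -- path to where the user is 'standing'
  , workspace :: [Object]  -- the program/document being manipulated
  } deriving Show

type Gesture = Env -> Env  -- navigation, grab, copy: all ordinary code

-- pick up the object in front of the user
grab :: Gesture
grab e = case workspace e of
  (o:os) -> e { hand = o : hand e, workspace = os }
  []     -> e

-- put the most recently held object back down
place :: Gesture
place e = case hand e of
  (o:os) -> e { hand = os, workspace = o : workspace e }
  []     -> e

-- duplicate the held object: a semantic copy, not a pixel-level one
copy :: Gesture
copy e = case hand e of
  (o:os) -> e { hand = o : o : os }
  []     -> e
```

Because the user-model is plain data in the static program, a compiler can see it, optimize through it, and type-check gestures like any other code.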
The only ‘new’ idea is taking this seriously – i.e. with an intention to support open systems, down-to-the-metal static optimizations, a basis for UIs and mashups, and general-purpose programming. I think this different mindset would have a significant impact on the design.
Most serious PLs treat the programmer as a non-entity – ‘above’ the program, with pretensions of omniscience and omnipotence. Even graphical PLs do this. As a consequence, there is no direct interaction with higher-level types, software components, dataflows, or composition. There is no semantic ‘clipboard’ with which a programmer can wire variables together. Instead, interactions are expressed indirectly: we look under the hood, investigate how components work and from where they gather their data; perhaps we replicate some logic or namespace management. The barrier between syntactic and semantic manipulation is painful, but it is not a crippling issue for ‘dead’ programming languages.

UIs, however, are live. They have live data and live values, and a great deal of logic underlies the acquisition and computation of those values. They have live buttons and sliders, which may correspond to capabilities controlling a robot. In many senses, peeking ‘under the hood’ of a UI corresponds to reflection in a PL – an alarming security risk and abstraction violation, not something we should need to depend upon. Instead, we should be able to treat time-varying data, buttons, and sliders as semantic objects – signals, parameters, functions, capabilities.
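As a sketch of that last point – with hypothetical `Signal`, `Cap`, and `Slider` types of my own standing in for whatever a real system would provide – a slider can be exposed as a typed signal plus a capability, so connecting components is composition rather than reflection:

```haskell
import Data.IORef

newtype Signal a = Signal { current :: IO a }        -- a live, time-varying value
newtype Cap a    = Cap    { invoke  :: a -> IO () }  -- a capability, e.g. robot control

-- a slider exposed as semantic objects rather than pixels
data Slider = Slider
  { sliderValue :: Signal Double  -- read the live value
  , sliderSet   :: Cap Double     -- drive it programmatically
  }

mkSlider :: Double -> IO Slider
mkSlider v0 = do
  r <- newIORef v0
  pure (Slider (Signal (readIORef r)) (Cap (writeIORef r)))

-- wiring two components together is composition, not reflection
-- (a real system would keep the connection live; one read suffices here)
wire :: Signal a -> Cap a -> IO ()
wire sig cap = current sig >>= invoke cap

main :: IO ()
main = do
  s <- mkSlider 0.5
  invoke (sliderSet s) 0.8          -- the user drags the slider
  wire (sliderValue s) (Cap print)  -- another component consumes the value
```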
Users can navigate, grab, copy, create, integrate. Users can construct tools that do so at higher levels of abstraction. The syntax becomes implicit in the gestures and manipulations – though each gesture may correspond to a word in a stream of a concatenative language, as sketched below.
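Continuing the hypothetical `Env`/`Gesture` sketch from earlier: a session of manipulations is literally a concatenative program, a stream of words applied left to right, and a user-built tool is just a named composition of gestures:

```haskell
-- builds on the Env/Gesture definitions sketched above

-- apply a stream of words, left to right
run :: [Gesture] -> Gesture
run ws env = foldl (\e g -> g e) env ws

-- what the user 'did', recorded as words
session :: [Gesture]
session = [grab, copy, place, place]

-- a higher-level tool the user constructs from primitive gestures
duplicateInPlace :: Gesture
duplicateInPlace = run [grab, copy, place, place]
```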
To unify UI and PL, we need our programmers to be part of our PL, just as our users are part of our UI. We simply need to do this while taking the language and UI design seriously.