The modern high-level-language programmer thinks (if he is of the thinking kind) of low-level system architecture as a stubborn enemy, or, at best, a harsh and indifferent force of nature. Anyone who suggests that everyday desktop apps ought to be written directly in a CPU’s native instruction set is viewed as much the same kind of lunatic as someone who brings up swimming as a practical means of crossing the Atlantic on a tourist vacation. Yet, unlike the Atlantic, the ocean of low-level machine ugliness which we perilously cross in our HLL boats is one of our own creation. Un-creating it is far from impossible. It is not even a particularly deep problem.
There are viable alternatives to the present way of building computers. Those in the know sometimes say that today’s dominant architectures are “built to run C.” In order to fully appreciate the truth of this statement, one must put on an archaeologist’s hat and unearth some which were not. There are many interesting lessons we could learn from the ruins of computer architecture’s Age of Exploration. Let’s examine the Scheme-79 chip: the only architecture I know of which was truly elegant inside and out. It eschewed the compromises of its better-known contemporary, the MIT Lisp Machine (and its later incarnations at LMI and Symbolics) – internally microcoded stack machines, whose foundational abstractions differed minimally from those found in today’s CPUs and VMs. The experimental S79 fetched and executed CONS cells directly – and was coupled to a continuously-operating hardware garbage collector. I will not describe the details of this timeless beauty here – the linked paper is eminently readable, and includes enough detail to replicate the project in its entirety. Anyone who truly wishes to understand what we have lost is highly encouraged to study the masterpiece.
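To make concrete what "fetching and executing CONS cells directly" means, here is a minimal software sketch – emphatically not the S79's actual microcode, tag set, or register architecture, and with every name invented for illustration. The point it demonstrates: the "instruction stream" is the list structure itself, so evaluation follows car/cdr pointers rather than a compiler-produced linear encoding.

```python
# Toy sketch of direct cons-cell evaluation (illustrative only; not the
# SCHEME-79 microcode). The program is a graph of cons cells; the evaluator
# "fetches" by chasing car/cdr pointers, with no packed instruction words.

class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def lisp(*items):
    """Build a proper list of cons cells from Python values."""
    out = None
    for item in reversed(items):
        out = Cons(item, out)
    return out

def evaluate(expr, env):
    # Atoms: numbers are self-evaluating, symbols are looked up.
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    # A cons cell *is* the instruction: its car selects the operation and
    # its cdr chains to the operands.
    op = expr.car
    if op == "quote":
        return expr.cdr.car
    if op == "if":
        test, then, alt = expr.cdr.car, expr.cdr.cdr.car, expr.cdr.cdr.cdr.car
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "+":
        return evaluate(expr.cdr.car, env) + evaluate(expr.cdr.cdr.car, env)
    raise ValueError("unknown operator: %r" % op)

# (if (+ 1 2) (quote yes) (quote no)) -- evaluated straight off the cells.
program = lisp("if", lisp("+", 1, 2), lisp("quote", "yes"), lisp("quote", "no"))
print(evaluate(program, {}))   # -> yes
```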
Here is one noteworthy tidbit:
“A more speculative approach for improving the performance of our interpreter is to optimize the use of the stack by exploiting the observation that the stack discipline has regularities which make many of the stack operations redundant. In the caller-saves convention (which is what the SCHEME-79 chip implements) the only reason why a register is pushed onto the stack is to protect its contents from being destroyed by the unpredictable uses of the register during the recursive evaluation of a subexpression. Therefore one source of redundant stack operations is that a register is saved even though the evaluation of the subexpression may not affect the contents of that register. If we could look ahead in time we could determine whether or not the register will retain its contents through the unknown evaluation. This is one standard kind of optimization done by compilers, but even a compiler cannot optimize all cases because the execution path of a program depends in general on the data being processed. However, instead of looking ahead, we can try to make the stack mechanism lazy in that it postpones pushing a register until its contents are about to be destroyed. The key idea is that each register has a state which indicates whether its contents are valuable. If such a valuable register is about to be assigned, it is at that moment pushed. In order to make this system work, each register which may be pushed has its own stack so that we can decouple the stack disciplines for each of the registers. Each register-stack combination can be thought of as having a state which encodes some of the history of previous operations. It is organized as a finite-state automaton which mediates between operation requests and the internal registers and stack. This automaton serves as an on-the-fly peephole optimizer, which recognizes certain patterns of operations within a small window in time and transforms them so as to reduce the actual number of stack operations performed.”
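To make the quoted mechanism concrete, here is a minimal software sketch of the lazy "push-on-demand" discipline – my own toy model, not the paper's finite-state automaton, which must handle more cases (repeated saves without an intervening assignment, for instance). All names are invented for illustration.

```python
# Toy model of a lazily-saved register with its own private stack
# (illustrative only; the real S79 mechanism is a hardware state machine).

class LazyRegister:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.valuable = False   # do the current contents still need protecting?
        self.stack = []         # this register's private stack

    def push(self):
        # A "push" merely records intent; no stack traffic happens yet.
        self.valuable = True

    def assign(self, new_value):
        # Only when an assignment is about to destroy valuable contents
        # does the postponed push actually occur.
        if self.valuable:
            self.stack.append(self.value)
            self.valuable = False
        self.value = new_value

    def pop(self):
        # If the deferred push never became necessary, the saved value is
        # still sitting in the register and the pop costs nothing.
        if self.valuable:
            self.valuable = False
        else:
            self.value = self.stack.pop()
        return self.value


r = LazyRegister("env")
r.assign("outer-environment")

r.push()                         # caller-saves: protect before a recursive evaluation
# ... subexpression happens not to touch the register ...
r.pop()                          # zero stack operations were performed

r.push()                         # protect again
r.assign("callee-environment")   # contents about to be destroyed: the real push happens here
print(r.pop())                   # -> outer-environment, restored from the private stack
```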
What we are looking at is a trivial (in retrospect) method for entirely relieving compilers of the burden of stack discipline: a necessary first step towards relieving programmers of the burden of compilers. A systems programmer or electrical engineer educated in the present Dark Age might ask why we ought to demand relief from CPUs which force machine code to “drive stick” in register allocation and stack discipline. After all, have we not correctly entrusted these tasks to optimizing compilers? Should we not continue even further in this direction? This is precisely the notion I wish to attack. Relegating the task of optimization to a compiler permanently confines us to the dreary and bug-ridden world of static languages – or at the very least, makes liberation from the latter nontrivial. So long as most optimization takes place at compile time, builders of dynamic environments will be forced to choose between hobbled performance and the Byzantine hack of JIT compilation.
The instruction set of a properly designed computer must be isomorphic to a minimal, elegant high-level programming language. This will eliminate the need for a complex compiler, enabling true reflectivity and introspection at every level. Once every bit of code running on the machine is subject to runtime inspection and modification by the operator, the rotting refuse heaps of accidental complexity we are accustomed to dealing with in software development will melt away. Self-modification will take its rightful place as a mainstream programming technique, rather than being confined to malware and Turing Tarpit sideshows. Just imagine what kind of things one could do with a computing system unpolluted by mutually-hostile black box code; one which could be understood in its entirety, the way you understand arithmetic. Today’s CPU designers have mind-boggling swaths of silicon real estate at their disposal. Yet they are shackled by braindead architectural dogmas and the market’s demand for backwards-compatibility with a 1970s traffic light controller. This scenario could have been lifted straight from a 1950s science fiction comedy.
The foundations of the computing systems we use are built of ossified crud, and this is a genuine crime against the human mind. How much effort (of highly ingenious people, at that) is wasted, simply because one cannot press a Halt switch and display/modify the source code of everything currently running (or otherwise present) on a machine? How many creative people – ones who might otherwise bring the future to life – are employed as what amounts to human compilers? Neither programmers nor users are able to purchase a modern computer which behaves sanely – at any price. We have allowed what could once have become the most unbridled creative endeavor known to man, short of pure mathematics, to become a largely janitorial trade; what could have been the greatest amplification of human intellect in all of history – comparable only to the advent of written language – is now confined to imitating and trivially improving on the major technological breakthroughs of the 19th century: the telegraph, telephone, phonograph, and typewriter.
Brokenness and dysfunction of a magnitude largely unknown for centuries in more traditional engineering trades have become the norm in computer programming. Dijkstra believed that this state of affairs is the result of allowing people of less-than-top-notch conventional mathematical ability into the profession. I disagree entirely. Electronics was once a field which demanded mathematical competence on the level of a world-class experimental physicist. Fortunately, a handful of brilliant minds gave us some very effective abstractions for simplifying electrical work, enabling those who had not devoted their lives to the study of physics to conceive ground-breaking electronic inventions. Nothing of the kind has happened in computing. Most of what passes for widely-applicable abstraction in the field serves only to hamstring language expressiveness and thus to straitjacket cube-farm laborers into galley-slave fungibility, rather than to empower the mind by compartmentalizing detail. (OOP is the most obvious example of such treachery.) As for invention, almost everyone has forgotten what genuine creativity in software development even looks like. Witness, for instance, the widespread belief that Linux exemplifies anything original.
I predict that software complexity will eventually cross over the border into the truly unmanageable, and we will begin to see absurdities worthy of Idiocracy. Perhaps that time has already come. I realize that my claim to competence at re-inventing computing from scratch is tenuous at best; yet thus far almost no one else is willing even to contemplate the possibility that we face systemic problems which cannot be solved in any other way – problems which will otherwise continue to worsen.