You have made your bedrock, now lie in it.

As a child, I was quite fond of old-fashioned Lego bricks.  One very endearing but rarely discussed property of such bricks is their durability, bordering on the indestructible.  Almost any abuse inflicted on a Lego structure will, at worst, leave you with a pile of bricks entirely like the one you started with.  Even the most baroque Lego edifice will always break into Legos, and not jagged shards of plastic.  In my childhood, this felt like something approaching an elementary physical law.  As with many real physical laws, it applies in a restricted (though usefully broad) domain: a Lego castle dropped from the Empire State Building or crushed in a garbage compactor will leave behind many pieces still recognizable as Legos, and almost certainly a few which are not.  However, in my childhood I did not have ready access to garbage compactors or the roofs of tall buildings, and so the Lego brick seemed like an impregnable elementary particle: a bedrock abstraction.

A bedrock abstraction level is found in every man-made system.  No recoverable failure, no matter how catastrophic, will ever demand intelligent intervention below it.  Repair at those depths consists purely of physical replacement. [1]  No car crash, however brutal, will ever produce piles of loose protons and neutrons.  When a Unix binary crashes, it might leave behind a core dump but never a "logic gate dump" and certainly not a "transistor dump."  Logic gates and transistors lie well below the bedrock abstraction level of any ordinary computer. [2]

The computers we now use are descended from 1980s children's toys.  Their level of bedrock abstraction is an exceedingly low one.  This would be acceptable in a micro with 64K of RAM, but when scaled up to present proportions it is a nightmare of multi-gigabyte bloat and decay. Witness, for instance, the fabled un-debuggability of multi-threaded programs on today's architectures.  It stems purely from the fact that truly atomic operations can only exist at the bedrock level, and to fully comprehend what is going on in the entire machine requires wading through a vast sea of binary soup, boiled and stirred continuously by an asynchronous world.  The futility of this task is why programmers aren't usually given even a sporting chance - observe the lack of a hardware debugger in any modern computer.
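
To make the point concrete, here is a minimal sketch in C with POSIX threads (the file name, variable names, and iteration count are arbitrary, chosen purely for illustration): two threads increment a shared counter.  The plain C increment is lowered by the compiler into a separate load, add, and store, so the threads silently trample each other's updates; only by reaching down toward the bedrock - here via the GCC/Clang __atomic builtins, which emit a hardware-level atomic instruction - does the count become dependable.

    /* Illustrative sketch: a plain increment is not atomic above the bedrock.
     * Build with something like: cc -O2 -pthread race.c                       */
    #include <pthread.h>
    #include <stdio.h>

    static long racy_count = 0;    /* updated with an ordinary C increment       */
    static long locked_count = 0;  /* updated with a hardware atomic instruction */

    static void *worker(void *unused)
    {
        (void)unused;
        for (int i = 0; i < 1000000; i++) {
            racy_count++;          /* separate load, add, store: interleavable   */
            __atomic_add_fetch(&locked_count, 1, __ATOMIC_SEQ_CST);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* locked_count is exactly 2000000; racy_count almost always falls short. */
        printf("racy = %ld, atomic = %ld\n", racy_count, locked_count);
        return 0;
    }

The "fix" exists only because the hardware happens to expose a locked add; everything above that level is a promise, not a guarantee.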

Once in a while, barring a whiff of the proverbial magic blue smoke, a hardware-level malfunction makes itself known through a fit of non-deterministic behavior in that rare bird, a piece of known-good software. Yet, to a first approximation, solid state hardware will spend years doing exactly what it says on the box, until this hour comes.  And when it has come, you can swap out the dead parts and be guaranteed correctness again.

Software, on the other hand, routinely ships broken.  And software cannot be meaningfully repaired or replaced, only re-designed.  Yet witness the cries of righteous outrage among software vendors whenever hardware deviates ever so slightly from its advertised function.  What software company could honestly lay claim to even the lowliest silicon pusher's levels of bug-free operation and architectural soundness?  I cannot help but picture a stereotypically slick and unrepentant con artist, frothing with rage after having been duped into purchasing a costly full page ad in what turned out to be a Skeptic society's periodical.  It appears that Microsoft (and now Apple) is entitled to comfortable and stable bedrock abstractions, while you and I are not.

I sometimes find myself wondering if the invention of the high-level compiler was a fundamental and grave (if perhaps inevitable) mistake, not unlike, say, leaded gasoline.  No one seems to be talking about the down-side of the compiler as a technology - and there certainly is one.  The development of clever compilers has allowed machine architectures to remain braindead.  In fact, every generation of improvement in compiler technology has resulted in increasingly more braindead architectures, with bedrock abstraction levels ever less suited to human habitation.

Nevertheless, high-level architectures were still developed, though most were strangled at birth by political forces.  Think of where we might be now, if the complexity of programming a computer had been ruthlessly pruned at the source, rather than papered over with clever hacks.  Dare to imagine a proper computer - one having an instruction set isomorphic to a modern high-level programming language.  Such a machine would never dump the programmer (or user) by surprise into a sea of writhing guts the way today's broken technologies do.  Dare to imagine a computer where your ideas do not have to turn into incomprehensible soup before they can be set in motion; where there is, in fact, no soup of any kind present in the system at all.  It would be a joy to behold.

I posit that a truly comprehensible programming environment - one forever and by design devoid of dark corners and mysterious, voodoo-encouraging subtle malfunctions - must obey this rule:  the programmer is expected to inhabit the bedrock abstraction level. And thus, the latter must be habitable.

Notes

[1] Note that the converse does not hold: human ingenuity has gifted us with systems which break far above the bedrock abstraction level, yet are still not meaningfully repairable.  Witness any piece of electronics with buggy firmware, where the latter is non-flashable or the manufacturer is defunct (or unwilling to patch.)

[2] A system implemented on an FPGA, or built from the once-ubiquitous bit-slice logic chips, would have a much lower level of bedrock abstraction.  However, it is worth noting that a modern FPGA architecture is somewhat less fine-grained than the "sea of gates" imagined by the novice hardware description language programmer.  An FPGA aficionado is still living in another's universe, and it isn't necessarily one where each and every constraint imposed on you makes sense - certainly if you lack the closely guarded architectural manuals.

This entry was written by Stanislav, posted on Monday December 07 2009, filed under FPGA, Hardware, Hot Air, NonLoper, Philosophy, SoftwareSucks.

16 Responses to “You have made your bedrock, now lie in it.”

  • Alex says:

    I'm a fan of your blog.

    What do you think about designing a proper high-level computer, then implementing it in software?

  • Oroth says:

    It's an interesting proposition, though one I'm not sure about. By raising the level of the bedrock, it seems you are just passing the complexity on to the bedrock designer - typically the hardware maker. And since hardware is usually considered the hardest part to design, debug, and maintain (once you've published hardware, any bugs are there for good), it seems desirable to push the complexity up as high as possible. Typically this means the compiler, but I'd argue one of the virtues of Lisp is that it manages to push the complexity even higher, by having a simple core and building up a complex language in the library through macros.

    While I like the idea of raising the bedrock and so lowering the complexity (or at least passing it on), I'm also attached to the rather opposing ideal of maximising accessibility, of maximising the amount of control you have over the hierarchy of computation - from the lowest level to the top. Probably the Forth language/system is what best typifies this paradigm - allowing extremely low-level access, yet capable of building up very sophisticated programming concepts. Of course it's hard to reconcile this with the desire for a system that doesn't drop you out at the machine-code level when something goes wrong. Then again, even if you raise up the bedrock with a sophisticated VM, somebody is going to have to dip into C or whatever when it breaks.

    Essentially, I'm not sure that raising the bedrock is the best way to lower the total complexity in the system. I envision some system where we start with the lowest of levels - the simplest of axioms - and progressively build up complexity. Though I'd agree all current systems are horrendously broken 🙂

  • Bassett Disaster says:

    Programmable computers have a nice property: it is possible to create another abstraction layer on top of an existing one, and to make that new abstraction "bedrock".  It is not even all that difficult; all you need is a reasonable theory of your existing abstraction, a reasonable theory of your new abstraction, and a proof (under these two theories) that once execution has entered the new abstraction, it will never escape (that is, never return to the previous abstraction.)

    If the overhead of implementing the one abstraction in the other is acceptable, there seems to be no reason to commit this new "bedrock" abstraction to a much more expensive and difficult-to-change medium. It doesn't become any more "bedrock" that way.

    In fact, if to start you have a medium which is expensive and difficult to change, it seems reasonable to form only the most efficient abstractions out of it. This gives you an engineering option: you can either maximize efficiency by implementing directly in that efficient abstraction, or you can create a new abstraction on top of it which maximizes some other goal at the cost of efficiency. Because overhead is always positive, it is not possible to have this choice if your starting abstraction is designed for something other than efficiency.
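
    Schematically (a rough sketch only, in Lean-style notation with made-up names - this is not drawn from any particular system), the "never escape" obligation is just a closure property of the existing abstraction's step function:

        -- Sketch: `State` is the existing abstraction's state space, `step` its
        -- transition function, `inNew` the predicate marking states that belong
        -- to the new abstraction.
        def never_escapes {State : Type}
            (inNew : State → Prop) (step : State → State) : Prop :=
          ∀ s : State, inNew s → inNew (step s)

    If the host abstraction's step function preserves membership in the new abstraction, execution can never leak back out.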

    • Stanislav says:

      Dear Bassett Disaster,

      > ...all you need is a reasonable theory of your existing abstraction, a reasonable theory of your new abstraction, and a proof (under these two theories) that once execution has entered the new abstraction, it will never escape...

      My understanding is that such proofs are impossible in the general case. If you know of an example, please post a link here.

      In fact, I have yet to see a simultaneously useful and rigorous proof relating to any important aspect of a computer system (as opposed to a particular algorithm.)

      > ...it seems reasonable to form only the most efficient abstractions...

      I believe that efficiency in computer design is a false god. Give me an "inefficient" but responsive and "Seven-Laws Compliant" computer any day of the week in place of the monstrous PC, that race car which regards steering wheel input as mere advice and so eagerly speeds off cliffs and into embankments.

      Yours,
      -Stanislav

      • Vilhelm S says:

        > If you know of an example, please post a link here.

        You should look at CompCert (http://compcert.inria.fr/), Xavier Leroy et al.'s certified C compiler.
        It is structured around a stack of 10 or so languages/abstract machines, where the top one is (a simplified subset of) C, and the bottom one is (a subset of) PowerPC assembly.

        Between each adjacent pair of layers L1 and L2, there is a translation function f from machine-states in L1 to states in L2, and a (machine-checked) proof that an L1 state s takes a single step to state s' if and only if f(s) takes a nonzero number of steps to f(s').

        (As the proof is actually structured, they prove only the "only if" direction, and conclude the "if" direction because all the machines involved are deterministic. Also, the actual theorem they prove has some C-specific stuff, e.g. the translation of a program which is in an "undefined" state is allowed to do anything. No doubt it would be possible to make a slightly more beautiful system by dropping the requirement that the top and bottom layers must be C and PowerPC...).
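
        Schematically, the "only if" direction they actually prove is a forward simulation; in rough Lean-style notation (a paraphrase with stand-in names - CompCert's real definitions are in Coq and considerably more detailed):

            -- `Plus r` is the transitive closure of r: "one or more steps".
            inductive Plus {α : Type} (r : α → α → Prop) : α → α → Prop where
              | single : ∀ {a b : α}, r a b → Plus r a b
              | tail   : ∀ {a b c : α}, Plus r a b → r b c → Plus r a c

            -- step1/step2 are the step relations of layers L1 and L2, and f is
            -- the translation from L1 machine-states to L2 machine-states.
            def forward_simulation {S1 S2 : Type}
                (step1 : S1 → S1 → Prop) (step2 : S2 → S2 → Prop)
                (f : S1 → S2) : Prop :=
              ∀ s s' : S1, step1 s s' → Plus step2 (f s) (f s')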

        For your purposes, I think you would want slightly more than they prove, for example a function g which is left-inverse to f, so that you can debug crashed L2 states as if they were L1 states. But at a high level, I think this shows that computer science has advanced to a point where this kind of proof really can be carried out: you can implement L1 on top of L2 with provably no "abstraction leakage".

        Steve Yegge once wrote that "the proper use of a static type system is to freeze software into hardware". Of course not every type system will suffice. But CompCert shows that with fancy enough types (i.e. Coq), you can actually get there. 🙂

  • Mike from Shreveport says:

    "Dare to imagine a proper computer – one having an instruction set isomorphic to a modern high-level programming language. .... It would be a joy to behold."

    And it is. Chuck Moore has created many of them in the form of Forth chips. His latest employs 40 Forth cores on a chip, each of which is a fully-capable tiny computer that uses the Forth programming language as its instruction set.

    http://www.intellasys.net/index.php?option=com_content&task=view&id=60&Itemid=75

  • David Galloway says:

    The computers we now use are not descended from 1980s children’s toys but from the very first computers, as von Neumann described in his First Draft of a Report on the EDVAC.

    I like your thinking, but at the same time I see no evidence for your claims, only an almost religious fervour that the current bedrock is poor. I do agree that a lot could be improved by sacrificing little, but your ultimate goal would need to be undertaken and shown to be more practical.

    Unfortunately, efficiency is not a false god in many applications, and although I kind of get Robert Barton's quote that 'Systems programmers are the high priests of a low cult', any attempt at shifting the status quo cannot hand-wave away the need for efficiency or the unchanging fact that computation really works a particular way on real transistors and real chips.

    • Stanislav says:

      Dear David Galloway,

      > The computers we now use are not descended from 1980s children’s toys but from the very first computers, as von Neumann described in his First Draft of a Report on the EDVAC.

      And I suppose you are not descended from your parents, but directly from protozoan slime.

      > ...I see no evidence for your claims, only an almost religious fervour that the current bedrock is poor... ...your ultimate goal would need to be undertaken and shown to be more practical.

      Until you have used a Lisp Machine and studied - and understood - its design documents, everything I have written here will seem like madness. Dismiss it as such and go back to wherever you came from, or do your homework and read the recommended reading.

      > Unfortunately efficiency is not a false god in many applications

      Sure, there are applications that will productively soak up infinitely many CPU cycles: weather prediction, chemical modeling (my own field), codebreaking, etc. This has nothing to do with desktop computing, where most of the cycles are used by MS-Windows to move garbage from one pile to another, and to execute malware. Leaving plenty of CPU time to spare, because in this particular resource, we have a crisis of abundance rather than scarcity. Programmers have suffered from this abundance the way a serious glutton suffers from cheap junk food.

      > computation really works a particular way on real transistors and real chips.

      Until you understand that I am intimately familiar with this process, and still write what I do, and understand why, you will be wasting your time here. Don't make the mistake almost all engineers make: believing that the evolutionary tree of technological possibilities is trimmed by a benevolent gardener, rather than torn up by a mindless storm. Leaving not the most viable and fruitful branches, but a fairly random selection of the thickest.

      Yours,
      -Stanislav

  • Caustic says:

    Some evidence toward the futility of trying to implement a high level programming environment on current architectures is the comical situation of JS in the web browser. They tried to push JavaScript as the bottom level, only to find that the competition between different JITs (unsurprisingly) yielded the best improvements for a statically typeable subset of JS. They added typed arrays to JS while putting WebGL together and then someone got the "bright" idea of using them to implement C's memory model. The eventual result was the specified "asm.js" subset, which is intended as a compile target for C, and which the browsers are starting to explicitly parse behind the scenes and do AOT compilation on. 😉
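
    The underlying trick, sketched in C rather than JS (names and sizes made up for illustration): treat all of the program's memory as one flat array, with a "pointer" being nothing but an integer offset into it - which is precisely the service a JS typed array provides.

        /* Sketch of the "linear memory" idea behind asm.js-style compilation. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        static uint8_t HEAP[1 << 16];    /* the program's entire "address space" */
        typedef uint32_t ptr_t;          /* a "pointer" is merely an index       */

        static uint32_t load32(ptr_t p)  { uint32_t v; memcpy(&v, &HEAP[p], 4); return v; }
        static void store32(ptr_t p, uint32_t v) { memcpy(&HEAP[p], &v, 4); }

        int main(void)
        {
            ptr_t x = 64;                /* "allocate" a word at offset 64       */
            store32(x, 42);
            printf("%u\n", (unsigned)load32(x));   /* prints 42                  */
            return 0;
        }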

  • [...] me: it’s a fundamentally brain-damaged idea. see ‘bedrock complexity.’ [...]

  • [...] stack) and embedded text (shorthand for a list of small integers). When programs fall to pieces, comprehension starts at the bedrock, which for Awelon project is ABC. Every little bit of consistency, simplicity, transparency, and [...]

  • Gerard says:

    I worked for many years on Burroughs machines. There simply wasn't any way to access memory out of bounds. Another architecture killed off, sadly.

    https://archive.org/details/bitsavers_burroughsBkComputerSystemOrganizationTheB5700B6700_10821314
