The Hardware Culture, or: What They Build, Works! Can We Say the Same?

Yossi Kreinin throws down the gauntlet to all those who believe that a CPU ought to be designed specifically around the needs of high-level languages:

Do you think high-level languages would run fast if the stock hardware weren’t “brain-damaged”/“built to run C”/“a von Neumann machine (instead of some other wonderful thing)”? You do think so? I have a challenge for you. I bet you’ll be interested….
My challenge is this. If you think that you know how hardware and/or compilers should be designed to support HLLs, why don’t you actually tell us about it, instead of briefly mentioning it?


In his excellent follow-up to the challenge, he argues – fairly convincingly in my opinion – that a move toward “high-level” CPUs would come at substantial cost, and lead to no miraculous speed-up of dynamically-typed code. However, I still believe that ultimately, today’s architectures ought to be consigned to the junk heap in favor of a resurrected multi-GHz Lisp Machine descendant. The best argument for this is hinted at in another one of Kreinin’s posts:

How many problems did you have with hardware compared to OS compared to end-user apps? According to most evidence I got, JavaScript does whatever the hell it wants at each browser. Hardware is not like that. CPUs from the same breed will run the user-level instructions identically or get off the market. Memory-mapped devices following a hardware protocol for talking to the bus will actually follow it, damn it, or get off the market. Low-level things are likely to work correctly since there’s tremendous pressure for them to do so. Because otherwise, all the higher-level stuff will collapse, and everybody will go “AAAAAAAAAA!!”

Hardware really does tend to be the product of a more… adult engineering culture than software. I don’t believe I’ve ever encountered a piece of computer hardware which was a steaming turd of sheer dysfunction (non-deterministic behavior, illogical/undocumented operation, simple defectiveness) in the same manner the average piece of software of any substantial complexity almost invariably is. On further contemplation, I can think of a handful of products which might qualify – yet in each case the fault lay in firmware – once again the foul excreta of software engineers.

Bridges are expected to stand up, and on the “first try,” even! Planes are expected to stay aloft. And yet programmers seem to be content with forever competing in the engineering version of the Special Olympics, where different, “special” standards apply and products are not expected to actually do what they say on the box – at any rate, the idea of offering a legal warranty of proper function (or even of not causing utter disaster, in the manner customary in every other industry) for a software product is seen as preposterous.

I see this as a convincing argument for “silicon big government.” Move garbage collection, type checking, persistence of storage, and anything else which is unambiguously necessary in a modern computing system into hardware – the way graphics processing has been moved. Forget any hypothetical efficiency boost: I favor such a change for no other reason than the fact that cementing such basic mechanisms in silicon would force people to get it right. On the first try – the way real engineers are expected to. Yes, get it right – or pay dearly.

This entry was written by Stanislav, posted on Sunday, March 8, 2009, filed under Hardware, Hot Air, ModestProposal, NonLoper, SoftwareSucks.

14 Responses to “The Hardware Culture, or: What They Build, Works! Can We Say the Same?”

  • I think a LispM could make a great desktop/server because it would *force* safety and security. I think that GC, type checking, etc. ultimately converge to correctness in software implementations (VMs). I think the problem isn’t correctness so much as it is optionality. The big problem with today’s commodity hardware is that nothing prevents you from writing in C or C++.

    On the other hand, not being a believer in big government – silicon or not – I tend to think along the lines of “if people prefer low cost and low security, so be it, even though I strongly disagree” (as opposed to something like “I wish they weren’t so stupid and made the right choice”, which eventually tends to imply “I wish someone made them behave properly”, which tends to imply non-silicon big governments.)

    I claim that HLLs in fact have a cost overhead measured in operations per (second*mm^2), no matter where this cost is paid – hw or sw. From that point it’s a matter of choice, and I still prefer HLLs.

  • Jason says:

    Anyone who hasn’t encountered hardware with “non-deterministic behavior, illogical/undocumented operation, simple defectiveness” just hasn’t encountered much hardware. I have encountered all of these in embedded processors.

    There are companies that make very low defect software for certain niche industries, and it is orders of magnitude more expensive.

    Also re the bridge analogy: Firstly, bridges are fairly simple, and people still occasionally screw them up when they encounter something new (see Tacoma Narrows). Secondly, if one person could build a bridge in a week for little cost (and then could make a change in minutes to the bridge) bridges would be built by iterative changes and testing, just like software, and quality would be largely related to quality of testing (just like software).

    • Stanislav says:

      Dear Jason,

      >Anyone who hasn’t encountered hardware with “non-deterministic behavior, illogical/undocumented operation, simple defectiveness” just hasn’t encountered much hardware. I have encountered all of these in embedded processors.

      Certainly. Consider, for instance, Intel’s F00F bug. But if hardware defects were as common as software ones, computers would be unusable. Also remember that I have carefully asked you to consider only *hardware* defects. Microcode is arguably software, albeit mostly non-upgradeable software.
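      (For the curious: the F00F sequence itself is just four bytes, the LOCK prefix applied to CMPXCHG8B with a register operand, an encoding the architecture forbids. A minimal, purely illustrative sketch in Python; the helper below is hypothetical and merely labels the bytes, it does not execute them:)

```python
# The Pentium "F00F" bug in byte form: a LOCK prefix (0xF0) applied to
# CMPXCHG8B with a register operand (0x0F 0xC7 0xC8).  CMPXCHG8B requires
# a memory operand, so this encoding is invalid; on unpatched P5 cores,
# executing it hung the CPU even from unprivileged code.
F00F = bytes([0xF0, 0x0F, 0xC7, 0xC8])

def contains_f00f(code: bytes) -> bool:
    """Hypothetical helper: check whether a byte string contains the sequence."""
    return F00F in code

# A scanner like this could in principle vet untrusted code for the pattern
# (the real-world workaround was an OS-level IDT page trick, not scanning).
print(contains_f00f(b"\x90\xf0\x0f\xc7\xc8\x90"))  # prints True
```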

      These days, firmware is approaching much the same levels of brokenness as other software. My wireless modem, for instance, crashed several minutes ago. This in no way disproves my point.

      >There are companies that make very low defect software for certain niche industries, and it is orders of magnitude more expensive.

      Now, why do you suppose it is so expensive? And does the process of cranking out software that isn’t shit have to be quite so labor-intensive? If so, why?

      >if one person could build a bridge in a week for little cost (and then could make a change in minutes to the bridge) bridges would be built by iterative changes and testing, just like software, and quality would be largely related to quality of testing (just like software).

      I like to think that, in such a world, ferries (with non-computerized engines!) would remain popular.

      Yours,
      -Stanislav

        • C. Scott Ananian says:

          Yeah, I’m with Jason — it sounds like you just haven’t encountered much hardware at a deep enough level. Look hard inside the Linux kernel (for example) and you’ll find all sorts of documented hardware breakage, with software workarounds of various degrees of grottiness. Sometimes the workaround is to (say) completely turn off features. A particular multi-core embedded ARM processor I know of has totally broken multithreading, so it’s being run single-core. The Intel Poulsbo chipset has to run in Linux with the “no-pentium” flag, because its hyperthreading cache coherence had race conditions that could only be turned off by page-table features introduced back in Pentium days.

          Check out the errata for any processor of any sophistication and you’ll find a long list of won’t-fix bugs: power management that doesn’t, audio subsystems that don’t work as designed, etc., etc. Lots of cut-and-paste bugs, too, where some hardware unit was lifted from a previous design and not-quite-connected correctly to the new chip. Poulsbo’s GPU was bought from another company, whose chief designer then decamped for another firm, so no one at Intel had any clue how to make its drivers work. Hardware is just as bletcherous as software, trust me, and there are plenty of hardware engineers I certainly would not trust to design a bridge.

        • Stanislav says:

          Dear C. Scott Ananian,

          I suspect that most of the examples you listed could be traced to buggy firmware/microcode – that is, ultimately software (in the von Neumann sense.)

          On the other hand, dysfunction flowing from erroneous or entirely missing hardware documentation isn’t really a bug in the usual sense of the word. It is a political problem.

          Yours,
          -Stanislav

  • This is all a bunch of crap.

    First of all, I met Yossi’s challenge, and like the dirty lying self-promoting sanctimonious hypocritical fuck he is, he just blithely ignored it. Because MY idea didn’t meet with his narrative.

    Second of all, the reason software breaks and hardware doesn’t isn’t because of pressure, it’s because anything that COULD break is pushed UPWARDS into software. Firmware isn’t the product of incompetent software developers, it’s the product of scared hardware engineers.

    The reason software breaks more than hardware is simply that software’s more complex than hardware. There really is fuck-all complexity in memory chips, or even in CPUs, since they always reuse the same elements that have been gone over a thousand times already and even have sophisticated expert systems and modules to lay out everything.

    • Dave Cameron says:

      CPUs and the other silicon in computers are very complicated devices. The difference is all in methodology, that is, the expert systems and modules that you allude to. In hardware design, rigorous correctness proofs are written and then evaluated against the actual design. The real reason hardware and other engineered things are more reliable than software is that they are engineered; software is not.

      There is a small amount of software for engineering software out there, but just about no one uses it, and it’s usually not very good and not easy to use. A large amount of software is written in text editors/word processors that provide no assistance to the programmer besides syntax highlighting and possibly autocomplete. Not a single aircraft is designed in a text editor. Not a single CPU is designed in a text editor (and if it was, it never made it to market).

      Programmers today are proud of overcoming the adversity of creating a program from scratch using nothing more than a text editor; they relish the difficulty, and curse and condemn anyone who would suggest that we need engineering platforms that do 99% of the work for them. Where they do reuse things, it is by ad hoc insertion of blocks of code (libraries) with little or no proof of correctness of either the module or the interface, and never by the mechanism used by every other kind of engineer: a correct-widget-generator. There is no formal mechanism in software tools for ECN (engineering change notification), no place to provide rationale for why a feature was added, or even a mechanism to define what constitutes a feature. Instead, code changes are handled almost exclusively by primitive tools that don’t even understand the language being used: diff, and the multitude of VC systems built on top of it.

      Simply put, software sucks, because the tools used to create software suck, and software developers as a majority are damn proud of it.
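      The complaint about diff is easy to demonstrate: line-oriented tools flag purely cosmetic rewrites as changes, because they have no notion of the language’s semantics. A minimal sketch in Python using the standard difflib module (the example function is invented for illustration):

```python
import difflib

# Two versions of the same function: only the formatting differs,
# the behavior is identical.
before = [
    "def area(w, h):",
    "    return w * h",
]
after = [
    "def area(w, h):",
    "    return (w *",
    "            h)",
]

# difflib, like diff, compares lines of text; it has no notion of Python
# semantics, so the purely cosmetic rewrite registers as a real change.
diff = list(difflib.unified_diff(before, after, lineterm=""))
changed = [line for line in diff
           if line.startswith(("+", "-"))
           and not line.startswith(("+++", "---"))]
print(len(changed))  # prints 3: lines flagged despite identical behavior
```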

      • Stanislav says:

        Dear Dave Cameron,

        > CPUs and the other silicon in computers are very complicated devices.

        Complexity can be usefully divided into the compartmentalizable and the non-compartmentalizable. For instance, the physics which describes exactly why the materials comprising a car engine behave the way they do is quite complicated, and yet your mechanic needs to know almost none of it. This is because nothing short of direct involvement in thermonuclear war can ever re-arrange your car’s elementary particles in a way that would require an advanced understanding of physics to correct. Thus I would argue that the complexity of a CPU is often high, but is well-compartmentalized. And that of software is uncompartmentalized, and arguably un-compartmentalizable.

        > In hardware design, rigorous correctness proofs are written, and then evaluated against the actual design.

        No. Hardware errata abound. And proofs of correctness are rare not solely due to their expense, but due to their utter uselessness in most practical situations. In short, you cannot transition from the informal to the formal by formal means. There is no way to prove that the assumptions underlying such a “proof” (the use of the word in application to physical computing systems is a serious abuse of mathematical terminology, IMHO) are physically real.

        > The real reason hardware and other engineered things are more reliable than software is that they are engineered, software is not.

        No. Bureaucracy is not the answer. Understandability is the answer. Engineers understand the steel beams of bridges (and VLSI subcomponents) because those systems are compartmentalizable – they consist of well-defined, well-specified standard parts whose behaviours compose non-emergently. Not so with software.

        Where is the proof of correctness for your doorknob? And yet it turns. “Proofs of correctness” are fascinating to those who wish to pretend that human minds are fungible and hate the idea that complex systems ought to be designed by systems designers: people who are capable of holding a complex system in their minds in its entirety, understanding it the way you understand a pen or a doorknob.

        > A large amount of software is written in text editors/word processors that provide no assistance to the programmer besides syntax highlighting and possibly autocomplete.

        Just about all of it. And, at present, those who think that they are using intelligent editors are almost always working around the crippling inadequacies of a braindead language.

        > Not a single aircraft is designed in a text editor. Not a single cpu is designed in a text editor (and if it was, it never made it to market).

        The best way to design aircraft appears to be… pen-and-ink.

        I’ve been informed that modern CPU designs are nearly 100% HDL.

        > Programmers today are proud of overcoming the adversity of creating a program from scratch using nothing more than a text editor, they relish the difficulty, and curse and condemn anyone who would suggest that we need engineering platforms that do 99% of the work for them.

        > Where they do reuse things, it is by ad hoc insertion of blocks of code (libraries) with little or no proof of correctness of either the module or the interface, and never by the mechanism used by every other kind of engineer: a correct-widget-generator.
        > There is no formal mechanism in software tools for ECN (engineering change notification), no place to provide rationale for why a feature was added, or even a mechanism to define what constitutes a feature, instead code changes are handled almost exclusively by primitive tools that don’t even understand the language being used: diff, and the multitude of VC systems built on top of it.

        Bureaucracy is not the answer. Waterfall methodology and other planner-isms are not the answer. The answer is short OODA loops and design by strong-willed, polymathic individuals who truly understand what they are doing, as deeply as you understand arithmetic. And who have tyrannical control over every aspect of the system, down to the transistors.

        > Simply put, software sucks, because the tools used to create software suck, and software developers as a majority are damn proud of it.

        Software sucks because the foundations of electronic computing, as presently understood, suck. Everything else follows from this.

        Yours,
        -Stanislav

  • Aodh says:

    The intangibility of the building blocks might also be of issue here:

    - Hardware – predictable components, well defined, well understood, physical, defined tolerances.

    - Software – unpredictable components (who built it / how did they test it?), not necessarily well defined (docs?), changing (deprecated APIs) and not physical.

    This is what differentiates software from other engineering disciplines such as civil engineering, with its concrete blocks, metal rods, etc.

  • Benjohn says:

    I read an interesting claim yesterday that “complex adaptive systems” tend to have a conservative, dependable, slowly changing core, and a highly dynamic, rapidly changing and evolving, adaptable, periphery. The author was trying to apply this to formulate a new political position joining the radical left and radical right, but … It seems to me hardware and software are a bit like this. You bake hard and nail down the parts that absolutely must not fail, and the rest you build from polystyrene, sticky tape, whimsy and dreams.

    I think computing systems, hardware, paradigms, languages, modelling tools, the whole of Computer Science is very much like an evolving system. I like a lot of non-mainstream stuff, and I get frustrated by its marginality, but I think raging about it is a bit like going back in time and being cross that the other wee beasties in the pond haven’t got any mitochondrial DNA yet, or are still using RNA. They’re going to get it when it’s good and ready, when it’s the cheapest systemic solution, plus some uncertainty.

  • John Wright says:

    Writing software is similar to writing a movie script and can benefit from lessons learned in that creative endeavor. The software writer is both the script writer and the director. As script writer, he or she should strive for coherence and work iteratively, until the script reads better and better. As director, he or she should strike a balance between tyrannical control and flexible interplay among all the component actors, allowing for some cool, unexpected but useful results. I have an idea for a software scripting tool where software could run an English script and play it back to the software writer using logging statements, both validating the flow of the software and making explicit the way the software works, instead of keeping that script hidden in the engineer’s head.

  • jbm says:

    JavaScript is a well defined language which works reliably in all browsers, and has done so for a decade. What hasn’t worked well is the APIs you use JavaScript to talk to, because: 1) the web evolves continuously, 2) Microsoft had no incentive to fix IE due to their monopoly share of the market.

    You can’t call for sanity and then simultaneously make dumb, sweeping generalizations. JS has the dubious honor of being a language nobody feels they actually have to learn.

    • Stanislav says:

      Dear jbm,

      > What hasn’t worked well is the APIs you use JavaScript to talk to…

      Well yes, a JavaScript program with no I/O will work quite reliably in all browsers. That is, it will reliably do… nothing at all. It is a waste of time to consider a language apart from its natural habitat.

      > because: 1) the web evolves continuously…

      Here lies the issue. The Intel architecture has also “evolved continuously” since 1978. And yet code written for the 8086 will execute just fine on my current desktop. MS-DOS will boot on a modern Intel/AMD machine, etc.

      It is entirely possible to have architectural improvements which do not break everything on a weekly basis, but this is not the case with the idiot Web. Example: the latest version of Chrome will render black screens for about 3%-5% of the pages I’ve visited. If this is “evolution,” then give me stagnation any day of the week.

      The continued popularity of Flash is proof that Web “standardization” and the HTML/CSS/JS stack remain a crock of shit. People will put up with a proprietary ball of crud just so they can write something which actually works (and works identically) for every user.

      The idiocy of the “software culture” as compared to the hardware world lies not in breaking backward-compatibility, but in the habit of doing so silently and frequently – and through sheer slovenliness, with the questionable “evolution” being a mere excuse.

      Yours,
      -Stanislav
