Computer users are forever being misled, successfully lied to, sold “old wine in new bottles,” bamboozled in myriad ways large and small. Why? Simply because we are, to use the technical term, suckers. Not always as individuals, but certainly collectively. The defining attribute of the sucker is, of course, an inability to learn from experience. And it seems that meaningfully learning from our mistakes is a foreign concept to us. Nay, it is anathema. The darkest heresy imaginable. Something no one would bring up in polite company. Something only spoken of by rabid crackpots, on their lunatic-fringe blogs, during full moon.
We will happily savor the same snake oils again and again, every time the same non-solutions to the same non-problems – because we refuse to learn from the past. And much of the history of personal computing can only be understood in light of this fact. For instance, we appear to have learned nothing from the GIF debacle. Unisys tried to use software patents to impose a tax on all Internet users, and everyone jumped ship from GIF to other graphics formats – ones supposedly out of the reach of patent trolls. As though anything could be safe from the well-funded and litigious while software patents remain legal. So nearly everyone switched to PNG and the like, and the storm died down. And no one learned the real lesson, which is that the whole notion of a Web “format” was a fundamental mistake.
And now format wars rage once more – this time over video codecs. Patent trolls smell the blood and fear of lucrative, juicy prey: YouTube et al. Web users and content providers live in terror, dreading the day when they will have to switch video codecs. As we all know, this is an exceedingly unpleasant process. First, the web browser or server must be lifted on hydraulic jacks. Then its hood is opened, and greasy mechanics grimly crank the codec hoist, lifting the old video engine out of its moorings. The vacant compartment must be scrubbed clean of black, sooty HTTP residue before the new codec can be winched into place.
Wait, this isn’t how your WWW stack works? What do you mean, it’s a piece of software? Surely this doesn’t mean that it is a magical artifact with functionality which can be altered in arbitrary ways at any time? Turing-completeness? What’s that? “This room stinks of mathematics! Go out and get a disinfectant spray.” There’s simply no such thing as a machine which can be rewired on a whim while it runs! Everybody knows that! If you want altered functionality, someone must physically replace the shafts and gears!
If this isn’t how our computers work, why do we act as if it were? The core idiocy of all web format wars lies in the assumption that there must necessarily be a pre-determined, limited set of formats permanently built into a web browser. Or, if not permanent, then alterable only through the use of clunky, buggy “plug-ins.” Of course, this is pure nonsense. And the fact that it is nonsense should have been obvious from the beginning, because the idiocy of laboriously standardized data formats was obvious half a century ago – long before interactive personal computing:
“So here’s a couple of knocks on the head I had over the years. I just want to tell them to you quickly. This one I think you’ll find interesting because it is the earliest known form of what we call data abstraction. I was in the Air Force in 1961, and I saw it in 1961, and it probably goes back one year before. Back then, they really didn’t have operating systems. Air training command had to send tapes of many kinds of records around from Air Force base to Air Force base. There was a question on how can you deal with all of these things that used to be card images, because tape had come in, [there] were starting to be more and more complicated formats, and somebody—almost certainly an enlisted man, because officers didn’t program back then—came up with the following idea. This person said, on the third part of the record on this tape we’ll put all of the records of this particular type. On the second part—the middle part—we’ll put all of the procedures that know how to deal with the formats on this third part of the tape. In the first part we’ll put pointers into the procedures, and in fact, let’s make the first ten or so pointers standard, like reading and writing fields, and trying to print; let’s have a standard vocabulary for the first ten of these, and then we can have idiosyncratic ones later on. All you had to do [to] read a tape back in 1961, was to read the front part of a record—one of these big records—into core storage, and start jumping indirect through the pointers, and the procedures were there.
I really would like you to contrast that with what you have to do with HTML on the Internet. Think about it. HTML on the Internet has gone back to the dark ages because it presupposes that there should be a browser that should understand its formats. This has to be one of the worst ideas since MS-DOS. [Laughter] This is really a shame. It’s maybe what happens when physicists decide to play with computers, I’m not sure. [Laughter] In fact, we can see what’s happened to the Internet now, is that it is gradually getting—There are two wars going on. There’s a set of browser wars which are 100 percent irrelevant. They’re basically an attempt, either at demonstrating a non-understanding of how to build complex systems, or an even cruder attempt simply to gather territory. I suspect Microsoft is in the latter camp here. You don’t need a browser, if you followed what this Staff Sergeant in the Air Force knew how to do in 1961. You just read it in. It should travel with all the things that it needs, and you don’t need anything more complex than something like X Windows. Hopefully better. But basically, you want to be able to distribute all of the knowledge of all the things that are there, and in fact, the Internet is starting to move in that direction as people discover ever more complex HTML formats, ever more intractable. This is one of these mistakes that has been recapitulated every generation. It’s just simply not the way to do it.”
Alan C. Kay: “The Computer Revolution Hasn’t Happened Yet”
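The scheme Kay describes can be made concrete. Here is a toy reconstruction in Python – the names, layout, and sample data are my own guesses, not the Air Force original – of a record that travels with its own procedure table, whose first few slots follow a standard vocabulary, so that a reader need only “jump indirect” through the pointers:

```python
# Toy reconstruction of the 1961 self-describing tape record.
# A record has three parts: a pointer table of procedures (first
# slots follow a standard vocabulary), the procedures themselves,
# and the data. All names here are hypothetical.

def make_record(fields):
    # Part 3: the data, in some idiosyncratic format.
    data = list(fields)

    # Part 2: procedures that know how to deal with that format.
    def read_field(i):
        return data[i]

    def write_field(i, value):
        data[i] = value

    def print_record():
        return ", ".join(str(f) for f in data)

    # Part 1: the pointer table. Slots 0..2 are the standard
    # vocabulary; idiosyncratic procedures could follow.
    return [read_field, write_field, print_record]

# The reader carries no built-in knowledge of the record's format:
READ, WRITE, PRINT = 0, 1, 2
record = make_record(["SGT", "JONES", "LACKLAND AFB"])
record[WRITE](1, "SMITH")
print(record[PRINT]())  # jump indirect through a standard slot
```

The point of the standard vocabulary is that *any* reader can print *any* record, while the idiosyncratic slots let each record type carry arbitrary extra behavior.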
Why exactly does a browser need to ship with any preconceived notions of how to decode video and graphics? Or audio, or text, for that matter? It is, after all, running on something called a programmable computer. Oh, that’s right, because running code which came in from the network in real time is a dirty and immoral act, one which endangers your computer’s immortal soul. Which is why it is never, ever done!
In all seriousness, modern hardware provides more-than-sufficient horsepower to make the idea of replacing all media formats with a “meta format” at least thinkable. Such a thing would consist of a standardized “sandbox,” perhaps one somewhat specialized for media processing. Something not unlike a competently written, non-user-hostile incarnation of Adobe Flash. It goes without saying that this would be a far easier sell were we using a non-braindead CPU architecture – one where buffer overflows and the like are physically impossible. There is, however, no reason why it could not be built on top of existing systems by competent hands.
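To make the “meta format” idea concrete, here is a toy illustration – emphatically not a security design; a real sandbox would be an isolated VM, not Python’s `exec` with a whitelist – in which a media file carries its own decoder and the player supplies nothing but an entry point:

```python
# Toy "meta format": the media file ships with its own decoder;
# the player knows nothing about any particular format. The
# exec-based "sandbox" below is purely illustrative.

def play(meta_file):
    decoder_src, payload = meta_file          # (decoder program, raw bytes)
    env = {"__builtins__": {"len": len, "range": range, "bytes": bytes}}
    exec(decoder_src, env)                    # load the shipped decoder
    return env["decode"](payload)             # the one agreed-upon entry point

# A trivial "codec" that travels with its data: run-length decoding.
rle_decoder = """
def decode(payload):
    out = []
    for i in range(0, len(payload), 2):
        out.extend([payload[i + 1]] * payload[i])
    return bytes(out)
"""

frame = play((rle_decoder, bytes([3, 255, 2, 0])))
print(frame)  # b'\xff\xff\xff\x00\x00'
```

The player here is a pure “common carrier”: swapping RLE for any other codec changes only the string that rides along with the data, never the player itself.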
As for the question of hardware accelerators: FPGAs have become so cheap that there is simply no reason to ship a non-reprogrammable video or audio decoder ever again. Why pay royalties and fatten patent trolls? Let the act of loading the decoder algorithm – whether a sequence of instructions for a conventional CPU, or an FPGA bitstream – be coincident with the act of loading the media file to be played. The latter will contain the codec (or a hash thereof, for cache lookup) as a header. Media player vendors will then cease to be responsible for paying codec royalties – the player hardware or software will have become a “common carrier.” Let the trolls try to collect danegeld from a hundred million consumers!
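The header scheme might be sketched as follows – the container layout and names are hypothetical, purely to show the mechanics: a file leads with either the full decoder blob or just its hash, and the player keeps a content-addressed cache so that each decoder crosses the wire only once.

```python
import hashlib

# Hypothetical container header: ("codec", blob) ships the decoder
# inline; ("codec-hash", digest) names a previously-seen decoder.

decoder_cache = {}  # sha256 hex digest -> decoder blob

def open_media(header, payload):
    kind, value = header
    if kind == "codec":                       # full decoder shipped inline
        digest = hashlib.sha256(value).hexdigest()
        decoder_cache[digest] = value         # remember it for later files
        return value, payload
    if kind == "codec-hash":                  # hash only: cache lookup
        blob = decoder_cache.get(value)
        if blob is None:
            raise LookupError("decoder not cached; re-fetch with full codec")
        return blob, payload
    raise ValueError(kind)

# The first file ships the decoder; later files reference it by hash.
blob = b"decoder-instructions-or-bitstream"
digest = hashlib.sha256(blob).hexdigest()
open_media(("codec", blob), b"frame-data")
dec, data = open_media(("codec-hash", digest), b"more-frames")
assert dec == blob
```

Note that the cache key is the hash of the decoder itself, so two files naming the same codec hit the same cache entry regardless of where they came from.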
At present, working around a software patent is difficult only because switching formats takes considerable work and requires some unusual action on the part of variably literate users. An end to this situation may very well mean a decisive victory over patent trolls – not only because software and hardware makers will be able to skirt accusations of patent infringement by out-pacing their attackers, but also because it will undermine the main source of income sustaining the patent trolls’ day-to-day corporate existence: royalties from proprietary decoders shipped with consumer equipment such as DVRs and MP3 players.
Wipe out the patent parasites, and at the same time fulfill the original promise of the Web by liberating us from the mind-bogglingly idiotic notion of the “browser” and its “formats.” Sounds like a good deal to me.
Of course, it is not necessary to include a given decoder blob with every corresponding media file. Caching can be used to conserve bandwidth. I need not spell out the details of how to do this – it should be obvious to the alert reader. However, such clever tricks are not as necessary as one might imagine. Just compare the bitwise footprint of a typical media codec (implemented on existing systems) with, say, that of a typical YouTube transfer session!
It may even be possible to automate the process of making minor-yet-legally-significant alterations to decoder and encoder algorithms, faster than patent trolls could search for new angles of attack.