Erik Naggum died slightly more than a year ago.
I have never met the man in the flesh, and yet he is the one person who has most often and most radically re-shaped my opinions at their core, solely through the printed word – not only on the subject of computer programming, but on every kind of philosophical question imaginable. Reading his posts convinced me of (among many other things) the necessity of achieving virtuoso-fluency in Common Lisp before attempting to re-invent it – a journey on which I have now traveled for two years (and it is far from over!).
Naggum would have recoiled in horror upon being “reduced to sound-bites”. Even so, I would like to quote and link to a selection of his writings. His choice of medium – the cesspool of late-1990s Usenet – unfairly condemned his words of wisdom to obscurity. Naggum was not a celebrity, he did not participate in the cultish pissing contests of blogging or Open Source; yet I find his thoughts far more valuable than those of any among the supposed luminaries of computing whose insipid drivel litters the Net and the bookstores alike.
Without further delay:
On the Free Software movement:
The whole idea that anything can be so “shared” as to have no value in itself is not a problem if the rest of the world ensures that nobody _is_ starving or needing money. For young people who have parents who pay for them or student grants or loans and basically have yet to figure out that it costs a hell of a lot of money to live in a highly advanced society, this is not such a bad idea. Grow up, graduate, marry, start a family, buy a house, have an accident, get seriously ill for a while, or a number of other very expensive things people actually do all the time, and the value of your work starts to get very real and concrete to you, at which point giving away things to be “nice” to some “community” which turns out not to be “nice” _enough_ in return that you will actually stay alive, is no longer an option.
All of this “code sharing” is an economic surplus phenomenon. It works only when none of the people involved in it are in any form of need. As soon as the need arises, a lot of people discover that it has cost them real money to work for the community and they reap very little benefit from it, because they are sharing value-less services and getting value out of something that people take for granted is hard to impossible. This is unfortunately even more true when employees are considered “free” while consultants are not, so buying the supposed “services” from people who know the source code is not an _exercised_ option.
Just because it is nice to get things for free does not mean it is a good idea to organize anything based on removing the value of those things, but until people _need_ that value, getting stuff for free is _so_ nice that looking to the future is something most people simply will not do, and those who refuse to think about it will also refuse to listen to those who have. Thus they will continue to deplete the value of software to the point where nobody _wants_ to pay for any software, be it of a particular kind or in general. Software development tools are already considered to be give-aways by some people, threatening commercial vendors and those who would like to make money providing software tools to developers.
On how C “feels fast”:
People use C because it /feels/ faster. Like, if you build a catapult strong enough that it can hurl a bathtub with someone crouching inside it from London to New York, it will feel /very/ fast both on take-off and landing, and probably during the ride, too, while a comfortable seat in business class on a transatlantic airliner would probably take less time (except for getting to and from the actual plane, of course, what with all the “security”¹) but you would not /feel/ the speed nearly as much.
On “Worse is Better” (aka the New Jersey Approach):
Much of the New Jersey approach is about getting away with less than is necessary to get the /complete/ job done. E.g., perl, is all about doing as little as possible that can approximate the full solution, sort of the entertainment industry’s special effects and make-believe works, which for all practical purposes /is/ the real thing. Regular expressions are a pretty good approximation to actually parsing the implicit language of the input, too, but the rub with all these 90% solutions is that you have /no/ idea when they return the wrong value because the approximation destroys any ability to determine correctness. Most of the time, however, the error is large enough to cause a crash of some sort, but there is no way to do transactions, either, so a crash usually causes a debugging and rescue session to recover the state prior to the crash. This is deemed acceptable in the New Jersey approach. The reason they think this also /should/ be acceptable is that they believe that getting it exactly right is more expensive than fixing things after crashes. Therefore, the whole language must be optimized for getting the first approximations run fast.
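The regex point above can be made concrete. Here is a minimal sketch in Python (the function names are my own illustration, not Naggum’s): a one-level regex “parser” silently returns the wrong answer on nested input that a real, depth-tracking parser handles correctly. Note that the 90% solution fails without any signal that it has failed.

```python
import re

# A regex that "parses" one level of parentheses -- a typical 90% solution.
shallow = re.compile(r"\((?P<body>[^()]*)\)")

def regex_extract(s):
    """Return the contents of the first parenthesized group, per the regex."""
    m = shallow.search(s)
    return m.group("body") if m else None

def paren_extract(s):
    """An actual (if tiny) parser: track nesting depth and return the
    first complete top-level parenthesized span."""
    depth, start = 0, None
    for i, ch in enumerate(s):
        if ch == "(":
            if depth == 0:
                start = i + 1
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth == 0:
                return s[start:i]
    return None

print(regex_extract("f(g(x))"))  # -> x     (wrong: innermost group only)
print(paren_extract("f(g(x))"))  # -> g(x)  (the full argument)
```

The regex version returns a plausible-looking value with no error raised; nothing in the result tells you the approximation broke down.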
A reply to any programmer who sticks to C because of “speed”:
Ask him why he thinks he should be able to get away with unsafe code, core dumps, viruses, buffer overruns, undetected errors, etc, just because he wants “speed”.
On the value of reading Open Source code:
In the much-lamentable Old Days, when fewer programmers could get at the source code, it had higher quality and was better to learn from. As more people with less commitment to quality and much less attention to detail got involved in writing it, its educational value diminished, too. It is like going to a library full of books that took 50 man-years to produce each, inventing a way to cut down the costs to a few man-months per book by copying and randomly improving on other books, and then wondering why nobody thinks your library full of these cheaper books is an inspiration to future authors.
On markets and “giving people what they want”:
what people “want” is a function of what they learn is available. e.g., do Americans want three-ring binders, and Europeans four-ring binders? or do they want binders and take whatever number of holes they come with? or do they want something that can help them organize their papers and take whatever is available? or do they really want a less cluttered office and ease of storage and retrieval of the information they receive? so, did people really _want_ three-ring binders, or is that just what they could buy? …look at Microsoft. nobody in their right mind would want their buggy shit. what Microshit users want is something entirely separate from the products. Microshit is _not_ user-friendly, but it is marketed as user-friendly, and then other software products, far more user-friendly, are made to look as if they missed the whole point about what “user-friendly” _is_, namely to look cool in nice colors while you’re crashing and destroying the disk, importing a virus from a disk, or letting Word destroy your day’s work.
On the decline of professional equipment for programming:
why are we even _thinking_ about home computer equipment when we wish to attract professional programmers? in _every_ field I know, the difference between the professional and the mass market is so large that Joe Blow wouldn’t believe the two could coexist. more often than not, you can’t even get the professional quality unless you sign a major agreement with the vendor — such is the investment on both sides of the table. the commitment for over-the-counter sales to some anonymous customer is _negligible_. consumers are protected by laws because of this, while professionals are protected by signed agreements they are expected to understand. the software industry should surely be no different. (except, of course, that software consumers are denied every consumer right they have had recognized in any other field.) …they don’t make poles long enough for me to want to touch Microsoft products, and I don’t want any mass-marketed game-playing device or Windows appliance _near_ my desk or on my network. this is my _workbench_, dammit, it’s not a pretty box to impress people with graphics and sounds. when I work at this system up to 12 hours a day, I’m profoundly uninterested in what user interface a novice user would prefer.
On the wastefulness of language proliferation:
There is a simple and elegant answer to this question: Just learn Common Lisp well first. New languages are exciting to people who know mostly new languages, so learn an old language before you learn new ones and get out of the maelstrom that will drown you in ever new languages that add nothing at all except some minuscule additional feature from another language that someone needed to make a whole new language to implement because he did not know (Common) Lisp to begin with. A “new” language that differs from the rest of the crop by one or a couple features is proof positive that both what it came from and what it has become are mutations about to die. There are tens if not hundreds of thousands of such “languages” that people have invented over the years, for all sorts of weird purposes where they just could not use whatever language they were already using, could not extend it, and could not fathom how to modify its tools without making a whole new language. They never stopped to think about how horribly wasteful this is, they just went on to create yet another language called Dodo, the Titanic, Edsel, Kyoto-agreement…
On the “Imponator”:
From the Latin word “imponere”, base of the obsolete English “impone” and translated as “impress” in modern English, Nordic hackers have coined the terms “imponator” (a device that does nothing but impress bystanders, referred to as the “imponator effect”) and “imponade” (that “goo” that fills you as you get impressed with something — from “marmelade”, often referred to as “full of imponade”, always ironic).
On the deficiencies of HTML:
The fundamental deficiency in HTML is that it reduces hypertext and the intertwinedness of human communication to a question of how it is rendered and what happens when you click on it. By giving the world a language in which numerous important concepts can no longer be expressed, these concepts are removed from our world. When music was written down with notes, a lot of music that was very hard to write down with notes vanished, and music became note-friendly. When tasks are automated, the skills that went into the automation vanish and only the skills required to keep the automated solution going remain. When children learn to speak particular languages, their ability to speak other languages deteriorates and vanishes.
On the decline of programming as a serious professional field:
something important happens when a previously privileged position in society suddenly sees incredible demand that needs to be filled, using enormous quantities of manpower. that happened to programming computers about a decade ago, or maybe two. first, the people will no longer be super dedicated people, and they won’t be as skilled or even as smart — what was once dedication is replaced by greed and sometimes sheer need as the motivation to enter the field. second, an unskilled labor force will want job security more than intellectual challenges (to some the very antithesis of job security). third, managing an unskilled labor force means easy access to people who are skilled in whatever is needed right now, not an investment in people — which leads to the conclusion that a programmer is only as valuable as his ability to get another job fast. fourth, when mass markets develop, pluralism suffers the most — there is no longer a concept of healthy participants: people become concerned with the individual “winner”, and instead of people being good at whatever they are doing and proud of that, they will want to flock around the winner to share some of the glory.
On the lack of ads for “Lisp jobs”:
neurosurgery is another field that requires an actual investment and lots of dedication to get into, is really rewarding to those who get good at it, but whose jobs are not advertised in regular newspapers. there is a shortage of neurosurgeons, but very little advertising in the media that the patients read. programming is both similar and different. whether you are a user or a programmer these days is often hard to tell (this has good qualities to it, too), but some programming tasks are still reserved to highly skilled people who are not afraid to take huge risks. ignoring for a moment the power of the American Medical Association, we still wouldn’t see a huge amount of books on neurosurgery for dummies in 21 days or whatever. it’s just plain inappropriate, and it’s intentionally out of people’s reach.
Lisp is somewhat like that. people can get lots of medicines at the drugstore, but they can’t be trusted to carve out a malignant tumor in their child’s brain. all sorts of users can do lots of customization and cool stuff in their “apps”, but they really can’t be trusted to run actual flight control systems, configure the telephone network, write software for video-synchronized magnetic-resonance imaging for brain surgery, or write automated stock-trading systems. at some point, the risk of letting unskilled people do the task becomes too high. that’s when you can’t trust more than 1% of the programmers out there, and a surprisingly large number of them know and use Lisp and tools that can be trusted. (consider an ATM that gets one of those frequent Windows crashes, or a naval warfare vessel that has to cold-boot because a certain display suddenly goes all blue, or any other story in comp.risks that would have been hilarious if it had been a joke.)
On the “Y2K Problem”:
to take but one simple example: suppose you thought of the new millennium when you wrote your application back in 1972 — not only wouldn’t you be invited to the party, those who knew you had done it right from the start and who probably laughed at you at the time would positively hate you now, and they sure as hell wouldn’t tell people about you. and the more stupid they are, the more important it would be to pretend that nobody was smart enough to see the next millennium coming.
On why there isn’t an ocean of “Lisp jobs”:
certain areas of interest are best catered to by adding lots of manpower to their solution, areas which will be “popular” in the most obvious sense of the word, while other areas of interest will not attract people in great spades regardless of the monetary rewards, such as those that ask for significant dedication because of such things as very high risks, skill requirements, entry costs, etc. if you choose one of those areas of interest, no manager in his right mind places silly demands on your programming language of choice and he will probably fire you if you choose “popular” languages subject to vendors who care only about the mass market and not about quality, unless his real plan is to fire you, anyway, only to replace you by someone equally uncritical of his tools. …solution: find areas of interest not invaded by populistic opportunists.
On Lisp’s parentheses:
This is actually no different than any other language. To what extent do C programmers _see_ its punctuation? Commas, semicolons, braces, parens, brackets, etc, all convey meaning immediately without being obsessed about as such. Getting them right can be a significant hurdle as you struggle with the syntax. Once you stop seeing the & and instead think “address”, you have got the hang of it. Perl hackers have mastered this technique. The reason many people who have learned C prefer to continue on the C branch of evolution is that they found the process of learning all that syntax quite _painful_. This is especially true for parentheses in C. You only need them in expressions that cross a fairly high complexity threshold and you can get rid of them by simplifying the expressions. Lisp is chock full of parentheses, with no way to get rid of them — if you simplify your expressions, you end up with _more_ parentheses. The “syntax = pain” equation in most C programmers’ heads translates to a desire to reduce the cost of learning a new language by trying to adapt it to the pain they have already been through. However, once you grok the parentheses in Lisp, they are _not_ painful. In fact, they are so much more liberating and enabling and downright _friendly_ that you would just _love_ to have similar tools available in every other language you use. This is why the “economy of syntax” in C is really a “poverty of syntax” and the perceived verbosity of Lisp translates to a wealth of opportunity. Like so many other things in life, you rarely get _only_ what you optimize for.
What killed micropayments:
The Internet will not become a money machine until the banking industry figures out how to transfer money for free so you can charge USD 0.005 (half a cent) for some simple service like, say, reading a newspaper article you have searched for. With today’s payment system, the cost of the transfer of the funds completely dwarf the cost of the service paid for. Various ways to deal with “electronic cash” have failed (I attended the opening of the First Virtual Internet Bank, but it folded after losing money mainly due to a severe shortage of cooperation from the banking industry), and I think the biggest hurdle is that the banking industry has a negative incentive in letting transactions be cheap or free — they are lending the money that people have effectively lent them out again to other people and only make money if they can have stable capitalization. If transfers were free, people would move money around all day to get better interest rates from wherever, and then the interest rates would drop and probably make borrowing much more expensive. This situation, however, is what acutely prevents the Internet from taking off as a network for paid services. (The other options are to let micropayments accumulate at each site and only to charge or credit credit cards when the amount surpassed certain thresholds on the one hand, which exposes the receiver of the funds to high risk, and prepayment of some small amount that is effectively always unavailable to the payer on the other hand, which exposes the payer of the money to high risk.)
On “CD-R brains”:
Much could be said about this affliction of the mind that causes people to assume that what they do not understand does not matter, that they have reached such a level of omniscience that they no longer need to observe and listen and learn. Having learned enough, some people evidently stop learning altogether. What they learned first is the standard for everything that comes later. That probably the only _truly_ random element in anyone’s life is the order in which they experience things seems not even to be understandable — they somehow believe that the order they run into them is universalizable and important, that first impressions really tell you everything you need to know about something. I have seen people who have the mental capacity only for the transition from “have not experienced” to “have experienced”, and who are unable to make a distinction between their observations and their conclusions, such that they are unable to change their conclusions about what they observed. They walk around like they had CD-Rs for brains.
On programmers with the “poor farmer” mentality:
Common Lisp is a big-city language. Spit out the hayseed, pronounce “shit” with one syllable and “shotgun” with two. You’re not in Kansas, anymore. C is the language of the poor farmer village where the allocation of every seed and livestock matters, where taxes are low and public service non-existent. Appreciating the value of a large language is evidently hard for many people, just like many people find themselves miserable in the big city and go to great lengths to create a small village for themselves in the city where everything is like it used to be where they came from. Common Lisp can accommodate people who want to program in any old language and re-create what they are used to, but if they want to get the most out of it, the only way to do it is to adapt to the language and accept that somebody else may be better than you are at designing languages.
On market fragmentation:
…when [sharing source] happens in cooperation with the owner of the code, it is good. when it causes a myriad of independent fixes to the same problem and branches into a variety of incompatible “products”, we have, instead of causing people’s creativity to be employed usefully, failed to contain the biggest problem inherent in the free market, that it costs too little to fragment the market. look at Unix. it was essentially open source before anyone invented the term, and that caused a large number of ways to solve the same problem and left the market to sort them out, which they didn’t (the market never will sort out bad quality in anything but the single most important property of the products), and Unix got itself into a position where some horribly demented crapware from Microsoft could compete with it and fool a whole bunch of people for a while.
On how sharing source may discourage design flexibility:
by giving people something they don’t need, but which covers a need by sheer accident, they don’t discover the solution to their _real_ needs. people don’t need source code to modify if the system doesn’t work, they need working systems. people don’t need source code to add functionality to some static function design, they need flexibility in the design. the more we give random people unrestricted access to source code, the more we will design software that is expected to be modified just that way, and the less we will design software that is not intended to be modified by random users at the source level, and the less we will design systems that are actually able to adapt to people without changing the code.
On the idiotic fallacy that “a good programmer shouldn’t care about language choice”:
Using such inferior languages is like asking a chef who could have done wonders with any kind of raw materials, to use a dirty kitchen, a broken refrigerator with food that is about to die a second time, broken tools, and brownish tap water that tasted of swamp land. His first task would be to clean up the place. Creating food in there would be the furthest from his mind. That’s how I feel about Perl and C++. I prefer to call it “good taste”, not “tunnel vision”. I don’t like rap, either. Call me intolerant.
On why Common Lisp should not become a mass-market language:
People whose only distinguishing mark is that they are not different are fundamentally inconsequential. They will change when people around them change, insofar as they do not believe that they have a /right/ not to change because they think being just like everybody else is a /virtue/. In a world where almost everything except human nature has changed so much that an 80-year-old must have been /really/ mentally active all his life to be indistinguishable from an Alzheimer’s patient, the kind of people who have a strong desire /not/ to think become not just a liability on their immediate surroundings, they force a change in how civilization can sustain itself when these people think they should have some power, and indeed /have/ some power qua mass consumers, where everybody is in fact just like everybody else and where being a minority costs real money if not convenience.
So why do I not want Common Lisp to be a mass market language? Because this kind of people will want to exert influence over something that is good because it has been restricted to the “elite” that has made a conscious choice to be different from /something/, indeed to /be/ something. The very word “exist” derives from “to step forth, to stand out”. To be just like everyone else is tantamount to not exist, to leave not a single mark upon this world that says “I made this”. Likewise the people who form the mass do not want those exceptions, the minority that has decided to stand out, to /exist/. All the brutality of the mass hysteria against that which threatens the meaningless lives of those who do not wish to have any meaning to their lives illustrate with which vengeance meaningless people will fight the requirement to think, to form an opinion, an idea, a thought of their own, different from what everybody else have already said they would approve of. People who program in the main-stream languages because they are main-stream languages have yet to form the prerequisite concepts to say “I want to program in C”. They have not yet developed an “I” who can actually want anything on its own.
On judging books by their weight and typography:
I tend to measure things by the weight and thickness of the paper and the spaciousness of the typography of the books you find on the market about something. I have seen books on Visual Basic with the same page count as books on C but with 3 times the shelf space. I have seen books on HTML with the same amount of content as books on Ada with 9 times the volume. I have come to believe that large print, thick and heavy paper, and wide margins and oversize leading is indicative of the expected intelligence of the reader. If the reader is expected to be unable to concentrate or experiences mental fatigue just by looking at a page of text without oceans of whitespace, the material is probably geared towards people whose reading skills plateaued before they entered high school.
Compare children’s books and books on Web Duhsign or other X-in-21-days books. If the reading level of a specification is below college level, chances are the people behind it are morons and the result morose. If typography and reading level are comparable, manual-inches is probably a good measure, but a children’s specification for something may be thinner than a solid work of engineering that it would actually take less time to grasp because it is so hard to sink to the level of children who need to be told things over and over and usually do not remember subtle differences from repetition to repetition like reasonably smart people do.
Reply to someone who complained about the cost of ANSI Standard documents:
with respect to the lives and the fortunes they save, they’re dirt cheap. with respect to what they make possible, they’re also dirt cheap. note that ANSI standards also cost way too much compared to toilet paper, and they’re pretty bad quality as toilet paper goes, too. I recently bought a fountain pen. it cost the equivalent of about 1500 throw-away ball pens. 1500 ball pens would have made me very frustrated, but this sleek, elegant pen made me happy. I also choose Common Lisp.
On how certain languages discourage conscientious programming:
you become a serious programmer by going through a stage where you are fully aware of the degree to which you know the specification, meaning both the explicit and the tacit specification of your language and of your problem. “hey, it works most of the time” is the very antithesis of a serious programmer, and certain languages can only support code like that. over time, you get a good gut feeling for the failure mode of your assumptions, and when you can trust that you are unlikely to make grossly invalid assumptions, the dangers that people run into in the absence of that trust vanish in a puff of standardization: it’s the kind of trust you buy from somebody who claims to be conformant. non-conformance is about violating this sense of trust. certain languages support serious programmers, and others don’t. e.g., I don’t think it is at all possible to become a serious programmer using Visual Basic or Perl. if you think hard about what Perl code will do on the borders of the known input space, your head will explode. if you write Perl code to handle input problems gracefully, your programs will become gargantuan: the normal failure mode is to terminate with no idea how far into the process you got or how much of an incomplete task was actually performed. in my view, serious programmers don’t deal with tools that _force_ them to hope everything works.
On labor unions:
…although some would have you believe that people can be forced to accept anything under threat of becoming destitute if they don’t. the problem is not that they would become destitute, but that they want something so badly they will accept the worst possible conditions because there’s something at the other end to hope for. some people are good at defrauding people of their present and future in this particular way, but I wonder why so many fall for them. at issue is why people “invent” solidarity at the wrong time and accept absolutely everything as long as they are alone, but speak up only when they think they can gang up on others, and especially why they have to wait until things are really, really horrible before they react. this is the stuff I don’t understand. historically, labor unions arose when people had gotten a taste of a different lifestyle and were willing to pay a lot more for their basic livelihood and had gotten into a fix they couldn’t get out of — because they had accepted the unacceptable to begin with. accepting something you have to form a labor union to fight after the fact only tells me that people were acting against their own best (or even good) interests for a long time. I don’t see any rational, coherent explanation for this sort of behavior in humans, but it’s all over the place.
On how the corporate dead can prey on the living:
companies that go bankrupt are a danger to healthy competition. They are able to make their creditors and shareholders pay for their losses and bad management and then to start anew with assets that they essentially got for free, quite unlike the competition that has not gone bankrupt, who have to pay full price for their assets, but quite similar to how their customers have wanted their products, for too little money.
Reply to someone who loudly took offense at the idea of naming a compiler “Stalin”:
the only _important_ property of evils of the past is that they not be repeated in the future, in any way, shape, or form. by refusing to accept humor about past evils, you lend them an importance they do not deserve and which will ultimately destroy your _own_ future, while those of us who can distinguish what we learn from the lessons where we learn it, can hope to find a future that doesn’t need to have reruns of past evils with new names as the only difference just because some _morons_ can’t learn from the past.
Today I learned that Naggum’s personal library has been cataloged and put on sale.