RIP jmc.

A giant fell.

And circus pyramids of idiot midgets make cargo-cult noises.

Sniveling trendoids, have you run out of crocodile tears yet?

And his "Maxwell's Equations of Software"?  How many have even heard of, much less understood them?

They will be remembered long after the last idiot shiny toy maker has closed up shop.

They will be remembered long after all traces of the Great Chiefs of the Iron Age are dust.

They are destined to be forgotten and re-discovered, perhaps many times.  Because mathematics is eternal.

Did you know that there once lived a Soviet John McCarthy? Who remembers?  No one but me? No matter.

But yes, there did indeed.  Because mathematics is "one for all of us, like victory."  It never goes away.

Generations of time-servers will live and die, but the forgotten jewels will wait patiently for the next explorer, the next brave soul who is not afraid of Upsight.

The universe keeps its most beautiful jewels in a safe that most of us cannot crack or even see.  But JMC could.  And did.

He was worth a billion smarmy hucksters.  Ten trillion superstitious do-gooders.  A googol of Googles.

But now he is gone, and you and I are here.

This entry was written by Stanislav, posted on Tuesday, October 25, 2011, filed under Lisp, Mathematics, NonLoper, ShouldersGiants.

13 Responses to “RIP jmc.”

  • derp says:

    Expect strife and trash-talk in this month's waiting room of the reincarnation bureau. I can't imagine a single topic on which John McCarthy, Dennis Ritchie and Steve Jobs agree.

  • Mathnerd314 says:

    I found that site before, but could never figure out what was there because all of the actual information was in Russian. Is there a recent description in English somewhere?

    How goes development? Did I miss the post that said you'd abandoned it?

    • Stanislav says:

      Dear Mathnerd314,

      Re: Refal: material in English is scarce but can be found with some effort. Here’s a start.

      Re: Loper: development has not been abandoned, because I am still alive.

      I’ve been doing a from-scratch system design, currently prototyping on a Xilinx Virtex-5 FPGA. If you’re curious why, check out the posts in the Hardware category. The short version: x86 hardware is an enemy to anyone trying to write a sane OS.

      The demo board is an off-the-shelf unit of moderate cost, described here.

      Unfortunately, for the past year or so I have been working mostly on entirely unrelated things so that I could eat.

      Yours,
      -Stanislav

      • Aneesh Mulye says:

        I want your opinion on something - specifically, about an alternative approach to the same goal.

        What if someone could come up with what I call a provable-leakproof-realtime-linear reduction from the x86 to, say, the LISP Machine, or the Warren Abstract Machine? The Transmeta Crusoe did this in the other direction; it tried to maintain compatibility with the x86 by translating x86 instructions into its own instruction set. Not having Intel's market share, of course, it lost. (I'll explain the importance of the four properties later.)

        What if, however, you could build a software layer on top of the x86, but beneath everything else, that provided its own abstractions of all hardware, and if this reduction had the four properties I mentioned above (provable+leakproof+realtime+linear)? You would then be independent of the x86, and what happened from that point on would be irrelevant to you or software built for this 'new machine'.

        The four properties:
        1) Provable: you have formal proofs that your mapping works. You also have proofs that the other three properties hold.
        2) Leakproof: as you are acutely aware, leaky abstractions, whether just one layer down or at the bedrock, place a limit on how tall a building can be built on top of them. If, however, your implementation of the LISP Machine on top of the x86 were absolutely uncompromising in its reduction of the underlying hardware, and the programmer could not distinguish between a genuine LISPM and your reduction of the x86 (short of using back-channels, such as timing different operations), you would have managed to completely separate the two; the underlying silicon is rendered irrelevant to anyone above that layer.
        3) Realtime: only a linear speed loss, fixed at some upper bound. This is for the system as a whole. For individual instructions, we have (4).
        4) Linear: your LISPM instructions are not, and cannot be, slower than the corresponding base instructions by more than some fixed linear factor. (If there is an instruction which requires an algorithm to execute, the bound must be shown. Garbage collection - the cost has to be amortized, and even then an upper bound has to be shown on just how much the single hit at the end can be. And so on.) A rough sketch of this bound follows at the end of this comment.

        Would such a reduction, and its implementation in actual software, do the job?

        (This approach is inspired by RMS' ju-jitsu use of copyright; the stronger copyright gets, the stronger the GPL becomes. The parallel with the x86 and the new architecture should be obvious. And once you have this reduction, and you have a system built on top of it, what is to prevent someone from going and swapping the hardware that underlies this system from an x86 reduced to the LISPM, to an actual, multi-gigahertz, shiny-new LISPM?)
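
        To make the bound in (3) and (4) concrete, here is a minimal sketch in Common Lisp; the function name and the cost-pair representation are my own illustration, not part of any existing system:

            ;; "Linear" here means: every emulated LISPM instruction costs no more
            ;; than a fixed constant multiple C of the native work that implements it.
            (defun linear-bound-holds-p (costs c)
              ;; COSTS: a list of (emulated-cycles . native-cycles) pairs, one pair
              ;; per instruction; C: the claimed constant factor.
              (every (lambda (pair) (<= (car pair) (* c (cdr pair))))
                     costs))

            ;; e.g. (linear-bound-holds-p '((12 . 4) (30 . 10)) 3)  =>  T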

        • Stanislav says:

          Dear Aneesh,

          You have more-or-less described my original goal.
          However, you have not fully understood the message of my Bedrock Complexity essay.

          Timing is a meaningful abstraction leak.

          Regardless of mathematical tricks, if there is an x86 layer below the Lisp, there is an incentive for programmers to pry the cover loose and optimize at that level. That incentive simply must not be there. There is no point in closing "the gates of hell" if they won't stay closed.

          On top of this, an x86-LispM emulator could hardly be described as "multi-GHz" if you look at anything other than the clock crystal. In a true Lisp Machine, type bit checks, array bounds checks, etc. are never optimized away, as they are under high-quality x86 lisps (SBCL, for example.) The checks are always performed -- because on a genuine LispM they cost nothing. Try running an existing x86 lisp in interpreter mode to see a very optimistic estimate of the performance a true LispM emulator would show on the x86. Now take a tenth of that, to see what would happen if you were to smooth out the timing variations in order to remove the incentive for programmers to micro-optimize and re-introduce x86ism.
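
          To make this concrete, here is a small illustration (a sketch only, nothing to do with Loper's internals) of how the checks on an x86 lisp such as SBCL exist only when explicitly paid for, and are routinely declared away:

              ;; With full safety, SBCL compiles in the type and bounds checks;
              ;; with (safety 0) and type declarations, it elides them.
              (defun aref-checked (v i)
                (declare (type (simple-array fixnum (*)) v)
                         (type fixnum i)
                         (optimize (safety 3)))
                (aref v i))

              (defun aref-unchecked (v i)
                (declare (type (simple-array fixnum (*)) v)
                         (type fixnum i)
                         (optimize (speed 3) (safety 0)))
                (aref v i))

              ;; Compare (disassemble #'aref-checked) with (disassemble #'aref-unchecked)
              ;; to see the instructions the x86 spends on every single access.
              ;; For the "interpreter mode" estimate mentioned above, in SBCL:
              ;;   (setf sb-ext:*evaluator-mode* :interpret)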

          Your system would "get cancer" the day you released it.

          See Naggum re: CLISP (last paragraph.)

          Lastly, complexity is a pollutant. Complexity pollution is to your brain as air pollution is to your lungs. It makes everyone stupider by increasing the cognitive load needed to fully understand a system. In order to make genuine progress in computing possible, this complexity has to be removed, rather than merely papered over. Especially if the paper is likely to be peeled off by cycle-shavers, as I described above.

          The epicycles have to go.

          P.S.: Number four is false. Multiple memory allocators / garbage collectors on top of one another (SBCL on Linux, for example) result in impedance mismatches, leading to slowdowns far in excess of linear.

          Yours,
          -Stanislav

  • Chris Smith says:

    A fitting tribute for someone worthy of one. Thank you for making my day brighter.

  • PhilM says:

    They will be remembered long after the last idiot shiny toy maker has closed up shop.

    Yes, they will. As soon as the current generation of "i" lovers evaporates, what will be left remembered is the true gift of such greats as jmc and dmr. How unfortunate that our collective intelligence is so low that we worship the "successful" but care not for those who advanced our understanding of the universe.

  • jhuni says:

    Dear Stanislav,

    I have noticed that you haven't been making much progress developing loper. Is it possible this is because modern development tools (e.g. text editors) suck? Would you agree that we should have decent development tools before attempting to develop an OS?

    Recently, I have been thinking of designing (but probably not implementing) a Lisp-compatible multitouch visual programming system. I no longer think it's sensible to develop an OS, a desktop environment, or basically anything else without first having access to effective development tools.

    • jhuni says:

      * I meant this to be a reply to your above comment [October 26, 2011 at 2:48 pm] on the state of loper development.

    • Mr Foo says:

      I'm strongly of the opinion that, once you're past bootstrapping, there should be no disconnect between "development tools", "user environment", and "OS".

      Simon

      • Stanislav says:

        Dear Mr. Foo,

        I agree completely.

        Yours,
        -Stanislav

      • jhuni says:

        Dear Mr. Foo,

        There will always be an input/output dichotomy. Today most computer systems are concentrated on outputting information which is observed by users, and with web standards like HTML and search engines like Google they do that relatively well.

        However, when it comes to actually processing inputs, such as developments, our computer systems are an epic failure. Our systems are based upon archaic development tools (like text editors) and unintelligible components (like Windows 7 or iOS applications).

        In a future computing platform, we could elevate most inputs to the point of becoming developments. Nonetheless, there would still be an input/output dichotomy, with some people who are "users" that are primarily focused on observing outputs and not on developing.

        • Mr Foo says:

          I do agree (and despair) that many systems are now targeted at the consumption (and purchase) of pre-produced media rather than being a tool to help creativity, but even those users who are basically "passive" are still programming their computers in many respects. Even icon positioning and wallpaper choices are user data, modifications to the system.

          So, in the end, all you're talking about is a difference in usage patterns. Sure, many of the tools available for those wanting to create rather than consume are (to put it mildly) "difficult" to use, but creating has always been vastly more difficult than consuming.

          A good touch-based programming environment might be useful, and is certainly an interesting exercise. I'd suggest looking at Sean McDirmid's work for some ideas.
