On the Insanity of Computer (in)-Security.

Forget for a moment about the security of your computer.  Instead ask yourself: how secure is your body?

Don't ask a computer security "professional."  Instead, ask an anatomist.  Or better yet, a trauma surgeon.  Or a prison medic.  A weapon no deadlier than a pencil, driven through soft flesh into your abdominal cavity, brings a miserable septic demise.  What keeps the pencils on your desk out of your abdomen, out of your neck, out of your eyes?  Do all of your pencils require authorization codes before they can be handled?  Are your kitchen knives protected by passwords?  Does the air in your home require a capability-bit check before one might breathe it?  Is the lock on your door indestructible?  Did you pay thousands for state-of-the-art security widgets?  And yet, $50 worth of dynamite could make short work of it all.  How, then, can you sleep at night?

Do we handle the perfectly genuine threats of bodily harm and property damage that many would certainly like to inflict on their fellow human beings by trying to make ourselves and our homes physically impregnable and entirely indestructible?  Or is this problem perhaps handled in some other way in civilized societies?

We are social beings first and computer users second, and appear to have forgotten this.

The concept of "every home a fortress" as a serious approach to physical security will strike just about everyone as insane - indeed, as a symptom of paranoid schizophrenia - but it is the normal way of thinking among the computer-insecurity charlatans, for whom the destructive social norms set by Micros0ft are a given, like the law of gravity, and the option of not allowing opaque, deliberately incomprehensible, potentially hostile blobs to visit our computers is simply not on the table.  Neither are we permitted to seriously discuss the idea of punishing a man who brings about the destruction of untold wealth through botnet-driven DDoS attacks in just the same way as we would punish one who derailed a train.  And I speak not only of the miscreant whose finger "pulls the trigger", but also of the villain who placed the "gun" into the hand of the dumb and otherwise harmless hooligan, and "aimed" it for him - the malicious, monopolistic peddler of steaming turds disguised as software, which enabled just about every known computer security problem of any consequence whatsoever.  Neither should we forget his imitators or supposed opposition.  In the end, the most profound (albeit accidental) evil wrought by the criminal in Redmond may have been making UNIX look good - giving it a fraudulent fresh smell, far, far past its expiration date.

This entry was written by Stanislav, posted on Saturday September 11 2010, filed under Hot Air, ModestProposal, Philosophy.

13 Responses to “On the Insanity of Computer (in)-Security.”

  • dmbarbour says:

    I take it you haven't actually studied the multitude of redundant mechanisms by which the body attempts to secure itself against parasites and poisons.

    • Stanislav says:

      Dear dmbarbour,

      It is interesting that you say this, because the study of some of these mechanisms is included in my professional duties.

      If you think that I argue against the study of security mechanisms, you have misunderstood my post. The idea is that there is a place in the world for soft and vulnerable things. And that the correct way to deal with miscreants (digital or otherwise) is on their end, and ultimately, physically.

      Yours,
      -Stanislav

  • Marius says:

    Also, love the bit about making UNIX look good. 🙂

  • Fice says:

    Yet you cannot convert thousands of humans into zombies obeying your commands — something you can do with insecure computers. Imagine if anyone with enough knowledge of biology could create a trojan virus that infects humans and takes control of their minds.

  • Aaron Davies says:

    Someone at Slashdot has long had the signature "OS X: because making Unix user-friendly was easier than debugging Windows."

  • dmbarbour says:

    > the option of not allowing opaque, deliberately incomprehensible, potentially hostile blobs to visit our computers is simply not on the table

    I'm not approving of Microsoft's so-called "security", but I think you misunderstand or misrepresent the issues.

    Any sufficiently interesting library or application becomes an opaque, incomprehensible black box (not quite a blob) even to skilled programmers - even with full disclosure of source code. Skilled programmers don't have motivation, time, or the space in their heads to grok the guts of every library and application they use. Regular end-users are even worse off, lacking even the education.

    This is especially true as we move to more dynamic systems. Web applications ship small chunks of application code to our browsers - JavaScript, Flash, applets. If we are ever to achieve the full performance and flexibility of "native" apps, we'll be shipping and executing rich code that is only vetted by machine-computable or machine-enforceable properties. (A machine-computable property includes static type-safety, whereas a machine-enforceable property would include dynamic type safety.) Many users want rich interactions between these services, the ability to adapt and extend and share them - such as we see with mashups, Greasemonkey, Chrome extensions. Sandboxing and identity-based (a priori trust-based) security are simply the wrong approach - they prohibit the desired levels of flexibility and dynamism.
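    A minimal C sketch of that parenthetical distinction (illustrative only; the functions are invented for the example): the first property is proven before the program ever runs, the second is checked as it runs.

        #include <stdio.h>

        /* Machine-computable property: the compiler verifies, before the
           program ever runs, that every caller passes an int pointer and
           a length - static type-safety. */
        static int sum(const int *xs, int n) {
            int total = 0;
            for (int i = 0; i < n; i++)
                total += xs[i];
            return total;
        }

        /* Machine-enforceable property: the bounds check travels with
           the code and refuses a bad access at the moment it is
           attempted - dynamic safety. */
        static int get_checked(const int *xs, int n, int i) {
            if (i < 0 || i >= n) {
                fprintf(stderr, "index %d out of range\n", i);
                return 0;
            }
            return xs[i];
        }

        int main(void) {
            int xs[3] = { 1, 2, 3 };
            printf("%d\n", sum(xs, 3));            /* proven well-typed statically */
            printf("%d\n", get_checked(xs, 3, 5)); /* refused dynamically          */
            /* sum("oops", 3); would be rejected by the compiler, not at run time */
            return 0;
        }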

    In the large, security cannot be provided by simply putting "enough eyes" on the code. Indeed, much of that code will be written by machine, adapted specifically for each recipient (similar to web-apps today), then delivered across the network.

    The "every computer is a fortress" mentality is the one required by the sort of society and deterrence and trust-based security that you are promoting in your above post. Basically, you're restricted to using applications and extensions you trust. Anyone who tricks you into trusting them - or, transitively, tricks anyone you trust - will have a lot of privileges to abuse.

    Object capability security is all about controlling damage - the ability to perform local reasoning about authority. There is no "capability bit", only secure references - if you know an object, you are free to talk to that object. Security is based upon controlling distribution of those references, and upon capability patterns. This doesn't preclude keeping track of who provides code so you can 'punish' them and provide deterrence. There are also patterns for tracking responsibility.
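    One such capability pattern, sketched in C (illustrative only: raw C pointers are forgeable, so this shows the shape of a revocable forwarder, not a real enforcement mechanism):

        #include <stdio.h>
        #include <stddef.h>

        /* The underlying authority: something that writes to a log. */
        typedef struct { FILE *sink; } logger_t;

        /* A revocable forwarder: guests receive this, never the logger. */
        typedef struct { logger_t *target; } revocable_t;

        static int log_line(revocable_t *cap, const char *msg) {
            if (cap->target == NULL)
                return -1;                 /* revoked: the authority is gone */
            return fprintf(cap->target->sink, "%s\n", msg);
        }

        static void revoke(revocable_t *cap) { cap->target = NULL; }

        int main(void) {
            logger_t real = { stdout };
            revocable_t cap = { &real };   /* grant: hand out only the forwarder */
            log_line(&cap, "guest code may log");
            revoke(&cap);                  /* grantor withdraws the grant        */
            if (log_line(&cap, "too late") < 0)
                puts("call refused: the capability was revoked");
            return 0;
        }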

    Stanislav unwittingly promotes "paranoid schizophrenia" by not allowing opaque, deliberately incomprehensible, potentially hostile blobs to visit our computers. He wants a world where everyone is forced to personally vet the code they use, vet the people from whom they receive it. (It isn't that I don't trust you... it's that I don't trust the people that the people you trust happen to trust.)

    The right solution is to allow the black boxes, but make it easy to control the authority they possess, limit the damage they can cause. Our black boxes shouldn't have authority to consume arbitrary CPU or memory, to deadlock the system, to set! arbitrary parameters...
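    A sketch of such attenuation in C (again illustrative; the cell and facet are invented for the example): the black box receives a facet that forwards reads and simply has no write operation, so the damage it can do through that reference is limited by construction.

        #include <stdio.h>

        /* Full authority: a cell that can be both read and written. */
        typedef struct { char value[64]; } cell_t;

        static const char *cell_read(const cell_t *c) { return c->value; }
        static void cell_write(cell_t *c, const char *v) {
            snprintf(c->value, sizeof c->value, "%s", v);
        }

        /* Attenuated facet: exposes read, withholds write. A black box
           holding only this struct has no path to cell_write. */
        typedef struct { const cell_t *cell; } readonly_facet_t;

        static const char *facet_read(const readonly_facet_t *f) {
            return cell_read(f->cell);
        }

        int main(void) {
            cell_t c;
            cell_write(&c, "owner's data");     /* owner keeps full authority */
            readonly_facet_t facet = { &c };    /* guest gets the read facet  */
            printf("guest sees: %s\n", facet_read(&facet));
            return 0;
        }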


  • another says:

    eh. paranoia should still be maintained about comp sec, just as it should be with other types of sec.

  • Verisimilitude says:

    While this does make an excellent point, I find it worthwhile to point out how the world can be designed to avoid unnecessary damage. The knives are in a drawer or otherwise sequestered; the spice cabinet doesn't contain deadly poisons, which are in a different cabinet; I'm more careful when carrying something dangerous or fragile, and seek to minimize the time during which related issues could occur; and it goes on and on.

    The proper solution I see is to express programs using methods which simply prevent issues by construction. Expressing computation without access to time prevents all manner of side-channels; memory should be more reliable, but expressing computation in a way which allows a rewriting system to optimize and remove redundant segments effectively removes the ability to, say, repeatedly poll memory in order to change nearby memory; and the other applications are obvious. While this resembles capabilities, it's different in important ways.

    • Stanislav says:

      Dear Verisimilitude,

      > expressing computation in a way which allows a rewriting system to optimize and remove redundant segments effectively removes the ability to, say, repeatedly poll memory in order to change nearby memory

      Generally you very much don't want this. "Smart" (in this sense) compilers have done far more harm than good, historically.

      E.g., gcc 5+ often enough removes not only bounds checks but also stack zeroizations at crypto routine exits as "redundant".
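      A minimal illustration of the failure mode and one common workaround (a sketch; whether the compiler actually elides the store depends on version and flags):

          #include <stdio.h>
          #include <string.h>

          /* Indirect call through a volatile function pointer: the
             optimizer can no longer prove the callee is memset, so it
             cannot discard the store that scrubs the key.
             explicit_bzero(3) serves the same purpose where the libc
             provides it. */
          static void *(*const volatile scrub)(void *, int, size_t) = memset;

          static void use_key(void) {
              unsigned char key[32];
              memset(key, 0xAB, sizeof key);  /* stand-in for key derivation   */
              printf("%02x\n", key[0]);       /* stand-in for using the key    */
              memset(key, 0, sizeof key);     /* "dead" store: never read again,
                                                 so the optimizer may delete it */
              scrub(key, 0, sizeof key);      /* this store survives            */
          }

          int main(void) { use_key(); return 0; }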

      Yours,
      -S

      • Verisimilitude says:

        I thought it was clear such a language would be so high-level that such things are irrelevant. It would perhaps mostly be a means of defanging programs that may be hostile, when they couldn't be avoided.
