## How Much Computation Does the Universe Perform?

From Richard Feynman’s *The Character of Physical Law*:

“It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all of its apparent complexities. But this speculation is of the same nature as those other people make — ‘I like it’, ‘I don’t like it’, — and it is not good to be too prejudiced about these things.”

This paragraph perfectly summarizes a notion that continues to bother me as I study chemistry: the reason why all practical molecular modeling systems are hodgepodges of approximations and empirically “cheated” heuristics, rather than from-scratch calculations based on wave mechanics. The latter rapidly explode in computational complexity for all but the simplest substances. But why must this be so?
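To make the explosion concrete, here is a back-of-the-envelope sketch (my own illustration, not anything from the post): the state vector of n interacting two-level particles needs 2^n complex amplitudes, so the memory alone, never mind the arithmetic, outruns any conceivable machine at a few dozen particles. The 16-bytes-per-amplitude figure is an assumption (one double-precision complex number).

```python
# Rough illustration of why from-scratch wave mechanics is hopeless:
# a dense quantum state of n two-level particles takes 2**n amplitudes.

def state_bytes(n_particles: int) -> int:
    """Memory in bytes for a dense state vector of n two-level particles,
    assuming 16 bytes (one complex double) per amplitude."""
    return 16 * 2 ** n_particles

for n in (10, 30, 50):
    print(f"{n} particles -> {state_bytes(n):,} bytes")
```

At 50 particles the figure is already in the tens of petabytes; a protein has thousands of atoms.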

Being tormented by this question led me to the following thought:

Either:

I) An ordinary, workaday microscopic bit of matter really does require a planet-sized supercomputer to simulate accurately at the particle level. Or:

II) It doesn’t. Or:

III) ???

The consequences would be, respectively:

I) Chemical reactions of some yet-to-be-discovered variety are an untapped gold mine of truly epic computing power. If some physical system, let’s say a protein, truly has that level of intrinsic complexity, we should be able to harness it to perform computations currently undreamed of.

II) It doesn’t, which means we can solve protein folding, “the Universe, and everything,” and emulate respectable fragments of physical reality à la Conway’s Game of Life.

III) I have no idea what III would be. Presumably it would be some flaw in my reasoning, which held that either I or II is a logical necessity. Perhaps as I continue, I will find out what III is.

Quantum computers should be able to simulate chemical reactions efficiently, I’ve read. That doesn’t solve Feynman’s problem, but I think it solves yours.

My money is on the first …

As far as I can see, our consciousness is like a Lisp dialect, interpreted by Prolog on a Java VM running in a VMware instance simulated by Erlang on an array of FPGAs emulating Apple ][s within the Quantum Computer we call the universe, so ‘undreamed-of computational abilities’ sound fairly plausible to me.

Hmm.

I) I believe the quote suggested infinite computational power, not just a large computational power requirement. I think the assumption there is continuous space and/or time.

II) Even if you assume that, at some sufficiently small scale, things can be reasonably simulated, it does not mean that:

a) The computation scales nicely with the amount of space simulated (what if it is exponential?)

b) Knowledge can be derived from the simulation. I think simulations are inherently anti-knowledge: they get you data, but not understanding.

Just my opinions of course.

You should also look at the n-body problem for a different perspective: the physics is much simpler, but for n > 3 there’s not much you can do beyond simulate.

NB what Sundman did for N=3: there’s something of a solution, but it converges far too slowly to be of much use.

And that’s for comparatively simplistic, macroscopic mechanics.
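A minimal sketch of what “beyond simulate” looks like in practice: a kick-drift-kick leapfrog integrator for the gravitational n-body problem. The units, softening length, and 2-D restriction are my own illustrative choices, not anything from the discussion above.

```python
import math

# Minimal 2-D gravitational n-body leapfrog integrator (illustrative only;
# serious work needs adaptive time steps and higher-order schemes).
G = 1.0      # gravitational constant in simulation units (assumed)
SOFT = 1e-3  # softening length to tame singular close encounters

def accelerations(pos, mass):
    """Softened pairwise gravitational accelerations."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + SOFT * SOFT
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            acc[i][0] += G * mass[j] * dx * inv_r3
            acc[i][1] += G * mass[j] * dy * inv_r3
    return acc

def step(pos, vel, mass, dt):
    """Advance one kick-drift-kick leapfrog step, mutating pos and vel."""
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
        pos[i][0] += dt * vel[i][0]
        pos[i][1] += dt * vel[i][1]
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
```

Feed it initial positions and velocities and call `step` in a loop; for n > 3 this kind of direct numerical integration is essentially all one has.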

It’d behoove you to get familiar with the basics of chaotic dynamics (the real stuff, not the popular press version) if you’re looking for a III.

Such a III would summarize as: absolute simulation requires absolutely accurate information; absolutely accurate information about something else cannot be obtained; ergo, only systems whose observables of interest are *not* strongly sensitive to initial conditions are efficiently simulatable; complex molecular dynamics (e.g. protein folding) are therefore not accurately simulatable.

This shouldn’t be surprising: proteins look like the mother of all multi-armed pendulums with the added bonus of heavy self-interaction, which would strongly suggest chaotic dynamics and ergo minimal prospects of accurate simulation.

In case it’s not clear: III differs from I in that the problem isn’t so much computational resources as informational availability; the ability to harness the “computation” may be limited because of constraints on transferring information from region to region.
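The sensitivity-to-initial-conditions claim above can be demonstrated with the simplest chaotic system around, the logistic map: two trajectories started 10⁻¹⁰ apart diverge to order one. (A toy stand-in for protein dynamics, chosen only because it fits in a few lines.)

```python
# Two trajectories of the logistic map x -> r*x*(1-x), started 1e-10 apart.
# In the chaotic regime (r = 4.0) the tiny difference roughly doubles each
# step until the trajectories are uncorrelated: long-term prediction would
# demand impossibly accurate initial data.

def logistic_traj(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_traj(0.3)
b = logistic_traj(0.3 + 1e-10)
gap = [abs(x - y) for x, y in zip(a, b)]
```

Within a few dozen iterations the gap saturates at the size of the attractor itself.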

If the universe is a chequerboard, what’s in between the gaps?

Dear Stanislav,

Option 3 means ‘wrong question,’ like asking what colour a direction is. Is up green or blue? It would mean computation is an incoherent concept.

If computation were inherently incoherent, it would explain why software is crap without having to suppose massive intellectual corruption. So there’s that. However, it conflicts with the finding that theoretical physics is effective.

Supposing option 1, extracting the computation may require Maxwell’s demon. Or it could simply be the influence of true-random inputs: you can’t simulate ‘a particle’; you have to simulate a tree of futures and then count the outcomes to get a probability. The branches-per-second of a protein, with thousands of random nodes, is insane; but since physics collapses the true-random inputs in real time, at any moment the required physical computation is not herculean.
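The “simulate a tree of futures and count the outcomes” idea is, in miniature, Monte Carlo sampling. A toy sketch of my own (the random walk and the distance threshold are hypothetical examples, not from the comment): estimating the probability that a 100-step random walk ends at least 10 sites from the origin by sampling trajectories, instead of enumerating all 2¹⁰⁰ branches.

```python
import random

# "Count the outcomes": estimate an event probability by sampling
# random trajectories rather than expanding the full tree of futures.

def walk_endpoint(steps=100, rng=random):
    """Endpoint of a 1-D random walk of +/-1 steps."""
    return sum(rng.choice((-1, 1)) for _ in range(steps))

def estimate(trials=20_000, rng=None):
    """Fraction of sampled walks ending at least 10 sites from the origin."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    hits = sum(abs(walk_endpoint(rng=rng)) >= 10 for _ in range(trials))
    return hits / trials

print(estimate())
```

The full tree has 2¹⁰⁰ branches; a few thousand samples pin the probability down to a couple of percent.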

The general attitude in the physics departments I know respects option 2. The equations can be analytically solved; we just don’t know how, and the computation necessary to salve the ignorance is pretty huge.