**THE ORIGIN OF TIME, QUANTUM MECHANICS, AND FREE WILL** by *Linas Vepstas*

A metaphysical exploration of the origin and nature of Time, Quantum Mechanics and other issues. The goal of this exploration is to provide some alternative viewpoints to many of the popular interpretations and dogmatic beliefs that surround these topics, such as the notion of the Quantum Multiverse, or the interplay between pre-determination and free-choice. The hypothetical scenarios described herein attempt to be anchored in principles from basic physics, dressed up with hand-waving and imagination. One particular (heretical) hypothesis is advocated: Quantum mechanics (and time itself) is a side-effect, an outcome of physics at Planck scales, rather than inherent in it.

This is not a religious tract, nor is it physics, nor is it philosophy. It is meant to be no less and no more than a rational discourse on some of the conundrums at the outer limits of our physical understanding today. Since it's arguably B.S., let's just call it metaphysics and be done with it. The intent is, however, serious. The exploration here is meant to be one piece of a puzzle, along with a parallel exploration of free will and the existence of Platonic realms, and a critique of Heidegger. Some of this might even be experimentally testable. And maybe we can even associate some mathematical equations with these ideas, giving them at least a little bit of traction.

**Pre-Determination – NOT!**

Sir Isaac Newton presented (ref?) the philosophical concept that all events in the universe are pre-determined, or pre-destined. This concept is anchored in the Newtonian idea that the motions of bodies are given by differential equations, that differential equations can be solved, and that therefore the future can not only be known and predicted, but is in fact inevitable and inescapable. As long as motion is governed by fixed, immutable equations, then so is our life on earth pre-destined, and we run through our lives as cogs and gears in a mechanical clockwork, inevitably doing what these equations of motion force us to do. We have no free will, and the past, present, and future are just an apparition. (See the Gestalt of Determinism.)

Bunk. Of course this isn't right, and with the exception of a few brave individuals (e.g. Julian Barbour), nobody but nobody actually believes that this is the case. Our anthropocentric experience is too strong to accept this view: we all know what the past is, we all sense that we live in the present, and we all seem to agree that much of the future, or at least the truly important parts, is unpredictable. Curiously, though, it is never pointed out in this popular view that the past, in fact, fits the above description to a tee. The past is so perfectly "predictable" that we don't even use that word: we "remember" the past, we don't "predict" it. The past is immutable and unchangeable: there is nothing that we can do to change it. That is, the past is exactly like those immutable, unalterable equations of motion: predestined and inescapable. So it seems that the past is exactly like this Newtonian world-view; it's just that something funny happens in the present, somehow making the future unknowable.

One might think of the past as a block of ice forming, as the liquid present freezes onto it. The present, the 'here and now', is like a wave of crystallization, like Kurt Vonnegut's ice-nine: freezing, propagating through space-time, segregating what was from what might be. This might make for nice literary allusions, but it doesn't fit the Newtonian view. Differential equations know of no past, present, or future: they don't distinguish between these, and this is precisely where the metaphysical problem originates. Ordinary, 'classical' mechanics states that the future is just like the past, and this seems to be inescapable, even as we all know intuitively that it is not so.

Two modern developments in physics seem to offer avenues of escape from the conundrum of pre-determination. The first, Quantum Mechanics, introduces a certain amount of randomness that provides the wiggle-room needed to make the future unpredictable, and offers at least a glimmer of hope for free will. Unfortunately, Quantum Mechanics is saddled with a number of messy interpretational problems that leave an unsatisfying taste in the mouth of anyone who cares to examine them seriously. The second, Chaos Theory, doesn't offer immediate escape from classical dynamics, but does show us how a whole lot of unpredictable things can happen in a short amount of time.

**Quantum Multiverse – NOT!**

The most popular interpretation of quantum mechanics these days is the Quantum Multiverse. A popularized version of the Many-Worlds Hypothesis, it asserts that there are zillions of parallel universes being created every instant, through acts of quantum measurement. According to the popular press, this is the hypothesis that real physicists like the best.

The need for such an interpretation is due to some curious puzzles that arise in discussions of quantum measurement. We review some of these below, and after that, we dive in.

**Quantum Measurement**

Let's review some popular quantum conundrums:

- Schroedinger's Cat Paradox
- Mott, Heisenberg, 1929: alpha particles with spherical wave functions leave straight tracks in cloud chambers. (Alternately, consider a detector consisting of several plates at various distances, covering the full 4π solid angle. The non-detection of a decay at a closer plate has caused the wave function to 'collapse' at least partly, avoiding that plate, and eventually being detected on a more distant plate. Clearly, the language used to describe this makes 'wave-function collapse' sound even more disconcerting than it is. The Mott/Heisenberg straight-track observation is a special case of this, where the wave-function collapse has occurred on some microscopic scale, by ionizing an atom, as opposed to being delayed for macroscopic time periods.)
- EPR; and specifically, the decay of a singlet to a pair of spin-1/2 particles. Bell's Theorem.
- Quantum interference over macroscopic scales: e.g. proton interferometer
- Clauser, Horne, Shimony, et al.

Let's review the popularly discussed quantum-measurement proposals. All of these seem lacking, in that none provides any sort of detailed description of the mechanics of wave-function collapse. They attempt to resolve the metaphysical paradoxes without proposing the physics.

- **Many Worlds**, aka 'Quantum Multiverse'. The 'universe splits in two' every time a measurement is made. Since measurements apparently happen all the time, there's a googolplex of parallel universes. Without a more detailed proposal as to what the process is, and what sort of experimental prediction it can make, it's just more metaphysics. Popular among freer thinkers.
- The **'brain/mind'** causes quantum measurements to occur. This is one extreme response to the Schroedinger Cat Paradox: the wave function doesn't collapse until the experimenter looks at the instruments. I don't think any mainstream thinkers take this seriously. There are too many ways to poke holes in it.
- **Many-body/Thermodynamic Interaction Hypothesis**. This is probably what most mainstream physicists think is the 'correct' answer, mostly because they haven't really thought about it. The wave-function collapse occurs when the wave function interacts with a many-body, chaotic, thermodynamic system. Indeed, there is a huge grain of truth to this: wave functions really don't collapse until they interact with a whole bunch of atoms. There's a multitude of ways of supporting this position through experimental and theoretical arguments. Unfortunately, it has an Achilles heel: it fails to actually resolve any paradoxes. For example, consider the following: a spin-0 singlet state that decays into two very energetic spin-1/2 particles. We've arranged for the fast decay products to pass through space-like separated Stern-Gerlach magnets before hitting cloud chambers at each end. Think about it: you'll see why the mantra of 'thermodynamic interaction' won't work. (The singlet decays into a pair of EPR-correlated particles; we know their spins point in opposite directions (yada yada Bell's Theorem yada yada). As they pass through the Stern-Gerlach magnets, the trajectories are deflected up or down. Then, finally, the 'measurement': the interaction between many bodies in the cloud chamber that collapses the wave packets. To maintain correlation between the measurements, the final collapse can't just happen when the wave function interacts with the gas in the cloud chamber. What happens in one cloud chamber is intimately connected to what happened in the other. Measurements seem to violate locality.)

There are various ways of dressing up this hypothesis in high-falutin' language. For example: during interactions with multiple bodies, certain paths that may have at one point contributed to the Feynman path integral in fact bump into analytic 'cuts', and stop contributing in a phase-coherent fashion to the path integral, thereby causing a collapse of the wave function. These cuts in the analytic plane don't exist in two-body interactions, but absolutely litter the phase space when the N in N-body is large enough. In other words, for N sufficiently larger than two, most phases follow chaotic paths, and the phase relationships are no longer coherent, but become incoherent, thereby marking the wave-function collapse. (For example, consider the chaotic regime of the forced harmonic oscillator.) This view of wave-function collapse seems at first to be very appealing, precisely because it can be dressed up with all sorts of flowery appeals to chaos and the like. But, as we mentioned, it founders because it is ultimately a local theory, and fails to explain EPR correlations.
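The failure is quantitative, not just rhetorical. The standard CHSH form of Bell's theorem says any local account must satisfy |S| ≤ 2, while the singlet state's correlation E(a,b) = -cos(a - b) pushes |S| to 2√2. A few lines suffice to check the textbook numbers (this is the standard calculation, not anything specific to the essay's hypothesis):

```python
# CHSH check for the spin-singlet state. Quantum mechanics predicts a
# correlation E(a,b) = -cos(a - b) between spin measurements along
# analyzer axes a and b; any local hidden-variable theory must obey
# |S| <= 2 for the CHSH combination below.

from math import cos, pi, sqrt

def E(a, b):
    """Singlet-state spin correlation for analyzer angles a, b (radians)."""
    return -cos(a - b)

# The standard angle choices that maximize the quantum violation.
a1, a2, b1, b2 = 0.0, pi / 2, pi / 4, 3 * pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(f"|S| = {abs(S):.4f}  (local-realist bound: 2, quantum max: {2 * sqrt(2):.4f})")
```

Any 'collapse happens locally in the gas' story has to reproduce these correlations, and by Bell's theorem it can't.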

- Negative-energy influences travel backwards in time, thereby 'correcting' the initial state. This hypothesis states that when a measurement occurs, some influence travels backwards in time, and modifies what happened 'back then', to bring it into a state consistent with the measurement. There's a certain amount of beauty to this hand-waving argument. Time symmetry makes anti-particles look like particles traveling backwards in time, and all solutions to the Dirac equation (with a non-zero momentum) have a negative-energy component. This hypothesis, when coupled to the 'thermodynamic' hypothesis, actually seems capable of resolving the EPR correlation issues. The measurements work out because some influence travels backwards in time, alters the original state, and sets things aright. To put it another way: when we make predictions about future experiments with Bell's theorem, we know the wave function must be in such-and-such a state. When we analyze experiments performed in the past, they are consistent with Bell's theorem. The one thing we can't verify is that the experiments aren't somehow "rearranging" themselves in the "past", with unexpected wave-function "collapses" that harmonize the results. Unfortunately, this hypothesis seems to get almost no serious discussion, and is therefore quite vague as to specific mechanisms and details.

**Looking for Cracks in Quantum Theory**

Where might we search for chinks in the armor? QED, and the second quantization underlying it, is the most accurate physical theory known. How might it be wrong? The symmetries underlying it (and the more stunning symmetries of the Standard Model and QCD) are quite beautiful. They're experimentally unarguable, so we take them as truth. The other major ingredient in this theory is second quantization. For every point in space-time, second quantization states that we must take an integral over all field values. The algebra of second quantization is a marvelous tool for working between Lagrangians/actions and propagators/vertices, and is vital for tying QED calculations into a coherent conceptual whole. However, the experimental evidence for second quantization is much slimmer. We know that we need to sum over *many* states; otherwise, it wouldn't be quantum-mechanical. For the algebra to work cleanly and nicely, it's easiest to sum over *all* states. When presented in textbooks, the integration is always done as if each point in space-time were independent, and scant attention is paid to the measure. For example, it is claimed that discontinuous Feynman paths can't contribute to the path integral, because the action would be infinite, thus damping out the exponential. Whether the measure of the discontinuous paths mightn't also be large isn't considered. This kind of hand-waving argument completely ignores a variety of known mathematical monstrosities, such as the Farey number mapping (the Minkowski question-mark function), a continuous, strictly increasing function whose derivative vanishes at every rational number! Yuck! After examining a few of these monsters, the hand-waving about the contributions of paths to second quantization becomes weak.
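To make the monster tangible, here is a short sketch of the Minkowski question-mark function, built from the Farey/Stern-Brocot mediant subdivision (a standard construction, offered to back the claim above). It is continuous and strictly increasing, yet its difference quotient at a rational is effectively zero:

```python
# Minkowski's ?(x): mediant (Farey) bisection on the input side maps to
# ordinary binary bisection on the output side. Continuous, strictly
# increasing, derivative zero at every rational.

def question_mark(x, depth=60):
    """Approximate ?(x) for x in [0, 1] by Stern-Brocot bisection."""
    pa, qa, pb, qb = 0, 1, 1, 1      # Farey endpoints 0/1 and 1/1
    lo, hi = 0.0, 1.0                # corresponding ?-values
    for _ in range(depth):
        pm, qm = pa + pb, qa + qb    # mediant of the two endpoints
        if x < pm / qm:
            pb, qb, hi = pm, qm, (lo + hi) / 2
        else:
            pa, qa, lo = pm, qm, (lo + hi) / 2
    return (lo + hi) / 2

# Known exact values: ?(1/2) = 1/2, ?(1/3) = 1/4.
# The difference quotient at the rational 1/3 is vanishingly small:
slope = (question_mark(1/3 + 1e-3) - question_mark(1/3)) / 1e-3
print(f"?(1/3) = {question_mark(1/3):.6f}, difference quotient near 1/3: {slope:.2e}")
```

A 'weight' built from such a function flattens out at every rational while still climbing from 0 to 1, which is precisely the kind of behavior the textbook measure argument never contemplates.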

We could attack the path integral from a second direction. For example, in thermodynamic problems, one averages over Avogadro's number of states, which, for all practical purposes, we can take as 'infinite'. Experimental evidence provides close support for the taking of thermodynamic averages. We also know that thermodynamics breaks down when dealing with dozens or hundreds of atoms, because at that scale the averages no longer accurately approximate the situation. Unfortunately, we have few similar experimental explorations of the contributions of paths to the path integral of second quantization. We know that it must be correct for the most part, because otherwise quantum mechanics in general wouldn't work. The Casimir effect, studies of dielectrics, and surface-tension physics provide some experimental tests of second quantization, but the linkage is not precise. (In the Casimir effect, one sums over all standing waves trapped between a pair of metal plates. Or, at least, one sums up to a frequency at which the plates become transparent to EM waves. This summation is a kind of path integral. The experiments work, and the summation is quite sensitive to the cutoff frequency/regulator. In certain geometries, the summations lead to infinities that need to be regulated or dealt with, much like in free-field QED. What's different is that in the Casimir effect, or in surface-tension or dielectric calculations, the infinities are 'real' in the sense that they really do depend on the cutoff frequency, and different materials with different cutoffs can be measured in the lab. But there is still a big leap from these experiments to the path integral.)
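The flavor of such a regulated mode sum can be shown numerically in a one-dimensional toy (a standard regularization exercise, not a claim about any particular experiment): the divergent sum Σ n, given an exponential cutoff exp(-nε), splits into a cutoff-dependent divergence 1/ε² plus a universal finite piece -1/12 (= ζ(-1)) that survives regularization.

```python
# 1D Casimir-style mode sum with an exponential regulator:
#     sum_{n>=1} n * exp(-n*eps)  =  1/eps^2  -  1/12  +  O(eps^2)
# The 1/eps^2 term is the 'real', cutoff-dependent infinity; the -1/12
# is the cutoff-independent finite part.

from math import exp

def regulated_mode_sum(eps, terms=100000):
    """Sum n * exp(-n*eps) for n = 1..terms (converges for eps > 0)."""
    return sum(n * exp(-n * eps) for n in range(1, terms + 1))

for eps in (0.1, 0.03, 0.01):
    finite_part = regulated_mode_sum(eps) - 1.0 / eps**2
    print(f"eps={eps}: finite part = {finite_part:.6f}  (expect ~ -1/12 = -0.0833)")
```

As the cutoff is pushed out (ε → 0), the divergence grows without bound, but the finite remainder settles onto -1/12: the same split between 'material-dependent infinity' and 'universal finite part' described above.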

**Planck Scale Spacetime Foams**

OK, now we leap off the deep end. We can't attack superstring theories and the like, because their geometrical principles are quite beautiful, quite believable, and rather unassailable. For example, if the first GUT doesn't work, there's always another, and if you can pull a fine-tuning trick from up your d-brane'd sleeve, so much the better. So instead, we mount the attack based on a naive, geometrical interpretation of the Feynman path integral and the process of second quantization. Let's just dive in.

**Abstract:** By assuming that Planck-scale spacetime resembles a 'foam', it is deduced, by means of hand-waving, that most quantum phenomena are best understood as interactions of classical geodesics on this foam. Furthermore, it is argued that the arrow of time, as something distinct from 3D space, with a fixed past and an unknowable future, is a side-effect of the reconciliation of anomalies in this foam.

Let's assume that spacetime, at Planck scales (~10^-35 meters), resembles a foam of wormholes. Let's imagine geodesics on this foam. Now, let's imagine that these geodesics interact like billiard balls, i.e. have point interactions with each other. This clearly leads to grandfather paradoxes throughout the foam: a future billiard ball could emerge from a wormhole in the past, and prevent itself from going into the very wormhole it emerged from.

Hypothesis: The space-time foam tries to organize or equilibrate itself so that such grandfather-paradoxes between billiard geodesics do not occur. Hypothesis: this act of organizing or equilibrating propagates through the foam as a ‘wavefront’, a 3D slab of a surface traveling along a fourth dimension. This thin 3D slab is what we call ‘right now’; what lies behind this slab is ‘the past’, and what lies ahead is ‘the future’.

Now let us pause to notice that this simple model does a decent job of ‘explaining’ second quantization. How does it do this?

First, note that the act of second quantization consists of writing a sum, a 'Feynman path integral', over all possible paths, weighted by the exponential of the action. The action defines the classical path; as any given path deviates from the classical path, it is weighted away. With only the slightest of handwaving, it should be obvious that if we imagine geodesics on a space-time foam, these could well be taken to be the 'paths' that contribute to the path integral. Now, a purist might start arguing about the Hausdorff measure of geodesics on a foam, as compared to the 'uniformly distributed' Jacobian of a path integral, but I will only shoot back that any such argument is already on tenuous mathematical footing. The set of geodesics on a foam should probably form a complete enough set to work just fine as the domain over which a path integral is taken. (Although not usually studied by physicists, there are deep fundamental problems with integrals of stochastic differential equations, which are well known to 'quants' working in the financial industries. These difficulties have been overcome for simple arbitrage and options-pricing formulas, but continue to plague more complex financial models. With some handwaving, it's clear that many of these problems apply to path integrals as well, especially when one considers the question of whether the 'paths' contributing to the integral are differentiable or even continuous.)
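The regularity question in that parenthetical is a standard stochastic-calculus fact, easy to see numerically (my illustration, using a random walk as a stand-in for a Brownian 'path'): as the time step shrinks, the total variation Σ|Δx| blows up like 1/√Δt, so the limiting path is nowhere differentiable, while the quadratic variation Σ(Δx)² converges to the elapsed time T.

```python
# Total vs. quadratic variation of a discretized Brownian path on [0, 1].
# Increments dx ~ N(0, dt). Refining the grid makes sum |dx| diverge
# (no derivative exists in the limit) while sum dx^2 stays near T = 1.

import random
from math import sqrt

def walk_variations(n_steps, seed=42):
    """Return (total variation, quadratic variation) of one sample path."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    tv = qv = 0.0
    for _ in range(n_steps):
        dx = rng.gauss(0.0, sqrt(dt))
        tv += abs(dx)
        qv += dx * dx
    return tv, qv

for n in (1000, 16000):
    tv, qv = walk_variations(n)
    print(f"steps={n}: total variation={tv:.1f}, quadratic variation={qv:.3f}")
```

It is exactly this roughness that makes the measure over 'all paths' mathematically delicate, for quants and for second quantization alike.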

Next, note that the ‘accidental’ resemblance of various diffusion and stat-mech equations to various quantum equations is no longer so ‘accidental’: the statistical properties and distributions of geodesics on a foam should indeed have the same qualitative properties as Brownian motion in a hard-ball gas.

Thus, we’ve just hand-waved our way over a giant part of quantum mechanics. To be more precise, we will need to also cover why it is that Planck’s constant is used in these path integrals, and how nothing in this handwaving contradicts the infinitely more rigorous theories of superstrings. Indeed, while we’re handwaving, we should point out that talking of geodesics on a spacetime foam might be a lot like talking about the hydrogen atom is if it were a planetary system whose orbits are constrained to have integer-wavelength circumference. We know that Bohr Action-Angle formalisms gave over to solutions of the Schroedinger wave equation for the Hydrogen atom; in a similar way, we might think that this talk of geodesics is a mental place-holder for something that ultimately phrases itself quite differently.