Chapter 7
Quantum Theories of Mind
Quantum theory describes the atomic and subatomic processes
underlying all material reality – including brain processes. It is
complex, unintuitive, and widely believed to challenge the ideas of
objectivity, continuity, and causality ingrained in early childhood.
Because of this fundamental and challenging character, many see
quantum theory as key to the mysteries of mind. Yet, there is no
consensus on what lock, if any, it opens.
While this chapter’s primary aim is to discuss quantum theories
of mind, quantum physics colors all the topics so far addressed.
• Causality and natural law have been rethought in light of quantum indeterminacy and supposed non-locality.
• Despite evolution being logically independent of ontological randomness, quantum randomness frames its interpretation.
• Quantum field theory is essential to any multiverse hypothesis.
Thus, the ideas treated in this chapter are pivotal to naturalists’
current worldview.
Relevance
Two aspects of quantum physics seem especially relevant to the
mind. First, quantum indeterminism might allow free will. Second,
the wave function collapsing when observed might give the mind
extraordinary power over material states. Henry Stapp argues that
mind plays a role in quantum theory unknown in classical physics.
Finally, specific quantum phenomena such as the behavior of Bose
condensates and the quantum Zeno effect have been called upon to
explain the mind.
Ricciardi and Umezawa1 proposed an analogy between brain states and quantum states using quantum formalism without using quantum physics. Such analogies stand or fall independently and are outside our scope here. Other approaches are “quantum mysticism” with no scientific basis, and do no credit to physics or the mystical tradition. They use metaphor, mystery, and suggestion instead of science and hard analysis.
1 Ricciardi and Umezawa (1967), “Brain and Physics of Many-Body Problems.” See Stuart, Takahashi and Umezawa (1978), (1979); Jibu and Yasue (1995); Vitiello (1995); Pessa and Vitiello (2003); Vitiello (2001), (2002).
This chapter is the most difficult in the book. There is no short
and simple way to treat its subject. Since quantum theories of mind
depend on specific, disputed interpretations of the theory, we must
analyze those interpretations. They, in turn, require us to understand the physics, at least in broad outline. Even if you are familiar
with quantum theory, at least skim the historical discussion, as you
may find points previously overlooked. If your interests do not include the foundations of physics, you may skip to the summary.
Modern vs. Classical Physics
Modern physics goes beyond surprising discoveries about the
structure of matter. Quantum theory is a radically different way of
seeing reality. “The laws of nature which we formulate mathematically in quantum theory deal no longer with the particles themselves but with our knowledge of the elementary particles.”2 Relativity is similar. Newton and Laplace based classical physics on
a quasi-Platonism in which objects exist just as we know them, independently of being known. Newton believed absolute time existed independently of being measured. Einstein rejected this, returning to the Aristotelian view that time is a measure whose value
depends on how it is measured. Newton had accepted the myth of
passivity unreflectively from Platonism. It neglected the active role
of the knower in perceiving the sensible, numbering the measurable, and understanding the intelligible. Thus, many were shocked
to find that objects can give different answers depending on how
they are sensed, measured or conceptualized. Modern psychology
and physics have refocused attention on our active role in perception.3 They woke classical science from its Platonic slumber, but
Aristotelians were already awake.
Dependence on the conditions of observation is not peculiar to
contemporary physics. Consider seeing a red apple. Our perception
depends not merely upon the apple’s objective qualities, but also
on the lighting and our color vision. The objective basis of the apple’s red is its ability to scatter red light while absorbing other colors. Thus, in normal light, it is red. But, in green light, there would be no red to scatter and it would be black. If we were incapable of perceiving red, again, the apple would be black. The lesson is that properties we tend to attribute to objects in isolation actually result from complex interactions. Forgetting this is once again committing the fallacy of misplaced concreteness.
2 Heisenberg (1958b), “The Representation of Nature in Contemporary Physics,” p. 99.
3 The myth of passivity persists in many fields, including film theory. Anderson and Anderson (1993).
Physics’ shift toward a more intimate relationship between matter and mind, and especially the idea that our intentionality is integral to quantum dynamics, is disconcerting and embarrassing to
naturalists. How can they justify their third-person stance, when
physics itself is making the observer’s mind ever more central?
Brief History of Quantum Theory
To understand quantum theories of mind, we must grasp the basic ideas of quantum theory. While it uses the whole range of contemporary mathematics, many essential quantum ideas can be grasped without advanced mathematics. To keep these ideas within the reach of general readers, I am taking a historical approach. So take courage and read on.
Figure 13. Young’s diagram of wave interference from two slits. The arcs indicate wave crests. Where crests cancel troughs, a dark band is observed, as at the right.
The Old Quantum Theory
Quantum theory began in
1900 when Max Planck (1858-1947) solved a problem that had
baffled classical physics, the color spectrum of black body radiation. (Black body radiation is the light emitted from heated bodies,
as when iron is red or white hot.) Planck used a mathematical trick
without physical justification, viz. restricting the energy of the oscillators emitting light to integral multiples of their frequency
times a constant.4 This forced the light to be emitted in discrete
quanta with energy proportional to frequency. Light quanta came
to be called photons.
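In modern notation (a standard way of writing Planck’s restriction, not a quotation from the text), the oscillator energies were limited to
E_n = n h \nu, \qquad n = 0, 1, 2, \dots,
so each light quantum carries energy E = h\nu, where \nu is the frequency and h is Planck’s constant (note 4).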
Planck’s approach had not been tried earlier because the debate
over whether light was a particle or a wave had been settled in favor of the wave theory by Thomas Young (1773-1829). In a famous experiment, he passed light through two slits onto a screen where he observed an interference pattern similar to that made by water waves (figure 13). When a crest meets a trough, cancellation leaves a dark band. Where two crests or troughs meet, reinforcement gives a bright band. James Clerk Maxwell developed an electromagnetic wave theory for light, which Heinrich Hertz confirmed. Emitting light in quantum packets seemed incompatible with its accepted wave nature.
4 Planck’s constant, h = 6.626 × 10⁻³⁴ Joule-seconds.
In 1905, Albert Einstein reversed Planck’s idea to describe the
photoelectric effect, viz. the energy of electrons knocked off thin
films by light. He explained it by assuming that light is absorbed as
quanta. Despite being a wave, light is emitted and absorbed discretely like a particle.
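In its standard modern form (not spelled out in the text), Einstein’s photoelectric relation is
E_{\text{kinetic}} = h\nu - W,
where W is the “work function,” the minimum energy needed to free an electron from the film. The ejected electron’s energy thus depends on the light’s frequency, not its intensity.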
In 1909, Geoffrey Taylor repeated Young’s two-slit experiment
using light so weak that individual photon strikes could be seen.
When the film was exposed for various periods, patterns like those in Figure 14
emerged. While photons strike individually, cumulatively, they
form a wave pattern, evidencing wave-particle duality.
Eight years later, Niels Bohr (1885-1962) used Planck’s idea to
develop an atomic theory. In it, electrons were confined to discrete
orbits. Electrons falling from higher to lower energy orbits emit a
photon. Similarly, absorbing a photon causes an electron to jump
from a lower to a higher orbit. The theory was confirmed by calculating the spectrum emitted by hydrogen.
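For reference (standard results, not given in the text), the hydrogen levels and emitted photon energies are
E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad h\nu = E_m - E_n \quad (m > n),
which reproduce the observed hydrogen spectrum.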
By the early 1920s, Bohr had developed a model of the atom
with quantized electron orbits, but had no physical reason for quantization. To guide his efforts, he posited his Correspondence Principle: quantum theory has to correspond to classical physics when
applied to larger objects. This ensures a smooth transition between
the quantum and classical theories, but creates logical problems
that persist today by basing the interpretation of quantum theory on
radically different classical concepts.
Louis de Broglie (1892-1987) made a startling proposal in his
1924 doctoral dissertation. He reasoned if waves had particle properties, particles should have wave properties. If so, the momentum
of an electron is inversely proportional to its wavelength and its
energy is proportional to its frequency. This explained the quantized orbits in Bohr’s atomic theory: their circumferences were a
whole number of wavelengths. In 1927, Davisson and Germer showed electrons interfere like light waves,5 confirming de Broglie’s wave-particle duality. Claus Jönsson performed the two-slit
experiment with electrons.6 (See figure 14.) The interference effects are not the result of several particles interfering, but occur in
the statistical aggregation of single hits, so we may think of quantum events as “guided” by waves. We now know that all elementary particles, atoms and molecules7 have wave properties.
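In symbols (the standard relations, added here for reference), de Broglie’s proposal reads
p = \frac{h}{\lambda}, \qquad E = h\nu,
and Bohr’s allowed orbits are those whose circumference holds a whole number of wavelengths,
2\pi r = n\lambda = \frac{nh}{p}, \qquad n = 1, 2, 3, \dots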
Thus, the old quantum theory consisted of Planck’s quanta,
Bohr’s atomic theory, de Broglie’s relations for wave-particle duality and the resulting rules for calculating spectra.
The New Quantum Theory
Erwin Schrödinger and Werner Heisenberg independently systematized these insights into “the new quantum theory” in 1926.
Schrödinger developed his famous equation, the solutions to which
are the wave functions of quantum theory, each representing a possible state of matter.8 Unlike water or sound waves, quantum
waves are not directly observable. Max Born interpreted wave
functions as giving the probability of observing the electron. For
example, it may tell us the probability of an electron hitting each
part of a photographic plate as in figure 14.
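For reference (a standard statement, not quoted from the text), the single-particle Schrödinger equation is
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi,
and Born’s rule takes |\psi(x)|^2 as the probability density for an interaction (a “hit”) at x.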
5 Davisson and Germer (1927), “Reflection of electrons by a crystal of nickel,” and Davisson (1928), “Are Electrons Waves?”
6 Claus Jönsson (1961), “Elektroneninterferenzen an mehreren künstlich hergestellten Feinspalten.”
7 Estermann and Stern (1930). Experiments on fullerenes or buckyballs (soccer-ball-like molecules of 60 carbon atoms) have also shown wave interference. Nairz, Arndt, and Zeilinger (2003), “Quantum interference experiments with large molecules.”
8 Heisenberg’s Matrix Mechanics is an equivalent alternate formulation, in which state vectors and matrices take the place of wave functions and differential operators.
The wave function is not a “particle,” but a disposition for finding a
“particle.”9 Since we only find particles through interactions, this disposition is a propensity to interact.10
The interaction is always between
distinct wave functions, and may be
localized or spread over a large region. Particle detection always involves interactions with electrons
bound to atoms because detectors
are made of atoms. The attraction of
opposite charges keeps atomic electrons bound to their atom’s nucleus.
Interactions of photons or free electrons with an atom are localized because its electrons are localized.
These interactions allow us to think
of electrons as particles, but that
does not mean they are point objects
instead of waves.
Observing contrasts with being.
We have to discard the myth of passivity to understand quantum theory. If observing quanta left them unaffected, then what we observe might be their reality prior to observation, but observation changes quanta. Thus, we cannot say they are what we observe prior to their being observed. Quantum observations are like asking leading questions. In doing so, we give information that can influence the answer. So we can’t be sure the answer reflects prior knowledge.
Figure 14. The buildup of an interference pattern by single electrons in a double-slit experiment. Numbers of electrons are: 10 (a), 200 (b), 6000 (c), 40000 (d), 140000 (e).
9 “Particle” is in scare quotes because thinking of quanta as Newtonian point masses results in confusion. It is best to think of “particle” as an arbitrary tag for certain physical properties.
10 This dovetails with the dynamic ontology outlined in chapter 5. Dynamic ontology does not posit bits of matter, but operational specifications for possible interactions.
First Quantization
Schrödinger and Heisenberg expressed the insight that there is no measured value independent of a measuring operation by replacing dynamic variables, such as momentum, with mathematical
operators.11 Variables represent pre-existing unknown values,
while operators express the insight, going back to Aristotle, that
values require a measuring operation. Replacing variables with
operators is first quantization.
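A minimal sketch in one dimension (standard textbook notation, not the author’s): position acts by multiplication and momentum becomes a differential operator,
\hat{x}\,\psi(x) = x\,\psi(x), \qquad \hat{p}\,\psi(x) = -i\hbar\,\frac{d\psi}{dx},
so that \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar. Because these operators do not commute, no state has sharp values of both variables at once.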
Wave function values are complex numbers, having real and
imaginary parts. (Don’t be fooled by “imaginary” – both parts have
equal status.) To obtain a probability we add the squares of each
part. Since the wave function has two parts, while the observable
probability has a single value, information is lost in making quantum predictions.
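Concretely (my illustration, in standard notation): if the wave function has the value \psi = a + ib at some point, the probability there is
|\psi|^2 = a^2 + b^2,
so, for example, \psi = 1, \psi = i, and \psi = (1+i)/\sqrt{2} all give the same probability. The relative phase of the two parts is exactly the information lost in passing from the wave function to observable probabilities.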
Superposition
We now come to an essential and widely misunderstood aspect
of quantum theory, the Superposition Principle. It is entailed by
the fact that the single particle equations of quantum theory are
linear. “Linear” means the wave function occurs once in each
term, and not, say, twice so it is squared. Consequently, adding any
two solutions gives a new solution. Physically, this means any
number of possible electron states may be combined to give other
states. Imagine overlapping waves in a pond. Each wave progresses independently, but some crests add, while others fill troughs The
result is an ever-changing pattern, like that in the middle of figure
14. Each wave is a simple solution and their superposition is just a
more complicated solution. Since different wavelengths represent
different momenta, it is foolish to expect a single momentum for a
complex pattern. The same applies to other dynamical quantities.
Figure 15 shows how shapes can be represented by adding together simple (sine and cosine) waves. Conversely, any continuous
shape can be analyzed into a mathematical sum (superposition) of
simple waves. Thus, superposition is neither mysterious nor magic.
It just describes how mathematical forms can be added to represent patterns. Just as a square wave can be made of sine waves, so sine waves can be made of square waves. There are untold complete
sets of basis functions we can superimpose to construct any smooth
function.12 The actual set of basis functions for quantum waves is chosen arbitrarily and pragmatically. We can choose any representation we like.
11 An operator is a mathematical entity giving a value when applied to a state description.
12 Typically, wave functions are represented using eigenfunctions of dynamical variables. This is physically convenient, but not mathematically necessary. The projection postulate says measuring a variable selects an eigenvalue and collapses the wave function to the corresponding eigenfunction.
Representational freedom is an example of the projection paradigm.
Physicists project quantum states into
alternate representations, expressing
them as vectors or functions, in laboratory or rest frames, using diverse basis
functions – all with equal truth, if not
equal convenience. No representation
is uniquely true. The quantum states
superimposed are an artifact of the
chosen representation, and are no more
or less real than those of any other representation.
Think of a scene not yet expressed
in words. I can ask questions about it,
and require answers in French or Chinese. When you answer in Chinese,
you are not answering in French. It
would be a mistake to think that, because you answered in Chinese or
French, the information was in that
language before you answered. The wave function corresponds to the unexpressed scene, and its various representations to expression in diverse languages.
Figure 15. Superposition is adding functions to describe other functions. Here, five sine waves are successively added to approximate a square wave.
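As a concrete instance of Figure 15 (standard Fourier analysis, my notation), a square wave of unit height is the superposition
f(x) = \frac{4}{\pi}\left(\sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots\right),
and keeping only the first five sine terms already gives a recognizably square shape.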
Now that you see how straightforward superposition is, you will
be annoyed by simple-minded popularizations in which quantum
superposition is described as a swarm or cloud – as though we had
real point-particles buzzing around in an indeterminate state. In
first quantized theory, this is wrong in three ways. First, there is
only a single value of the wave function at any time and place.
(The pond surface has a definite height.) Second, wave functions are not probabilistic, but determinate. (The probability of interaction with detectors is indeterminate, not the wave function in isolation.) Third, there is no reason to think wave functions describe
“particles.” Electrons’ particle properties depend on interactions
with the bound, localized electron waves in detectors. When electrons are not interacting, there is only a potential to be localized by
interactions with bound electrons.
Heisenberg’s Uncertainty Principle
In 1927, Werner Heisenberg discovered his famous Uncertainty
Principle. Taking the particle idea, he argued that since light is
quantized, a minimum amount of energy and momentum, that of
one photon, is required to illuminate whatever we observe.13 Since
we do not know the initial state of the illuminating photon, observation perturbs a system in an unknown way. If we could know the
initial photon state, we might correct for it. If photon energy could
be arbitrarily reduced, we might make its perturbation negligible.
Since we can do neither, there is an inescapable disturbance and
resulting uncertainty. This uncertainty links the precision of position and momentum inversely:
the more accurately we know
one, the less accurately we know
the other.
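The standard quantitative statement (not derived in the text) is
\Delta x\,\Delta p \ge \frac{\hbar}{2},
where \Delta x and \Delta p are the spreads in measured position and momentum: shrinking one forces the other to grow.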
This was unnoticed in Newtonian physics because the objects it deals with are so large that the perturbation of a single photon is negligible. Quantum theory deals with objects so small that the energy and momentum of a single photon can dominate their response.
Figure 16. Uncertainty. Five waves give a better model of a wide shape than twenty-five give for a narrow spike.
13 Heisenberg (1927), and (1930), Physikalische Prinzipien der Quantentheorie.
Interestingly, thinking of quanta as waves gives an identical result. Figure 16 shows that many more waves are needed to define a narrow spike than a wide shape. The spike represents a precisely specified position. The more precise the position, the
more waves we need to specify it.14 Since each wavelength is a
different momentum, the more waves we need, the more possible
values there are for a momentum measurement, which is trying to
determine the wavelength. Conversely, the more accurate the wavelength or momentum, the less certain the position.
A similar relation links observation time to the error in measured energy. Imagine counting wave crests and dividing by time to
obtain their frequency. Depending on when you start and end, you
could just miss a crest at the beginning or end and get an inaccurate frequency. The longer you count, the less one crest matters
and the more accurate your result. As the energy of de Broglie
waves is proportional to frequency, frequency errors mean energy
errors. Since physical variables represent measurables, our inability to measure precisely means that energy is indeterminate for
short time intervals. There is no precise energy. The physical consequence is that quanta may not conserve energy for short periods
– because the concept of energy is not well defined for such periods.
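A rough worked version of the crest-counting argument (my arithmetic): counting crests for a time T fixes the frequency only to about \Delta\nu \approx 1/T, so by E = h\nu the energy is uncertain by roughly
\Delta E \approx \frac{h}{T}, \qquad \text{i.e.}\quad \Delta E \cdot T \approx h.
The shorter the interval, the less well defined the energy, which is why energy conservation cannot even be checked over very short periods.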
To summarize Heisenberg’s argument: our inescapable lack of
knowledge about the details of the measurement process and the
wave nature of quanta each produces an inescapable uncertainty in
our measurements. Because prediction requires both position and
momentum, it is impossible in principle to define (not just measure) the initial state Laplace imagined for omniscient predictions.
So, physicists now reject Laplacian determinism.
In neither the particle nor the wave view does Heisenberg assume we are measuring something ontologically indeterminate. In
the particle picture, the probing particle disturbs determinate values in an unknowable way. In the wave picture, there is a determinate wave, but asking for the particle descriptors of position and
momentum is asking a poorly framed question because waves cannot simultaneously represent a unique position and momentum.
A Conceptual Problem
Since quantum waves typically combine many wavelengths and
are spread over space, asking for a single wavelength or position is
forcing quanta to become something they are not. This is not a
problem with quantum reality, but with our categories. What is ill-defined is not the state we are measuring, but the concepts used to describe it. The state has a definite form, but it does not correspond to the particle concepts of classical physics.
14 My illustrations use discrete waves in a Fourier series to illustrate what happens with a continuous set of waves in a Fourier transformation. Quantum uncertainty is a consequence of the de Broglie relations and the Fourier integral theorem.
A Newtonian particle state is specified by six degrees of freedom – three spatial coordinates and the corresponding components
of momentum. (A degree of freedom is a number required to specify a physical state.) Quantum waves, on the other hand, have an
infinite number of degrees of freedom. Think of constructing complex patterns out of sine waves as in figures 15 and 16. We must
specify the size of each of an infinite number of waves, as each
combination gives a unique result.
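Schematically (standard notation, not the author’s): a Newtonian state is the six-tuple (x, y, z, p_x, p_y, p_z), while a quantum wave must be specified by an amplitude for every basis wave,
\psi(x) = \sum_{n=1}^{\infty} c_n\,\varphi_n(x),
so its specification is the infinite list of coefficients c_1, c_2, c_3, \dots rather than six numbers.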
Thus, when we think of quanta as “particles” we are using an
archaic and mismatched concept. A wave with infinite degrees of
freedom can’t be described adequately by six values.15 Most quantum information is lost by thinking of waves as particles. It is best
to drop the particle concept, and expect neither well-defined momenta nor precise positions.
Collapse of the Wave Function
One of the most puzzling aspects of quantum theory is the so-called “collapse of the wave function” occurring when we make
observations. When we make a measurement, the wave function
stops being a superposition of many possible states and becomes
the one actual state corresponding to the value observed.16 This
idea, the projection postulate, is usually stated without argument in
standard texts, but is a focal point in philosophical discussions.
As we shall see, some question when and where measurement
occurs. Working physicists take it for granted that measurements
happen when quanta interact with detectors. Still, philosophic interpreters sometimes push it back to the point of awareness, or
even propose eliminating measurement as a separate process from
the theory, so that observed system, detector, and observer are all
aspects of a single, complex quantum state.17 We shall see that
these proposals rest on the fallacy of misplaced concreteness.
15 The point particle concept causes problems in unifying quantum theory with general relativity. See Greene (2002) p. 157. String theory, which has extended, string-like “particles,” avoids these problems by limiting the minimum measurable length, but it entails the contradiction of using the length concept outside its range of application to discuss string structure. For more on the importance of measurability see the section on quantum observation below.
16 The measured value is an eigenvalue of the corresponding operator, and the associated state is variously called an eigenstate, eigenvector or eigenfunction.
17 See Jammer (1974), The Philosophy of Quantum Mechanics, pp. 226f for a discussion of Henry Margenau’s objections to the projection postulate.
Bosons and Fermions
Quanta come in two types: bosons
and fermions. Photons, the quanta of
light, are bosons along with other particles such as π and K mesons. Boson
properties were first predicted by
Satyendra Nath Bose (1894-1974) and
published with Einstein’s help in
1924.18 Electrons, protons, neutrons, neutrinos and quarks are all fermions. Fermions are named after Enrico Fermi (1901-1954) who, with Paul Dirac (1902-1984), described their properties in 1926. Atoms can be either bosonic or fermionic.
Figure 17. Möbius Strip. Two circuits are needed to return to the starting point.
Being a fermion or a boson depends on a particle’s intrinsic angular momentum or spin. (Angular momentum is the type of inertia that keeps tops and skaters spinning.) The spin of particles is
quantized to values of 0, ½, 1, 1½, … in natural units. Particles
with integer spin (0, 1, …) are bosons, while those with half-integer spin (½, 1½, …) are fermions. Many bosons can be in the
same state. So, their waves may be added without difficulty. However, the Pauli Exclusion Principle prohibits two fermions from
being in the same state.19 Thus, each fermion is unique, while bosons can merge into a single wave.20
Fermions are also Möbius-like. A Möbius strip can be made by taking
a long strip of paper, twisting one end half a turn and gluing it to the other end. The resulting object has only one side and one edge. When we
rotate the spatial coordinates of a fermion a full circle, the wave function
does not return to its original value, but to minus its original value. This
is like drawing a line along a Möbius strip. After one circuit, our pencil
will be opposite its starting point. It requires a second circuit to return to
the start. This suggests that fermions may have a Möbius-like structure
that might help explain the exclusion principle.
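In compact form (a standard result, added for reference): under a full 2\pi rotation a particle of spin s picks up the factor
\psi \;\to\; (-1)^{2s}\,\psi,
so bosons (integer s) return to +\psi while fermions (half-integer s) return to -\psi, the Möbius-like behavior just described.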
Bose Condensates
Some quantum theories of mind hypothesize Bose condensates
in the brain. A Bose condensate is a state of matter composed of
bosons at temperatures near absolute zero. When bosonic atoms
are super-cooled, the vast majority of the atoms fall into the lowest possible energy state. Because they are bosons, these atoms enter an identical quantum state and exhibit exotic quantum effects at larger scales. This was predicted by Einstein and verified by Eric Cornell and Carl Wieman,21 who produced a Bose condensate using super-cooled rubidium-87 in 1995. Wolfgang Ketterle of MIT produced a Bose condensate independently the same year. All three received the 2001 Nobel Prize for this work.
18 Bose (1924), “Plancks Gesetz und Lichtquantenhypothese.”
19 They are repelled by so-called “exchange forces.”
20 Because photons are bosons, Maxwell’s equations, the basis of classical electrodynamics, work perfectly until we come to the emission and absorption of photons, which requires second quantization.
Since all atoms in a Bose condensate share one quantum state,
they have no internal degrees of freedom. Normal systems have
one overall position and momentum, and a different relative position and momentum for each particle. These last are called “internal degrees of freedom” because they characterize the system’s
internal structure. If we have a billion independent atoms in a system, the system has six billion degrees of freedom. In a Bose condensate, all internal values are the same for every atom, so six
numbers describe the entire system.
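The arithmetic behind this claim (my illustration): each independent atom contributes three position and three momentum coordinates, so N = 10^9 atoms give 6N = 6 \times 10^9 degrees of freedom; in the condensate the internal values coincide, leaving only the six overall ones.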
Bose condensates are very fragile and any interaction with other
matter can cause them to revert to normal gases. Still, while they
are present, surprising phenomena occur. The most relevant is superposition at “macroscopic” scales. Typically, Bose condensate
experiments involve about 10,000 atoms, about 1.4 × 10⁻¹⁸ grams, a truly minuscule amount of material. Andrews et al. (1997) used
quantum interference between two independent Bose condensates
to show they had coherent quantum waves. Later, Bloch, Hansch
and Esslinger (2000) showed these effects don’t occur in thermal
(warmer) systems of the same kind.
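As a check on that figure (my arithmetic, assuming rubidium-87 as in the Cornell and Wieman experiment): one Rb-87 atom has a mass of about 87 \times 1.66 \times 10^{-24}\ \text{g} \approx 1.4 \times 10^{-22}\ \text{g}, so 10,000 atoms amount to roughly 1.4 \times 10^{-18}\ \text{g}, as stated.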
Second Quantization
Heisenberg and Schrödinger’s new quantum theory adequately
described single particles interacting with classical fields, but not
particle-particle interactions. Also, in 1929, quantum theory was
not integrated with Einstein’s special theory of relativity. Dirac
resolved both problems. By 1930, he had made several discoveries
including a relativistic replacement for Schrödinger’s equation, and
second quantization.
21 Eric Cornell and Carl Wieman (1996), “Bose-Einstein Condensation.”
Second quantization treats the birth and death of particles. Since
we don’t understand the details of particle creation and annihilation, we represent these processes with mathematical operators that
can change the number of particles in state descriptions. This replaces Schrödinger’s wave function with an operator much as operators in first quantization had replaced momentum and position.
The values yielded by this operator are wave functions with varying numbers of particles.
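In the standard formalism (textbook notation for the bosonic case, not the author’s), these are the creation and annihilation operators a^\dagger and a, which raise or lower the number of quanta in a given mode:
a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle, \qquad a\,|n\rangle = \sqrt{n}\,|n-1\rangle, \qquad [a, a^\dagger] = 1,
where |n\rangle is a state containing n quanta.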
These ideas led to quantum electrodynamics (QED), a highly
successful theory with unanticipated consequences. According to
QED, space is not a void, but a plenum of negative energy electrons known as the Dirac sea. If a negative energy electron gets
enough energy, it can pop out of this sea and become observable.
The resulting hole is missing negative charge, which is effectively
a positive charge. We perceive the hole as a positron, which is like
an electron but with opposite charge.22 Thus, the theory predicts
holes that we can interpret as antiparticles or antimatter.
Second quantized field theory is not usually discussed in interpreting quantum theory, but it needs to be, as it changes the physics. In particular, in QED there are no single particle states. All
electrons constantly interact with electrons in the Dirac sea. Since
the wave function is replaced by an operator, the state of an electron is no longer represented by one wave function, but by the set
of all the possible ways in which an electron could navigate the
Dirac sea. The time evolution of quanta between observations is
no longer given by the single wave function of first quantization
but an unknown number of wave functions.
For example, energy indeterminacy allows negative energy
electrons to pop briefly out of the Dirac sea leaving a hole to create
a virtual electron-positron pair (figure 18.) (It is “virtual” since it
cannot exist long without violating energy conservation.) Our original electron may collide with and annihilate the virtual positron,
e+. This releases energy that the popped electron can later absorb,
allowing energy to be conserved. If so, the popped electron has
replaced the original. Thus, the detected electron need not be the
one we started with. We have no idea of when, where, if, or how
22
It also has opposite values for other quantities, e.g. electron number and lepton number.
15
many such events occur, so electrons’ behavior between observations is epistemologically indeterminate.
Second quantized interactions
result in infinities that can be
hidden by a redefinition process
called “renormalization.” While
infinities betoken deep gaps in
our knowledge, renormalization allows us to ignore the gaps and calculate some of the most precise values in physics: the anomalous magnetic moment of the electron and the Lamb shift of hydrogen energy levels. In interpreting the theory, however, we need to remember that the actual processes are more complex than the mathematics we can get away with, and our knowledge is incomplete.
Figure 18. At A, energy indeterminacy allows a virtual electron-positron pair to be created. At B, the original electron annihilates the virtual positron, e+, creating a photon. At C, the virtual electron absorbs the photon, gaining enough energy to replace the original electron.
In sum, QED is closer to reality than first quantization. In it,
things change radically. The single wave function of first quantized
theory is replaced by an operator yielding many wave functions.
The number of particles is ill-defined and varies over time. While
each wave function develops deterministically, we don’t know
which wave function we actually have. Interactions between “our
electron” and the Dirac sea, such as that in figure 18, occur unobserved. Here we have a probability definitely based on ignorance
of known processes.
Interpretation
Wave-particle duality, quantum indeterminism and the unobservability of the wave function have led to controversy over the
interpretation of quantum theory. Lately, the question of non-local
interactions has added to the confusion. I won’t discuss all the interpretations, but I hope to explain the salient points well enough to
resolve much of the perplexity.
The Copenhagen Interpretation
In 1929, quantum theory’s founders met in Copenhagen to discuss the meaning of quantum uncertainty and probability. The resulting Copenhagen Interpretation (CI) is “synonymous with indeterminism, Bohr’s correspondence principle, Born’s statistical
interpretation of the wave function, and Bohr’s complementarity
interpretation of certain atomic phenomena.”23 The most controversial aspect of the CI is that uncertain measurability reflects ontological indeterminacy – what we observe is not determined by
what exists prior to observation.
Just to be clear, there is no question of how to apply quantum
theory. There the CI continues to be accepted. The controversy is
over the meaning of what is calculated.
From its beginning, the CI encountered resistance. Einstein’s
statement that “God doesn’t play dice” is famous, but Planck,
Schrödinger, Max Laue and others also rejected the CI. David J.
Bohm (1952) presented a deterministic model of quantum theory
in which particles are guided by waves.25 It is non-local, which is
to say that happenings at one place depend on events at distant locations. Bohm’s work led De Broglie to abandon the CI and to develop a deterministic nonlinear quantum theory.26 Today, the CI is
no longer a majority position. An informal poll27 (Table 5) showed that just over a quarter of physicists still support it.
Interpretation                   Votes    %
Copenhagen                         13    27%
Many Worlds                         8    17%
Bohm                                4     8%
Consistent Histories24              4     8%
Modified Dynamics                   1     2%
None of the above/undecided        18    38%
Table 5
23 Faye (2008), “Copenhagen Interpretation of Quantum Mechanics.” See also Hanson (1959), “The Copenhagen Interpretation of Quantum Theory.”
24 See Omnès (1999), Understanding Quantum Mechanics, chapter 13; Griffiths (2003), Consistent Quantum Theory; Dowker and Kent (1995), “Properties of Consistent Histories.”
25 See also Albert (1994), and Goldstein (2009).
26 De Broglie, Tentative d’Interprétation Causale et Nonlinéaire de la Méchanique Ondulatoire (1956). Relativistic extensions of de Broglie’s nonlinear theory have since been developed, e.g. Kirilyuk (1999), “Double Solution with Chaos.”
27 Tegmark (1998), “The Interpretation of Quantum Mechanics.” Stapp (private communication) believes this poll is unrepresentative.
Einstein-Podolsky-Rosen Paradox
An important battleground in the debate over the CI centers on
the Einstein-Podolsky-Rosen (EPR) paradox.28 Bohr and other supporters of the CI thought that physics had gone as far as possible in
laying the foundations of quantum theory, so that looking for
something more “complete” showed a lack of understanding of the
new realities. Einstein believed that quantum theory was incomplete on two counts: (1) It says nothing of what happens between
measurements. Remember that the probability calculated is not the
probability of a particle being at a location, but the probability for
an interaction at a location. (2) In other statistical theories, e.g. statistical mechanics or throwing dice, we know what we are ignorant
of. More precisely, we know what information we would need to
avoid probability. Quantum theory tells us neither what we are ignorant of, nor what we are averaging to obtain probabilities. It just
gives us an equation and says if we do thus and so, we shall get the
probability of an observation.
Einstein, Podolsky and Rosen (1935) proposed a thought experiment in which two particles are sent off in opposite directions
from a common starting point, conserving both momentum and relative position. Once they are far enough apart not to interact, we
can measure the position of one and the momentum of the other,
and get a complete description.
Einstein had asked Podolsky to write the article, and was quite
disappointed. Podolsky’s formulation cannot work. If we know the
exact initial position, we shall be totally ignorant of the original
momentum, and vice versa. Subtracting a precise measured momentum from an imprecise initial momentum gives an imprecise
estimate of the unmeasured momentum. Einstein complained and
said whether or not we could measure position and momentum
simultaneously “is sausage to me.” Subsequently, Einstein made
clear that the EPR
paradox forces us to relinquish one of the following two assertions: (1) the description by means of the psi-function is complete, (2) the real states of spatially separate objects are independent of each other.29
28 The history is from Fine (1996), The Shaky Game, and Fine (2004), “The Einstein-Podolsky-Rosen Argument in Quantum Theory.”
Einstein did not need Podolsky’s formulation to make his case.
His idea can be seen in the measurement of a single variable, say
momentum. Suppose we take a system with zero total momentum
and split it into two fragments (quanta). After a time, each fragment can be considered isolated. Certainly, the fragments are as
much in isolation as any other quanta, because all quanta emerge
from some interaction. If quantum theory is complete, then before
measurement, each fragment’s state is represented by its wave function, and is really a superposition of many momenta. Now measure the momentum of one fragment. Without observing the other
fragment, we know it has the opposite momentum. Thus, with no
local interactions, the unmeasured wave function suddenly changes
from the superposition of many momenta to a single momentum.
Einstein concludes either the wave function did not really represent the state of the particle, or, contrary to our physical intuitions,
measuring one fragment instantaneously changes the other
fragment. Thus, quantum theory is incomplete, or it is not local,
where “not local” means that interactions far removed from an
“isolated” system can affect it. This linking of isolated systems because of a shared past interaction is called quantum entanglement.
Hidden Variables and Locality
One approach to completing quantum theory is to posit unobserved or hidden variables. Then quantum probability reflects our
ignorance. Bohm’s theory takes this approach. Von Neumann and
others believed they had proven such theories impossible, but J. S.
Bell showed their proofs defective.30 Hidden variable theories, including Bohm’s, are non-local. So, completing quantum theory
does not resolve non-locality or quantum entanglement.
Bohm proposed a variant EPR experiment, EPRB, in which we
observe the spin of particles with a common origin instead of their
position and momentum. Bell derived an inequality that can be
used to falsify local hidden variable theories.31 First Freedman and
Clauser,32 then Alain Aspect et al.,33 observed the spin (polarization) of entangled photons to test Bell-like inequalities. The results falsify local hidden variable theories while confirming quantum theory. Critics have argued that the detected photons might not be representative of the emitted photons. Still, most physicists think local hidden variable theories are ruled out. The choice seems to be between non-local hidden variable theories and quantum theory with local equations, but non-local implications.
29 Quoted in Schilpp (1949), Albert Einstein: Philosopher-Scientist, p. 682. The “psi-function” is the wave function, labeled ψ in Schrödinger’s equation, not a psychic effect.
30 Bell (1966), “On the Problem of Hidden Variables in Quantum Theory.”
31 Bell (1964), “On the Einstein-Podolsky-Rosen Paradox.”
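For reference (the standard CHSH form of Bell’s inequality, not given in the text): if local hidden variables fixed the outcomes, the correlations E between results at detector settings a, a' on one side and b, b' on the other would obey
|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2,
whereas quantum theory predicts values up to 2\sqrt{2} for suitable settings. The experiments cited above found the quantum value.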
Several points need to be made:
1. As Bohr noted, the observed system does not disturb its entangled partner dynamically. The process is not mechanical, but
“an influence on the very conditions which define the possible
types of predictions regarding the future behavior of the system.”34
Our information is changed and not nature.
2. Despite the change being informational, there is no transfer of
information from the measurement to the remote entangled
wave function. This implies:
a. That there is no violation of special relativity, which precludes signals faster than the speed of light.
b. There is no reduction in what is possible. Recall that information is the reduction of possibility. If no information is
transferred from the measurement site to the entangled site,
the measurement does not change what is possible at the entangled site. Even though the wave function collapses, that
collapse does not reflect a change of disposition or possibility.
c. Consequently, whatever happens at the entangled site is just
what would have happened if no measurement had been
done first. This is confirmed by point 3.
d. Finally, since there is no transfer of information, any possible nonlocality is irrelevant to any quantum theory of mind.
3. According to relativity, simultaneity is relative for events separated in space. So, if the measurement of A and the collapse of B’s wave function are simultaneous in one coordinate frame, in another frame, the collapse of B’s wave function will occur before measuring A – giving rise to backward causality. In a third frame, the reverse will be true: measuring A will happen with no collapse of B’s wave function until later. This confirms point 2c because if a measurement of B is done just after measuring A in one frame, in another frame of reference the measurement of B can be done before measuring A, and the entanglement “influence” passes in the opposite direction.
32 Freedman and Clauser (1972), “Experimental Test of Local Hidden-Variable Theories.”
33 Aspect et al. (1981), “Experimental Tests of Realistic Local Theories”; (1982a), “Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment”; (1982b), “Experimental Test of Bell’s Inequalities Using Time-Varying Analyzers.”
34 Bohr (1935), “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” p. 700.
The collapse of entangled wave functions is subjective. First, its time of occurrence and causal relations depend on the observer’s frame of reference. Second, it can’t affect subsequent outcomes
(2c). This supports the position that all probability is intrinsically
subjective. As Bohr implied, the only difference in the entangled
wave function before and after its collapse is in our knowledge.
Schrödinger’s Cat
In his correspondence with Schrödinger, Einstein proposed an unstable powder keg as a counterexample to the idea of macroscopic
superposition – arguing it could not possibly be in a superimposed
exploded/unexploded state. In response, Schrödinger published a
paper discussing his notorious cat.
A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in
a Geiger counter there is a tiny bit of radioactive substance, so small,
that perhaps in the course of the hour one of the atoms decays, but
also, with equal probability, perhaps none; if it happens, the counter
tube discharges and through a relay releases a hammer which shatters
a small flask of hydrocyanic acid. If one has left this entire system to
itself for an hour, one would say that the cat still lives if meanwhile
no atom has decayed. The psi-function of the entire system would
express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts. 35
35 Schrödinger (1935), “Die gegenwärtige Situation in der Quantenmechanik.”
Figure 19. Schrödinger’s Cat
In this thought experiment, there is a 50% probability that a nuclear decay will
occur and trigger the release of cyanide. This is supposed to give a superposition of a live and a dead cat.
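The even odds can be made concrete (my arithmetic, assuming the usual exponential decay law): for a single atom, or reading \lambda as the total decay rate of a tiny sample, the probability of at least one decay by time t is P = 1 - e^{-\lambda t}, which equals 1/2 exactly when t = \ln 2/\lambda, one half-life. Choosing the sample so that the hour is one half-life gives the 50% figure in the thought experiment.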
This is extremely anthropocentric, as the result depends only on
human observers, ignoring what the cat experiences. Further, it has
no experiential support, nor can it, since whenever a human looks,
the wave function collapses and the smearing goes away like a shy
fairy who appears only when no one is peeking. In any philosophy
basing meaning on possible human experience, this is meaningless.
The most striking and neglected fact is that the wave function
does not really collapse at all! The “collapse” is an artifact of the
chosen representation. Suppose that we measure the position along
the x-axis. If we specify it exactly, there is a collapse to a single
position, but the momentum expands to infinity in both directions!
In other representations, there is simply a reorganization to account
for the new data. “Collapse” gives the misimpression that an indeterminate multiplicity has become a determinate reality. Actually,
one indeterminacy replaces another. This goes unnoticed because
we are focusing on one aspect of the system to the neglect of others. The collapse in our focus is noticed, while the new indeterminacy in conjugate variables is ignored.
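A standard illustration of this point (my notation): a state collapsed to a sharp position x_0 has, in the momentum representation, the form
\phi(p) \propto e^{-i p x_0/\hbar},
whose modulus is the same for every p. The “collapse” in position is, in the conjugate representation, a spreading over all momenta.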
If you are open to more brutal thought experiments, it is easy to
show that Schrödinger’s view is incoherent.36 Replace the cat with
a human observer and arrange it so that if he dies, he dies before he
is aware of dying. If he lives an hour, he is 100% certain the decay
has not occurred, and there is no superposition of live and dead
human. To those outside the box, there is a 50% chance he is alive and a 50% chance he is dead. Thus, (1) the presence or absence of a real superposition makes no difference whatsoever to outside observers and (2) the probabilities calculated depend on the observer’s information, and are a subjective measure of ignorance rather than objective, like any other probability.
36 This thought experiment is related to, but developed independently of, the quantum suicide thought experiment developed by Hans Moravec in 1987 and by Bruno Marchal in 1988. Their ideas were further developed in Max Tegmark (1998).
The best is yet to come. Suppose that after a few seconds the
decay occurs and the inside observer dies. He never observed the
decay, and so there is no collapse of the wave function. Since
Schrödinger sees uncertainty as ontological indeterminacy, for outside observers there really is a 50% chance the inside observer is
alive. So, when the box is opened, he can be resurrected! As before, the probabilities are observer dependent and subjective. As
long as the inside observer lives, the probability he calculates for
his survival will increase, starting at 50% and reaching 100% at the
end. For those outside, it remains a constant 50% for the duration
of the experiment.
Assuming that quantum probability is objective led to the conclusion that it is subjective. Thus, objective quantum probability is
logically inconsistent. So, where did Schrödinger’s interpretation
go wrong? The physical bases of indeterminism in Heisenberg’s
original uncertainty paper are: (1) an unknown perturbation to the
observed system by the measuring process, and (2) the wave nature
of quanta. Neither has a significant macroscopic effect. We can
observe if the flask of hydrocyanic acid is broken without risk of
breaking it by our observation.
Quantum Observation
In his Mathematical Foundations of Quantum Mechanics, John
von Neumann divided quantum dynamics into process steps commonly used as a basis for discussing quantum events. As elaborated by Henry Stapp, the steps are:
Process 1 (the Heisenberg process) is the selection of what to
observe. This implicitly fixes a representation for the wave function, namely a superposition of eigenstates of the variable to be
observed. (Eigenstates are “pure states” in the sense that they are
characteristic of a single value of the associated variable, and so
not superpositions of several values.)
Process 2 (the Schrödinger process) is the unitary (probability
conserving) development of the wave function or state vector. The
equations define the time-evolution of the probability amplitude in
a completely determinate way.
Process 3 (the Dirac process) is the observation process involving collapse of the wave function or reduction of the state vector to
one of the eigenstates defined in process 1. This is the process
through which multiple potential outcomes are reduced to one actual outcome. Randomness enters via this process.
Because von Neumann focused on the formalism, his Process 3
reduces four natural process steps to one formal process. This
can result in confusion as to when the wave function collapses.
Most working physicists see it happening at the detector, while
more philosophically inclined researchers, including Henry Stapp,
see it as occurring at the point of awareness. When we become
aware, our ignorance disappears, and that is a collapse, albeit not a
physical one. This is the kind of collapse involved in the EPR paradox, and reflects our ignorance. However, interference experiments show that superposition is a physical phenomenon as well,
and the question is where does the physical collapse occur?
In making any quantum observation, four logically distinct reductions of potency to act37 occur: detection, measurement, sensation, and apprehension.
Detection. Detection requires interactions specified jointly by
the interacting elements. Heisenberg, Bohr and others noted it is an
error to regard detection as a passive reception of information from
the detected particle (the Platonic-Newtonian myth of passivity.)
Rather, detection events, like cognitive events, combine information from the objective object with information about the observing process. This dependence of the measured value on the
state of the detector is the basis of Heisenberg’s argument for uncertainty. The collapse of the wave function is a transition from an
unmeasurable quantum state to a measurable classical state.
(Wave functions are unobservable and so not measurable prior to
detection.)
In detection, the wave function interacts with detectors such as
photographic plates, bubble chambers, or scintillating crystals.
These are interactions with electrons bound in atoms. Consider the
decay of a spin 0 particle into two spin ½ particles. Conservation of angular momentum requires the decay products to have opposite spins. Measure one with a detector set to spin up or down. For the other, set the detector at right angles to the first. According to quantum theory, each will detect a spin of plus or minus ½ along its axis. Since the measured spins are in different planes, they can’t add to zero. If the system is the isolated particles, angular momentum is unconserved. The detectors must supply angular momentum. Thus, measured values are not a property of particles alone, but of the particles and detectors jointly.
37 Heisenberg, heavily influenced by Aristotle, applied the distinction of potency and act to the measurement problem in his Physics and Philosophy (1958b). Heisenberg agrees with Aristotle in seeing potencies as the ground of objectivity.
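In standard notation (not the author’s), the decay products are in the singlet state
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_1|{\downarrow}\rangle_2 - |{\downarrow}\rangle_1|{\uparrow}\rangle_2\bigr),
and a spin component measured along any one axis gives \pm\tfrac{1}{2} (in units of \hbar). With the detectors set at right angles, each still returns \pm\tfrac{1}{2} along its own axis, so the two recorded values cannot cancel; this is the sense in which the detectors must supply the balancing angular momentum.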
Assuming that the physical collapse of the wave function is delayed beyond detection to the point of awareness has two consequences. First, we have discarded the very reason for using detectors. The function of detectors is to make the quantum wave’s potential to interact actual, determinate, and measurable. If it remains
potential, indeterminate and immeasurable after detection, what
function do detectors serve? Second, we still need to make the system measurable, so we still need detectors. Whatever makes the
quantum state measurable is, by definition, a detector.
Measurement. Detection events create discrete and/or continuous
quantities (e.g., counter pulses or measurable traces). Critically, a
quantity is a potency. Aristotle saw that an objective quantity is not
an actual number or measured value, but is countable or measurable.38 This avoids a key difficulty in the philosophy of measurement. Since the measurable and the measuring process jointly specify values, there is no mystery in an object giving different values
in different measuring processes.
Obviously, we get different numbers using inch and centimeter
rulers. Less obviously, we get different values with different processes. In relativity, measured lengths depend on our frame of reference. When we measure light polarizations, the values vary with
the axes chosen. We can also measure right and left circular polarization. These variations depend on changing the measurement process, not the measurable. Measurables have no value without
measurement. Measurability is a disposition to give determinate
values in response to well-defined processes.
38 “‘Quantity‘ means that which is divisible into two or more constituent parts, each of which is naturally a unity and an ostensible thing [tóde ti = this something]. A quantity is a count if it is countable and a magnitude if it is measurable. It is a count if it is potentially divisible into discrete parts, and a magnitude, if it is potentially divisible into continuous parts.” Aristotle, Metaphysics Δ, 13, 1020a8-13. Translation mine. Note the potential nature of all quantity.
Heisenberg showed that quantum measurement is not well defined. Detectors have unknown initial states, and any attempt to
learn their initial state leads to an infinite regress. Measurements
are a joint product of the unknown states of both detector and the
target system. Bohr wrote that the principle of complementarity
implies the impossibility of any sharp separation between the behaviour of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear.39
The essence of a measurement device is to output a measured
value. If it does not, it is not measuring. Thus, there is no physical
reason to think quantum uncertainty extends past the interaction of
quanta with the detection/measuring system. We routinely make
automated measurements in factories and physics experiments.
Why should automated measurement of quanta be different from
any other automated measurements? The same physics governs
both.
Sensation. The next reduction of potency to act in the chain is the
sensible output of the measuring device being sensed. The measured value is sensed by an observer, typically via a visual indicator. The indicator has the potential to inform the eye, and the eye
has the potential to see its configuration. In looking at the indicator, both potentials are realized in the same act. If the wave function has not yet collapsed, there is no information to inform our
senses. (Remember information is the reduction of possibility.)
The act of sensation informs the sense organ. The essence of sensation is that of all the things we could sense, we actually sense one.
If the wave function has not yet collapsed, this is impossible.
Cognition. The final step is awareness of the result. Processing the
visual image results in the content we become aware of. Here the
mind’s ability to be aware, and the content’s ability to inform awareness are jointly actualized. Again, without well-defined information, the mind cannot be informed.
There is a four-step process here. There is no reason to delay the
collapse of the wave function beyond detection. All physical considerations and the analogy with other measurement processes indicate that detection and measurement should yield a well-defined
value. That is where indeterminacy is resolved. However sensible this may seem, it disagrees with quantum orthodoxy.

39 Quoted in Klein (2008), “Bohr, Niels.” Note the analogy between the quantum measuring process and human experience with its mix of objective and subjective objects.
Von Neumann argues that both macroscopic and microscopic
objects40 are fully described by their wave function. He argues that
since both the detected particle and the detector are quantum systems, the combination is a quantum system. According to quantum
theory, if the measuring apparatus is unobserved, it should develop
as a deterministic superposition of the possible outcomes of the
measurement. Thus, a visual indicator of the measurement would
be “smeared” instead of displaying a determinate value.
We have three options:
1. Our subjective awareness of the observation determines physical states.
2. There is something about detection and measurement systems
not described by a linear superposition of states.
3. There is no collapse of the wave function.
Option 3 has two branches: (a) modal interpretations which have
little support among working physicists because they are extremely
complex mathematically and so far unable to deal with the continuous state spaces physics actually uses;41 and (b) the many worlds
interpretation.
Many Worlds Interpretation
We described the MWI earlier in connection with multiverses. Here we shall critique it.
Quantum physics allows two experimental outcomes for the spin
of an electron (say, up or down). If the observer is subject to the
Schrödinger equation, then the state after the experiment is a mix
of a spin up measurement with an observer’s brain representing it,
and a spin down measurement with her brain representing that.
While Everett did not say the universe splits with each quantum
observation, epistemologically it would, and so the MWI is self-contradictory.
40 There are several logical problems here. (1) It extends quantum theory beyond its verified realm of application. (2) There is danger of committing the fallacy of misplaced concreteness by treating particles in a complex context the same as those in relative isolation. (3) It rejects the correspondence principle. (4) It treats macroscopic systems that measure quanta differently from other systems that are subject to the same laws but do not measure quanta.
41 See Dickson and Dieks (2007), “Modal Interpretations of Quantum Mechanics.”
Operationally, the universe may be defined as the source of our
knowledge of reality, for “reality” means what we experience
when we are of sound mind. So, if the universe we experience is
not that in which an observer has experienced spin down, and the
spin down result is real, it is real in another universe. The epistemological Principle of Contradiction says an observer cannot know
an electron state is both spin up and not spin up. The ontological
Principle of Contradiction says that it cannot actually be both spin
up and not spin up. (The standard interpretation of quantum theory
allows the potential for spin up and spin down to co-exist, but not
their actuality.) To know p is to have our potential for knowing p
actualized. We cannot know p and not actually know p. Therefore,
to allow for the mix, there must be two actually knowing observers
in two actually distinct universes, but this contradicts the logical
basis of the MWI which assumes only one universe.
The MWI has come to mean something independent of the logical basis Everett used to construct it. It now means that whenever a
wave function collapse might occur, what really happens is the
whole universe splits into all possible outcomes. As observations
of continuous variables have an indenumerable infinity of possible
outcomes, this leads to an infinite proliferation of real universes.
For example, each instant a nucleus might but does not decay, a
new real universe is spawned.
Here is the shy fairy syndrome gone wild! To say that there are
other unobservable universes is empirically meaningless, unfalsifiable and hence has no explanatory value. The MWI continually creates an indenumerable infinity of new, unobservable universes.
This is hyper-unparsimonious, for not only do we multiply entities
without necessity to explain events, but also to explain non-events.
Further, the entities we multiply are whole universes!
Other than the unbounded proliferation of shy fairies, what is
wrong with this line of argument? There are four problems:
1. It extends the Schrödinger equation beyond its verified range of
application. According to the correspondence principle, quantum
effects yield to classical ones at macroscopic scales. In reference to the application of the strict tenets of quantum theory to
mind, Nobel physics laureate Eugene Wigner (1902-1995)
wrote:
Its weakness for providing a specific effect of the consciousness
on matter lies in its total reliance on these tenets—a reliance
which would be, on the basis of our experiences with the ephemeral nature of physical theories, difficult to justify fully.42
2. If the whole universe is subject to a linear dynamic equation, there is no room for the nonlinear dynamics of general relativity43 and chaos theory. Thus, the MWI does not apply to the world in its nonlinear complexity, but only to an abstract linearized model of reality.
3. As we saw in analyzing the potencies actualized in coming to
know a quantum state (p. 23), there is no reason to think that
superimposed quantum states are actual as opposed to potential.
Many incompatible states can be potential at the same time. So
there is no necessity to superimpose an observer actually knowing spin up with an observer actually knowing spin down. An
observer with the potency to be informed either way suffices.
4. The Schrödinger equation is an empirical equation. Its sole
foundation is empirical observation, and its sole function is to
give empirical results. Everett's proposal is unempirical since
other worlds are causally disjoint from ours. Thus, his conclusion over-reaches its logical foundations.
The MWI also leads to a quantum probability paradox. Suppose
we did the Schrödinger cat experiment with a one in a million
probability of the cat dying. To account for the two possibilities,
we must spawn a universe with a dead cat and one with a live cat.
Both are equally real and deterministic. After many such experiments, we would expect the ratio of dead to live cats to approach
one in a million. Instead, it is always one to one. Half the worlds
must have a dead cat. We get a 50% probability of the cat dying, regardless of quantum probability. (The assumption of actuality as
opposed to potentiality is critical. Potencies can have probabilities
of actualization. Realities, however, always get a count of one.
There is no such thing as being 50% real.)
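To make the arithmetic concrete, here is a minimal sketch (Python; the one-in-a-million probability and ten repetitions are illustrative assumptions, not figures from the text) comparing naive branch counting with the frequencies the Born rule actually predicts:

```python
from itertools import product

p = 1e-6   # assumed Born-rule probability that the cat dies in one run
n = 10     # number of repeated experiments, kept small so all branches can be listed

branches = list(product([0, 1], repeat=n))   # 1 = dead cat, 0 = live cat in that branch

# Naive branch counting: every branch counts once, so half the cat-slots are dead
# no matter how small p is.
naive = sum(sum(b) for b in branches) / (n * len(branches))

# Born-rule weighting: each branch is weighted by its quantum probability.
born = sum(sum(b) * p**sum(b) * (1 - p)**(n - sum(b)) for b in branches) / n

print(f"branch-counting frequency of dead cats: {naive:.2f}")    # 0.50
print(f"Born-rule frequency of dead cats:       {born:.2e}")     # about 1e-06
```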
Tegmark, like Penrose, is a Platonist and tries to justify the
MWI using Platonism. He proposes we take an outside observer
stance, as opposed to an inside observer stance. The outside observer knows what is “really” happening, while the inside observer
only knows what she thinks is happening. This, of course, is the
omniscience fallacy. There is an equivocation here on “knows.”
“Knowing” names a human activity in which we are aware of being informed. As a human activity, it has limitations. Einstein derived the theory of relativity by reflecting on those limitations. Similarly, Heisenberg’s reflection on the limits of human observations gave us quantum indeterminacy. The omniscience fallacy ignores limitations and assumes knowledge only God can have. Tegmark’s outside observer is informed without an informing process. That is not human knowledge. Without the limitations of the informing processes, there is no quantum uncertainty – and so nothing for the MWI to interpret. Further, absent an informing process, no dynamic links knower and known, and there is no knowledge.

42 Wigner (1961).
43 See Weyl (1922), Space–Time–Matter, pp. 267f, for a discussion of nonlinearity and superposition.
Since Tegmark is a Platonist, empirical means of knowing play
little role in his analysis. His analysis fixes on the laws of nature to
the exclusion of data. Resurrecting Laplacian determinism, he
writes:
an infinitely intelligent mathematician given the equations of the
Universe could in principle compute the inside view, i.e., compute
what self-aware observers the Universe would contain, what they
would perceive, and what language they would invent to describe
their perceptions to one another. 44
How can we know what they would perceive without knowing
what is to be perceived? While this is a lapse, it is a telling one, for
it shows Tegmark’s thinking is fixed on dynamical laws to the exclusion of data. In Heisenberg’s discussion of uncertainty, the central question is “How do we measure?” Here that question – central
to both relativity and quantum theory – remains unaddressed.
Tegmark goes on to denigrate inside observers (us poor souls who live in the real world and have to gain our knowledge by experience). Given our limitations
there can never be a “Theory of Everything”, since one is ultimately
just explaining certain verbal statements by other verbal statements
— this is known as the infinite regress problem.
Since we are not explaining words but data, there is no question of
an infinite regress. If there were, we would all be equally trapped
because we are all inside observers.
There is a fundamental problem with letting data acquisition fall off the table and out of sight. In Platonism, laws are prior, and data is a nasty distraction from the contemplation of eternal verities. In real science, the laws emerge out of our attempts to find coherent and consistent means of explaining data. Without that nasty data, there might be laws, but we would never know them.

44 Tegmark (1998), p. 3.
Clearly, the MWI has little to recommend it.
Collapse on Awareness
As quantum theory does not claim to describe human awareness, a clear boundary can be drawn between physical systems and
awareness. Von Neumann, a mathematician, suggested that the
boundary between quantum and classical systems could be placed
with indifference either (1) at the interface of measured system and
measuring system or (2) at the interface between the measuring
system and the brain. London and Bauer went further, positing that
the observer’s awareness causes the collapse of the wave function.45 This interpretation is sometimes called “consciousness
causes collapse.” Of course, eliminativists for whom nothing is
distinctly mental, and reductionists for whom mind is a consequence of physics cannot consistently maintain this interpretation.
This option is attractive because (1) it uses a pre-existing
boundary of quantum theory, (2) entanglement showed us that the
collapse of the wave function is at least partly subjective and (3) it
gives mind a fundamental role in science. Extending this to willed
choices would be straightforward, but there are numerous problems with this option, some of which we have already touched on.
Subjective Invariance. Since all subjects observe the same outcome,
the outcome does not depend on the observing subject. If it were a
function of subjective awareness, different subjects would report
different outcomes. What is not subjectively variable is objective,
and so the state is objectively determinate or collapsed prior to our
being aware of it.46
Temporal Relativity. We can try to solve the subjective invariance problem by saying the first one to perceive the result collapses the wave function, but this conflicts with relativity. If two observers are equally far from an event in one reference frame, either
one could be the first observer in another frame. So, being “first”
has no objective basis. That means neither can be the one to collapse the wave function by awareness. If neither collapses it by subjective awareness, it is not collapsed by subjective awareness. We have seen a similar problem in discussing the EPR paradox.

45 London and Bauer (1939), La théorie de l’observation en mécanique quantique.
46 This line of argument is given by Wigner (1961).
Backward Causality. If the measurement M occurs at t1, the observer’s awareness A occurs at a later time, t2 > t1. If A determines
M, we have backward causality because a later event determines
an earlier one. However we dress it up with collapsing wave functions, alternate worlds or shy fairies, the actual claim is that a later
event determines an earlier one. If we allow one claim of backward
causality justified by unobservable paraphernalia, we open the gate
to any claim of backward causality.
Parsimony. Saying that wave functions collapse when we become
aware of the result (what result?) does not provide any mechanism
for the collapse. So, collapse on awareness requires a new mental
mechanism when we already have a mechanism in the detection
process. We have enough mind-body problems already.
Operational Confusion. We have already seen that this interpretation is the result of inadequate analysis of the four separate reductions of potency to act involved in observation.
Confusion of Intellect and Will. Making awareness change the
state of matter confuses the intellect (awareness) being informed
by reality with the will informing reality.
Anomalous Cognition. For awareness to determine quantum outcomes requires our mind to act differently in quantum and classical
observations. We have seen that in ordinary knowing the mind is
indeterminate and uninformed, requiring a determinate object to
inform it. Here we have the opposite: the object is an indeterminate
superposition of information, and the mind makes that indeterminate state determinate and informative. Here the subject, not the
object, determines information. We get an inconsistent idealism in
which the awareness sometimes informs the world, and at other
times is informed by it.
This is a deep problem. There is no source of information. The
mind is not yet informed, so it has no information with which to
inform the wave function, and the wave function is not collapsed
and so can’t inform the mind. Yet the process ends with both the
mind and the wave function truly informed. Information appears as
a deus ex machina, without prior foundation. Information is related
to entropy, so we have a transition from less ordered states to more
ordered mental and material states at no known cost in entropy.
This is an apparent violation of the second law of thermodynamics.
This leads to a further problem. If information can, and routinely
does, appear ex nihilo, without physical or mental source, both physics and logic are vitiated. This is not a matter of ignorance of atomic scale events, but of the creation ex nihilo of information on macroscopic objects. Similarly, valid truth appears in the mind with no
foundation. If this happens, invalid conclusions can be justified
by informatio ex nihilo. Logic and the scientific method are invalidated.
Elimination of Classical Physics. Anomalous cognition gets
worse when we realize that extending the accepted range of application of quantum theory to macroscopic objects eliminates classical physics. If quantum theory applies at all scales, classical physics is overturned in its verified range of application. This undermines the experimental foundations of quantum theory, all of
which depend on the classical interpretation of experiments.
Self-Contradiction. Further, if all experience is quantum experience, then the analysis of ordinary experience is an analysis of
quantum experience. So our initial conclusion that the object informs the mind rather than the reverse is based on, and so applies
to, quantum reality. Thus, collapse on awareness supports contradictory conclusions and is logically inconsistent.
Subjective Probability. As we saw in discussing Schrödinger’s cat
(p. 20), collapse on awareness leads to quantum probabilities that
vary with the observing subject.
Fundamental Inadequacy. There are definite measurements in
which we are unaware of the collapse of individual wave functions. For example, when we know a Geiger-Müller counter reading, we know a time-weighted average number of recent detection
events, but we do not know precisely where or when these events
occurred. Since we are unaware of particular events, our knowledge places no constraint on any particular collapse. Still, to have
detection events to average, the wave functions must collapse.
Thus, wave functions collapse absent our knowledge, which is impossible if collapse depends on awareness.
The combined weight of these arguments, from methodological,
classical, relativistic, causal, logical, epistemological, and psychological projections – all with the same conclusion – is overwhelming. Collapse on awareness is untenable.
Demythologizing Quantum Theory
If we want to understand quantum theory we should (1) minimize gratuitous assumptions, (2) stay within the theory’s verified
realm of application, and (3) avoid the fallacy of misplaced concreteness. Minimizing gratuitous assumptions forbids postulating
that randomness is ontological if sources of ignorance are known.
Similarly, nonlocal interactions may not be posited absent observable effects. Staying within a theory’s verified realm of application
precludes applying quantum theory macroscopically without experimental warrant. Avoiding misplaced concreteness requires that
we not oversimplify complexity. As we are adding nothing, this is
the Non-Interpretation of Quantum Theory.
Wave-Particle Duality
The particle properties of quanta occur in interactions with
bound atomic electrons. If an extended structure, such as a photon
or a free electron, interacts with a localized structure, such as a
bound electron, the interaction will be in the region of overlap defined by the localized structure. Thus, when a free electron wave
strikes a photographic plate, as in figure 14, the result is localized
without the necessity of postulating a particle. Further, once the
conserved quantities (energy, momentum, spin, charge, etc.) of the
incident electron wave have been expended on one interaction,
conservation laws will prevent another.
The intellectual climate when wave-particle duality arose was
far different than it is today. It was assumed that atoms were composed of hard, billiard-ball-like objects. If you already view matter
as composed of such objects, you will assume that localized interactions require particles. If you see atoms as wave structures as we
do today, then you will see that the quanta that interact with them can also be
wave structures.
Positing a particle nature for quanta rests on the evidence of localized interactions like those in figure 14. Some argue that the
Schrödinger equation assumes point particles, because it represents
quanta interacting at a single point. However, the Schrödinger
equation only yields a probability density (probability per unit volume). Thus, probability is proportional to the volume considered,
and the probability of interaction at a single point is zero. Given
that (1) interactions can be explained using the wave picture, and
(2) the particle model has too few degrees of freedom to account
for the observed wave phenomena, the particle model should be
dropped as a Newtonian anachronism.
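The point that a probability density assigns zero probability to any single point can be checked directly. The following sketch (Python, with an illustrative Gaussian wave packet of unit width) integrates |ψ|² over shrinking spherical regions:

```python
import numpy as np

sigma = 1.0   # illustrative packet width

def density(r):
    """|psi|^2 for a normalized 3-D Gaussian wave packet."""
    return np.exp(-r**2 / sigma**2) / (np.pi**1.5 * sigma**3)

def prob_within(R, steps=100_000):
    """Probability of an interaction somewhere inside a ball of radius R."""
    if R == 0.0:
        return 0.0
    r = np.linspace(0.0, R, steps)
    dr = r[1] - r[0]
    return float(np.sum(density(r) * 4.0 * np.pi * r**2) * dr)

for R in (3.0, 1.0, 0.1, 0.01, 0.0):
    print(f"R = {R:5.2f}  P = {prob_within(R):.3e}")
# P approaches 1 for a large region and falls roughly as R^3 -- i.e., with the
# volume -- so the probability of interaction at a literal point is zero.
```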
Globality
This insight allows us to demystify quantum entanglement and
non-locality. The problem is that what happens is very unintuitive
on the particle model. Because quanta are waves, their interactions
occur over an extended volume, not just a single point. Modeling
quanta as point particles leaves a residue of non-local information
unaccounted for. It is not surprising that deterministic particle
models of quantum dynamics must be non-local.
The non-locality47 and quantum entanglement in the EPR paradox follow from conservation laws. I said that the separated fragments “are as much in isolation as any other quanta, because all
quanta emerge from some interaction.” This explains non-locality.
Since the universe began in a compact big bang, all quanta are related via a network of interactions to a common origin. Since every
interaction obeys conservation laws, every interaction gives new
conservation constraints on quantum states. No quantum state is
completely random. All quantum states are severely entangled by a
system of conservation equations traceable to their common origin.
Since quantum waves have infinite degrees of freedom, they can
embody untold information. There is no new physics here – just
applying the physics we already know globally and consistently.
47 Non-locality is a recurring feature of quantum theory. The simplest relativistic extension of Schrödinger’s equation is non-local, even though it involves only local interactions (Bjorken and Drell (1964), Relativistic Quantum Mechanics, pp. 4f). Thus, non-local theories need not imply non-local interactions. Removing non-locality from relativistic quantum theory results in negative energy states, yielding the Dirac sea.
As we saw in discussing detection (p. 23), spin measurements
can only conserve angular momentum if the quanta interact with
the detectors. Since detectors contribute angular momentum to the
results, the measured values depend on the joint state of quantum
and detector. When an EPRB pair interacts with separate detectors,
the detectors’ states are not random either with respect to each other, or with respect to the measured pair. Rather, they are entangled
by their history in such a way as to prevent any result violating
conservation. No set of events anywhere in the universe can violate
any conservation law because of the network of constraining equations. If a detector interacts with one quantum of an EPRB pair to
produce a spin up measurement, the other detector-quantum subsystem is so constrained by prior events that it cannot violate conservation of angular momentum by also producing a spin up measurement. If no conservation law would be violated, there are no
constraints because the constraints are the conservation laws.
Thus, despite seeming non-locality, nothing need be communicated from one point to another at the present time. The detectors
are “pre-aligned” by their history: every particle shares a common
origin with, and is entangled with, every other particle in the universe. The logical problem here is misplaced concreteness: the abstraction of isolation ignores the global context of all physics. The
solution is not less entanglement, but more.
We know we are ignorant of the constraints on quantum states.
We can express this ignorance using probability. When we measure one particle of an EPRB pair, we gain information on the constraints. The wave function of the other particle collapses because
our ignorance of the other particle is reduced. As Bohr noted, this
is not a mechanical effect, but “an influence on the very conditions
which define the possible types of predictions.” The probability
has not changed because of non-local dynamics, but because our
ignorance is reduced. Necessarily, then, quantum probability is a
function of our ignorance of entanglement constraints. It may be a
function of other things as well. But, now we know for certain that
quantum probability is a function of ignorance, and one aspect is
our ignorance of entanglement constraints.
Newtonian particles, with six degrees of freedom, don’t have
the capacity to encode historical constraints. So, thinking of quanta
as particles forces us into non-locality. Waves have infinite degrees
of freedom, and more information can be stored in the chaos of the
Dirac sea (vacuum fluctuations). This allows the encoding of constraints from any number of interactions going back to the beginning of time. So, we need not imagine non-local “interactions”
communicating no real information. This analysis depends only on the fact that no interaction can manifestly violate conservation
laws. Again, there is no new physics.
This leads to another link between entanglement and globality:
Noether’s theorem. Conservation of momentum depends on spatial translation invariance, and
conservation of energy on temporal translation invariance. These
invariances are nonlocal as they depend on the dynamics remaining unchanged through space and time. Entanglement depends on
conservation laws. Conservation depends on global properties. So
entanglement necessarily depends on global properties.
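For reference, the one-dimensional textbook form of this connection (standard Lagrangian notation, not a formula from the text) shows how temporal translation invariance yields energy conservation; momentum conservation follows in the same way from spatial translation invariance:

```latex
% Define the energy function E = \dot q \,\partial L/\partial\dot q - L for a
% Lagrangian L(q,\dot q,t). Using the Euler-Lagrange equation
% \frac{d}{dt}\frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q},
\frac{dE}{dt}
  = \ddot q\,\frac{\partial L}{\partial \dot q}
  + \dot q\,\frac{d}{dt}\frac{\partial L}{\partial \dot q}
  - \frac{dL}{dt}
  = -\frac{\partial L}{\partial t}.
% If the dynamics is unchanged by time translation, \partial L/\partial t = 0,
% so E is conserved.
```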
Entanglement constraints explain the non-locality of Bohm’s
quantum theory. Deterministic quantum theories are non-local not
because of non-local interactions, but because non-locality formulates conservation constraints deriving from historical, entangling
interactions.
Ontological Randomness?
As we saw (p. 17), the CI is no longer a consensus. One reason
for the gradual abandonment of ontological indeterminism may be
the emergence of models of the internal structure of particles, such
as superstring theory. Another is that we now know many sources
of ignorance (known unknowns) contributing to quantum probability. We have more than enough known sources of ignorance to account for observed probabilistic effects. So, there is no need for
ontological randomness. We have reviewed the following sources
of ignorance:
• Quanta are waves that sometimes act like particles. We are ignorant of most quantum information because we try to model
waves with infinite degrees of freedom by particles with six.
This leaves us ignorant of an infinite amount of information.
• Quanta are measured by interactions with probes and detectors.
So, measurements reflect the combined state of the detector
and the observed system. In classical systems, the detector’s
uncertainty is negligible. In quantum systems, irreducible ignorance of the detector’s state gives an irreducible uncertainty in
our measurements. Ignorance of the detailed state of the detector is unavoidable, but neglecting this uncertainty in interpreting the theory is the fallacy of misplaced concreteness.
• We know that the vacuum is actually a plenum, the Dirac sea,
crammed with negative energy electrons. As we saw (p. 14),
we are ignorant of interactions with the Dirac sea. We know
neither the number of quanta actually present, nor their precise
states.
• Whenever quanta interact, conservation laws constrain the resulting quanta so that when we measure one, we learn something of the particles with which it has interacted. Each fermion is a unique individual bearing the marks of its lineage back to the big bang as “quantum entanglement.” Because we are ignorant of the dynamic history of any particle, the network of constraints due to global entanglement remains a mystery.
Figure 20. Quantum Determinism. Running a quantum process forward and backward in time without external interference always gives the same results; therefore, unobserved quantum processes are deterministic.
• A final argument, which we hinted at in chapter 3, shows that
events are not random in quantum theory. If an event is ontologically random and we repeat it several times, we shall get
random results. If we necessarily get the same result, it is deterministic. According to quantum theory, if we don’t touch a
system by measuring it, we can run processes forward and
backward in time repeatedly obtaining the same result. (See
figure 20.) If we run it backward in time, we get the original
initial state.48 If we then run it forward a second time, we shall get the same final state as before, not a random one. Thus, quantum processes are deterministic. Since touching the system with a measurement process breaks its time reversal symmetry, the measuring process is the source of randomness, not the state of the system – just as Heisenberg argued in establishing the uncertainty principle.

48 See Susskind (2008), The Black Hole War, pp. 182f.
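The forward-and-backward check in this argument is easy to illustrate numerically. Here is a minimal sketch (Python with NumPy; the four-level system and the random Hamiltonian are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian matrix serves as the Hamiltonian of a small, untouched system.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Unitary evolution U = exp(-iHt), built from the eigendecomposition of H.
t = 1.7
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                         # initial state

final_1 = U @ psi0                    # run forward in time
recovered = U.conj().T @ final_1      # run backward: the initial state returns
final_2 = U @ recovered               # run forward again

print(np.allclose(recovered, psi0))   # True
print(np.allclose(final_2, final_1))  # True: the same final state, nothing random
```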
Limiting Superposition
Quantum theory was founded on the Correspondence Principle,
the premise that macroscopic events are described by classical rather than quantum physics. If so, the collapse of the wave function
occurs in detection. As we have seen, temporal relativity and causality rule out any interpretation delaying the collapse of the wave
function past detection. Thus, some problem prevents the application of superposition to large scale objects, such as detectors and
human brains.

Figure 21. Small ranges of smooth functions can always be linearly approximated. The heavy curved line represents nonlinear dynamics, while approximations 1 and 2 use linear dynamics. Approx. 1 spans range 1 with Error 1. Approx. 2 has a smaller range, so Error 2 is smaller.
As superposition is a consequence of quantum linearity, any
limit to superposition implies nonlinear dynamics.49 The assumption of linearity always works as an approximation for small ranges
of variables. (See figure 21.) For example, linear approximations
often work in fluid mechanics even though it is nonlinear. Let’s
explore the possibility that quantum theory is a linear approximation to a nonlinear dynamics.
One consequence of linearity is that the solutions of equations
are scaleable, i.e. if ψ is a solution, then so is any multiple of ψ,
such as 10ψ or 0.1ψ. Thus, the fact that elementary particles have
only fixed masses and charges shows a fundamental nonlinearity.
Imagine an equation describing the structure of an electron, proton
or quark. If it were linear, we could scale its solutions and have
particles of any mass and charge. Since mass and charge are not
scaleable, any such equation is nonlinear. Further, if our fundamental theory were linear, we could not have quadratic gravitational and electrodynamic interactions.
Quantum nonlinearity is a fact. Although generally ignored in
discussing superposition, quantum interactions are nonlinear.50 The
annihilation and creation operators of QED disguise further nonlinearities. Photons with linear dynamics would add without interaction, but in QED, they can produce electron-positron pairs.51 So
linearity and the superposition it implies are invalid when there are
interactions, and interactions are an essential feature of the quantum chemistry describing macroscopic objects.
Because opposite charges shield each other, electrodynamic
nonlinearity does not generally increase with scale. So it may not
explain the correspondence principle.

49 My position is similar to Ludwig (1961), Werner Heisenberg. Dürr, et al. (1954), “Zur Theorie der Elementarteilchen,” proposes a nonlinear elementary particle theory. Wigner (1961) argued that quantum theory must be nonlinear. See also Primas (1997), “The Representation of Facts in Physical Theories.” Weinberg (1989), “Testing Quantum Mechanics” and (1993) Dreams of a Final Theory, pp. 68f, opposes nonlinearity.
50 Using the Einstein summation convention, the electromagnetic term in Dirac’s equation is eγ^μA_μψ, where e is the electron charge, γ^μ a relativistic spin matrix, and A_μ the four-potential. This appears linear, but the system of equations for interacting quanta is nonlinear. While A_μ is linear in the four-current J^μ, J^μ is quadratic in the wave function of the interacting quantum: J^μ ~ ψ†γ^μψ, where † denotes Hermitean conjugation. See Bjorken and Drell (1964), pp. 108ff, for an example.
51 To understand how second quantization conceals nonlinearity, see how Feynman diagrams represent nonlinear water waves in Hasselmann (1966), “Feynman Diagrams and Interaction Rules of Wave-Wave Scattering Processes.”
Gravity, however, is quadratic in mass, affects all particles, and is not shielded, but increases with system mass. Thus, gravitational nonlinearity gives a smooth transition from low mass quantum regimes where superposition works to larger mass classical regimes where it does not – allowing us to explain Bohr’s correspondence principle.

Figure 22. Decoherence due to a frequency difference. At the top is the sum of the two waves below. Initially, left, they are in phase and reinforce. At right, they are out of phase and cancel.

While nonlinear equations are generally insoluble in closed form, we can estimate how long it will take for gravitational nonlinearity to become important. As we saw (p. 4), every particle has a frequency due to its energy, mc². This frequency is changed by nonlinear gravitational effects, so that waves that reinforce in a linear theory may cancel when nonlinear effects are considered. This allows us to estimate how long the failure of superposition will take. For example, if nonlinearity changes the frequency by 1/1000, after 500 cycles a trough will be where a crest would have been. (See figure 22.) Thus, we would expect the linear approximation to break down in about 500 cycles.
The gravitational decoherence of two electrons separated by an
atomic radius would take six trillion times the age of the universe.
Thus, superposition is an excellent approximation for small numbers of particles, including Bose condensates with 10,000 atoms.
However, for larger objects, nonlinearity becomes critical. For
spheres, the gravitational decoherence time is inversely proportional to the fifth power of the radius.52 A 10 micron silicon sphere
has a decoherence time of about 28 microseconds. For observations on that scale or larger, superposition should not apply. However, a 0.1 micron silicon sphere has a decoherence time of about
280,000 seconds or 77 hours, meaning that quantum calculations extending over days should be valid. While these estimates ignore the details of nonlinear dynamics, they accord well with the scales at which we normally transition from classical to quantum theory and explain Bohr’s correspondence principle.

52 The de Broglie frequency is ν = E/h, where E is energy and h is Planck’s constant. For a sphere of density ρ and radius r, the gravitational energy is 16π²Gρ²r⁵/15, where G is the gravitational constant. The change in frequency due to gravitational nonlinearity is Δν = 16π²Gρ²r⁵/(15h). The time for decoherence is T = n/ν, where n = ν/(2Δν) is the number of cycles needed to get completely out of phase with its linear approximation. Thus, T = 1/(2Δν) = 15h/(32π²Gρ²r⁵).
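The numbers quoted above follow directly from the formula in note 52. Here is a sketch of the calculation (Python; the silicon density is an assumed value, and the quoted sizes are read as sphere diameters):

```python
import math

h   = 6.626e-34   # Planck's constant (J s)
G   = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
rho = 2330.0      # assumed density of silicon (kg/m^3)

def decoherence_time(radius):
    """T = 15 h / (32 pi^2 G rho^2 r^5), the estimate from note 52."""
    return 15 * h / (32 * math.pi**2 * G * rho**2 * radius**5)

for diameter in (10e-6, 0.1e-6):          # 10 micron and 0.1 micron spheres
    T = decoherence_time(diameter / 2)
    print(f"diameter {diameter * 1e6:5.2f} um  ->  T ~ {T:.1e} s")
# roughly 2.8e-05 s (about 28 microseconds) and 2.8e+05 s (about 77 hours)
```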
This resolves Schrödinger’s cat paradox. Superposition does not
apply at such scales. Nonlinear effects also undercut the basis of
the many worlds interpretation in a superposition of observer brain
states, and the collapse on awareness hypothesis. Superposition
fails with the first macroscopic interaction, typically at a detector.
Nonlinearity can force a dynamically determinate, but chaotic, collapse of the wave function. When combined with unknown initial
states, this explains observational indeterminism.
There is no doubt that quantum dynamics is nonlinear, and that
superposition breaks down given sufficient time. It is probable that
nonlinear gravitational interactions explain the correspondence
principle (the transition from quantum to classical mechanics as we
increase the scale of the systems we are investigating). Thus, tales of the smearing or superposition of large scale objects such as cats,
observers’ brains and planets are inconsistent with the accepted
formulation of quantum interactions.
We see that the macroscopic application of superposition involves the precise difficulties warned of in chapter 1:
1. By ignoring that electrons occur, not in isolation, but in the context of interacting systems, it commits the fallacy of misplaced
concreteness just as reductionism does.
2. It involves the same problem we saw in the opposition to Osborne Reynolds’ work and the difficulties encountered by long-range weather forecasting – the neglect of nonlinearity.
Application to Mind
We now have enough background to discuss the application of
quantum ideas to the mind intelligently.
General Considerations
Many quantum theories of mind face the same challenges, including:
• Problems of computational complexity.
• Neglect of the first-person perspective.
• Problems of scale and coherence, i.e. quantum theory applies to spatial scales small compared to the size of a single synapse, while the brain is largely insensitive to the random firing of a single neuron.
• If quantum physics is to play a unique role in our theory of mind, we need a specific mechanism by which it does.
• Quantum events are random, while the mind is intentionally directed.
Since these topics impinge on many theories, we can deal with
them in a general way.
Computational Complexity
No one can perform the quantum computations to predict the
behavior of an organelle, let alone a neuron or the entire brain. Supercomputers have only very recently become powerful enough to
calculate the dynamics of simple proteins.53 Harvard biological
chemist Alán Aspuru-Guzik, who uses computers to apply quantum theory to biological chemistry, tells us:
As the size of a system grows, the computational resources required
to simulate it grow exponentially. For example, it might take one day
to simulate a reaction involving 10 atoms, two days for 11 atoms,
four days for 12 atoms, eight days for 13 atoms, and so on. Before
long, this would exhaust the world’s computational power. 54
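The quoted rule of thumb is easy to extrapolate. A tiny sketch (Python; the doubling rule and the one-day baseline are just the illustration from the quotation):

```python
def simulation_days(n_atoms):
    """One day for 10 atoms, doubling for each additional atom, per the quoted rule."""
    return 2 ** (n_atoms - 10)

for n in (10, 20, 30, 50, 70):
    print(f"{n:3d} atoms -> {simulation_days(n):.3e} days")

# The age of the universe is roughly 5e12 days; the doubling rule passes that
# figure at about 53 atoms.
```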
Researchers hope that quantum computation, which is theoretically possible, but far from practical demonstration, may make
such calculations possible in the future. The information in proposed quantum computers would be stored in “qubits” implemented in quantum dots about 180 nm in diameter and containing
about 20 electrons whose spins represent data. If and when we have
quantum computers, we may be able to compare some of the
small-scale mechanisms proposed by quantum theories to experimental results. No one has yet implemented even the simplest
quantum computer circuit, and there are doubts that the components demonstrated can be scaled up to useful proportions.
53 ScienceDaily (2007), “Quantum Biology: Powerful Computer Models Reveal Key Biological Mechanism.”
54 ScienceDaily (2008a), “Quantum Computers Could Excel In Modeling Chemical Reactions.”
For the present, the detailed quantum mechanisms proposed by
quantum theories of mind are speculative and unfalsifiable.
First vs. Third-person Perspective
We know that a theory that ignores the first-person projection
and subjectivity does not address consciousness. Physics generally
takes an objective, third-person stance. Thus, theories which only
show how quantum considerations might affect brain dynamics can
play no more than a supportive role in a theory of consciousness.
Still, the role of the observer in quantum theory seems to involve subjective elements that provide two possibilities for bridging the subject-object gap. Henry Stapp,55 who is perhaps the most
careful and systematic quantum theorist of mind, has sought to exploit both. First, the idea of wave function collapse on awareness
hints at a connection between the intentional and physical theaters
of operation. Second, observers can choose what to measure and so
force the wave function to collapse into an eigenfunction of the
selected variable. This seems to present an entrée for volition in
physical processes.
Unfortunately, as we have seen, collapse on awareness has innumerable difficulties, and cannot be maintained.
That observers can force the wave function to collapse into a
specific subset of states is more promising. It shows that the very
structure of modern physics presupposes, as an independent input,
the human ability to choose. Stapp stresses this as a key point. Instead of assuming, as mechanistic determinists had, that human
choices can be computed from brain states, choice is an essential
independent variable in predicting observations. In measuring the
spin of electrons, for instance, we must input a choice of detector
alignment to predict the probability of the measured states. If we
choose a vertical alignment, we shall find an up or down spin. If
we choose a horizontal alignment, we shall find a right or left spin.
The probability cannot be calculated without the experimenter’s
choice of experimental arrangement. Unlike classical physics, in
which measured values are independent of the observer’s choice of what to measure, in quantum theory human choice is an irreducible primitive. Thus, Stapp justly says quantum physics has forced us into a different view of the experimenter.

55 Schwartz, Stapp, and Beauregard (2005), “Quantum Theory in Neuroscience and Psychology,” Stapp (2005), “Quantum Interactive Dualism,” (2006) “Quantum Approaches to Consciousness,” and (2007) “Quantum Mechanical Theories of Consciousness.” Many of Stapp’s papers are available at http://www-physics.lbl.gov/~stapp/stappfiles.html
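How the experimenter’s choice enters the calculation can be seen in the textbook treatment of a spin-1/2 measurement. A minimal sketch (Python with NumPy; the prepared state and angles are illustrative):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 0], dtype=complex)   # electron prepared spin-up along the z axis

def prob_up_along(theta, phi=0.0):
    """Probability of an 'up' result along the axis the experimenter chooses."""
    n_dot_sigma = (np.sin(theta) * np.cos(phi) * sx +
                   np.sin(theta) * np.sin(phi) * sy +
                   np.cos(theta) * sz)
    projector = (np.eye(2) + n_dot_sigma) / 2    # projector onto the +1 eigenstate
    return float(np.real(psi.conj() @ projector @ psi))

print(prob_up_along(0.0))        # vertical alignment: 1.0 (always up)
print(prob_up_along(np.pi / 2))  # horizontal alignment: 0.5 (right or left, equally)
```

No probability can be computed until the angle, the experimenter’s choice, is supplied.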
Yet, there is a critical ambiguity leading to uncertainty about the
meaning of Stapp’s analysis. Earlier, I quoted Heisenberg: “The
laws of nature which we formulate mathematically in quantum
theory deal no longer with the particles themselves but with our
knowledge of the elementary particles.”56 Based on this Wigner
wrote:
When the province of physical theory was extended to encompass
microscopic phenomena, through the creation of quantum mechanics,
the concept of consciousness came to the fore again: it was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to consciousness. All that quantum mechanics purports to provide are probability connections between subsequent impressions (also called “apperceptions”) of the consciousness, and even though the dividing line between the observer, whose consciousness is being affected, and the observed physical object can be shifted toward one or the other to a considerable degree, it cannot be
eliminated.57
Stapp follows Wigner in making consciousness a focus of quantum prediction.
What is the ambiguity? Heisenberg, an Aristotelian, said what
Aristotle said, viz. that we can’t know objects except by interacting
with them. Heisenberg did not shift the focus from the objective
object, the quantum system under study, to the subjective object,
the awareness by which we know quantum systems. Wigner and
Stapp do. Using a standard philosophical distinction, Heisenberg
considers knowledge formally, in terms of its content. Wigner and
Stapp consider it materially, as an instrument of thought. Heisenberg’s point is what I quoted from Aquinas in chapter 4: “even in
sensible things essential differences are unknown. They are, therefore, signified through accidental differences which arise from essential ones, just as a cause is signified by its effect...”58 We don’t
know quantum systems in se, as God knows them. We know them in a properly human way, by means of their (measurable) effects on us. Physics is not concerned with our means of knowing, human consciousness, but with intelligible physical systems.

56 Heisenberg (1958a), “The Representation of Nature in Contemporary Physics,” p. 99.
57 Wigner (1961), p. 185.
58 On Being and Essence in Goodwin (1965), p. 60. = De Ente et Essentia Caput VI, b.
The quantum mechanical sea change did not broaden the object
of physics to include mind. Stapp argues that it forced physicists to
change their worldview from one in which choices might be assumed to be consequences of physics, to one in which choices are
primitives entering quantum physics from the outside. It is true that
physicists take the choice of measurement to be an input to quantum physics, not an output. But, that only confirms that mind is not
the focus of quantum theory.
That observers can force the wave function to collapse into specific subsets of states is inadequate for a theory of mind. First, it is
a remote consequence of willing. Decisions necessarily begin by
changing our brain’s disposition and, through it, our acts. Some
acts affect experimental outcomes. So the effects of measurement
decisions on quanta are not immediate, but derivative and remote.
Second, the mind is not unique in this. Selection of measurements
can be made mechanically, without mental intervention. Spin orientations are often chosen automatically in EPRB experiments.
There is nothing intrinsically mental in determining what measurements to make and the resulting eigenstates. Finally, while
quantum theory takes these choices as an input without explaining
them, nothing in standard quantum theory prevents its application
to brain microprocesses. So, quantum theory does not force us to
think of mental operations as unique.
Stapp’s line of argument shows quantum theory is not all-encompassing, but assumes outside input, either from mind or from
classical processes. Thus, it allows mind to be a phenomenon independent of quantum mechanics without explaining it. It is silent on
mental life per se. A broader analysis is required to see the uniqueness of mental acts.
Scale and Coherence
While the eye has a 1% chance of detecting a single photon, it is
hard to see how quantum events can control the brain states coherently enough to be important. As Grush and Churchland (1995) put
it, “quantum level effects are generally agreed to be ‘washed out’
at the neuronal level.” Information processing in the brain is distributed over neural assemblages, in which the loss or misfiring of
a single neuron is not critical. The time and space scales of quantum events are very different from neural scales. Individual neurons fire in response to the integrated effect of excitatory and inhibitory signals at multiple synapses over time periods interminably long by quantum standards. Typical firing rates are 200-1000
Hz. Also quantum spatial scales are small compared to a synapse,
and smaller still compared to the assemblages representing mental
contents. Scale is a major problem for quantum theories of mind.
Anesthesiologist Stuart Hameroff believes microtubules (figure
23) might be the solution. Microtubules are organelles found in all
cytoskeletons or cell membranes. So they are involved in vesicle
bursting during neurotransmitter discharge. As Hameroff thinks
anesthetic loss of consciousness depends on microtubule mechanics, he posits that microtubules, not neurons, are the basic units of
biological data processing.59
Hameroff suggests that microtubule quantum states are linked
via gap junctions, allowing widespread state coherence in the
brain.60 He supposes these states to be Bose condensates shielded
from the brain’s heat by microtubules. (Bose condensates require
temperatures near absolute zero.)
Thus, coherent states might avoid
decoherence long enough to affect
neural function.
There are problems with this. First, medical consciousness is not awareness. Second, in Bose condensates atoms share an identical quantum state. This allows coherent action, but not in a helpful way. Bose condensates have only six degrees of freedom. Since each degree of freedom can encode information, independent atoms have more information capacity than the same atoms in a Bose condensate.61 Thus, positing Bose condensates reduces information capacity.

Figure 23. Model of Microtubule. Microtubules are about 25 nm in diameter with 8 nm long segments.

59 Hameroff (1987), Ultimate Computing.
60 Hameroff (1994), “Quantum Coherence in Microtubules.”
Tegmark62 calculated that environmental effects destroy quantum coherence in the brain in 10⁻²⁰ to 10⁻¹³ seconds. This is at least ten orders of magnitude faster than neural firing times and associated processes in microtubules. Hagan, Hameroff, and Tuszyński63 replied, arguing that hypothetical shielding mechanisms increased the time scales for Hameroff’s model to 10⁻⁵ to 10⁻⁴ seconds. Tegmark answered their criticisms satisfactorily,64 and their hypothetical shielding is experimentally unconfirmed. For these and other reasons, Koch and Hepp (2006) concluded that neural activation is adequately described by classical (non-quantum) physics.
Another approach to time scales is Stapp’s use of the quantum
Zeno effect, i.e. repeated observations stabilize quantum states.
This happens because every observation collapses the wave function, effectively reinitializing it, and preventing it from spreading
over time. The effect was proposed by Leonid Khalfin (1958) and
elaborated by Sudarshan and Misra.65 Stapp argues66 that focusing
awareness on particular contents is equivalent to observing the
quantum state of the brain, and that this explains the effectiveness
of attention in keeping contents actively in mind.
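The effect itself is uncontroversial and easy to model. A minimal sketch (Python; the two-level system, its oscillation rate, and the measurement schedule are illustrative, not a model of the brain):

```python
import numpy as np

omega, T = 1.0, np.pi / 2   # chosen so an unmeasured system has flipped completely by time T

def survival(n_measurements):
    """Probability the system is still found in its initial state after time T,
    when it is projectively measured n times at equal intervals."""
    dt = T / n_measurements
    return np.cos(omega * dt) ** (2 * n_measurements)

for n in (1, 2, 10, 100, 1000):
    print(f"{n:5d} measurements -> survival probability {survival(n):.4f}")
# A single late measurement finds the state gone; frequent measurement pins it
# near 1 -- the quantum Zeno effect.
```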
One objection is similar to that against Armstrong and Smart’s
idea that self-awareness is a form of proprioception. Being aware of conscious contents is
not the same as being aware of our brain state – nor is it the same
as observing the quantum substates constituting our brain state.
Conscious contents are formal signs, while brain states are at best
instrumental signs.
Another objection is that many different quantum substates are
equally consistent with neural firing, which is the basic computational event in the brain. Further, neural data processing is relatively immune to the misfiring of single neurons. (This is shown both
by neural net simulations, and by the observation that misfires
caused by TMS slow but do not prevent problem solving.) So, the
same contents can be encoded in multiple ways at both the neuron
and at the quantum substate levels. Since many quantum states can support any given contents, contents do not specify quantum states. They are not equivalent to quantum observations. So, the conditions for invoking the quantum Zeno effect are not met.

61 Mathematically, this assumes that information is encoded via topological transformations.
62 Tegmark (2000), “Importance of Quantum Decoherence in Brain Processes.”
63 Hagan, Hameroff and Tuszyński (2002), “Quantum Computation on Brain Microtubules: Decoherence and Biological Feasibility.”
64 See http://space.mit.edu/home/tegmark/brain.html.
65 Sudarshan and Misra (1977), “Zeno’s paradox in quantum theory.”
66 Schwartz, Stapp, and Beauregard (2005).
Quantum Mechanisms
If quantum theory is to help explain conscious experience, we
need a uniquely quantum mechanism with experiential consequences. Presumably, this mechanism will affect neural activity sufficiently to determine conscious contents. Many authors have sought
this mechanism in ion transport during neuron firing.
One of the best elaborated theories integrating quantum ideas on
the microscopic level is that of Beck and Eccles (1992) and its refinement in Beck (2001). They focused on quantum dynamics in
signaling at the synaptic cleft. Their mechanism involves quantum
barrier penetration,67 which allows electrons to transfer between
biological molecules in a way that is impossible classically. This is
mechanism may indeed be involved in exocytosis; however, that
does not explain how synaptic events relate to awareness. Beck
and Eccles claim that “mental intention (volition) becomes neurally effective by momentarily increasing the probability of exocytosis” without offering any dynamic hypothesis. It may happen, but
in their theory it is an unsupported deus ex machina.
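For scale, the size of a typical tunneling probability can be estimated with the standard WKB expression; the barrier height and width below are illustrative guesses, not values taken from Beck and Eccles:

```python
import math

hbar = 1.055e-34   # reduced Planck constant (J s)
m_e  = 9.109e-31   # electron mass (kg)
eV   = 1.602e-19   # one electron volt in joules

def transmission(barrier_eV, width_m):
    """WKB estimate exp(-2*kappa*d) for a rectangular barrier."""
    kappa = math.sqrt(2 * m_e * barrier_eV * eV) / hbar   # decay constant in the barrier
    return math.exp(-2 * kappa * width_m)

print(f"T ~ {transmission(1.0, 1e-9):.1e}")
# roughly 3.6e-05 for a 1 eV, 1 nm barrier: tiny, but not zero as it would be classically
```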
Physicist Gustav Bernroider acknowledges that a theory of
mind must link first- and third-person projections, or as he calls
them, neurophenomenal and neurophysical phenomena.68 His work
offers no such link. Instead, he and Sisir Roy consider how the
possible quantum entanglement of potassium ions with oxygen atoms might implement logic gates in a quantum computer.69 Remember, for supporters of the representational theory of mind,
providing a logic mechanism is giving a theory of mind. Without
judging the theory’s scientific merits, we can say that it neither
tells us how contents become intelligible, nor gets us closer to a
theory of subjectivity.
67 Barrier penetration is the process by which quanta can “tunnel” through barriers that would stop Newtonian particles. The process is not mysterious when we think of quanta as waves without hard edges. The barriers only attenuate waves, so a weakened wave appears on the far side of any quantum barrier. This allows electrons to “jump” over gaps that classical particles could not cross – for example from one molecule to another.
68 Bernroider (2003), “Quantum-neurodynamics and the Relation to Conscious Experience.”
69 Bernroider and Roy (2004), “Quantum-Classical Correspondence in the Brain.”
Although I passed over it in the last section, Stapp applies the
quantum Zeno effect to calcium ion transport in neurons. The objections I made to his use of the quantum Zeno effect excuse us
from examining its specific application to calcium transport.
David Chalmers tried to integrate collapse on awareness into a
quantum theory of mind.
Upon observation of a superposed external system, Schrödinger evolution at the moment of observation would cause the observed system
to become correlated with the brain, yielding a resulting superposition of brain states and so (by psychophysical correlation) a superposition of conscious states. But such a superposition cannot occur, so
one of the potential resulting conscious states is somehow selected
(presumably by a nondeterministic dynamic principle at the phenomenal level). The result is that (by psychophysical correlation) a definite brain state and a definite state of the observed object are also selected.70
We saw numerous difficulties with collapse on awareness
above. In discussing the quantum Zeno effect, we also noted that
knowing contents does not specify the brain’s quantum state. Lastly, Chalmers proposes no dynamics linking the intentional to the
physical projection. The conscious state is “somehow selected”
which, as I noted before, requires informatio ex nihilo.
Randomness vs. Intentionality
A central problem, which is usually neglected, is that quantum
theory is a theory of ignorance. Quantum theory terminates in predictions which are probabilistic because of inescapable ignorance.
On the other hand, the mind terminates in determinate states of
knowledge or commitment to action. It is hard to see, then, how
quantum indeterminism can be expected to explain mental determination.
Many have proposed that quantum indeterminism is the key to
free will.71 Eddington may have been the first to remark that, with
quantum indeterminism, physics “withdraws its moral opposition
to freewill” while at the same time noting that other grounds may
be found for denying free will.72 To allow freedom by unreflectively accepting physical indeterminism is a swindle.

70 Chalmers (2003), “Consciousness and its Place in Nature.”
71 E.g. Lucas (1970), The Freedom of the Will, Zohar (1990), Tipler (1994), Kane (1996), The Significance of Free Will.
First, if nature were ontologically random, would that explain free
will? There would be room for our will to collapse the wave function into the states required to carry out our decisions. But, this still
requires that our will act outside the laws described by physics.
Further, over time the probability distributions will be different
from those quantum mechanics predicts. So there is still a conflict
with physics in this scenario. However, this violation might not be empirically verifiable.
Second, at best quantum indeterminism holds out the possibility
of random outcomes. Free will requires agency. It is not enough
for the result of free acts to be unpredictable. They must be radically dependent on a free agent. Random events are clearly incompatible with my experience of agency, which is nothing like my surprise at a random throw of dice. The logic of predictability works
in one direction only. If actions are predictable, they are latent in
preexisting conditions, and so not radically dependent on a personal agent. But if they are not predictable, they could be so for reasons other than free agency.
Quantum indeterminacy does no more than Eddington said – it
causes physics to withdraw from the field, leaving the question
open to other means of investigation.
Penrose and Hameroff
Penrose and Hameroff pile one speculative hypothesis upon another to develop an elaborate quantum theory of mind.73 As we saw above, Penrose argued that the mind is capable of non-algorithmic thought. Since von Neumann's process 2 is deterministic and algorithmic, Penrose believes that non-algorithmic thought must be based on process 3, the collapse of the wave function. As this process appears random, while non-algorithmic thought is directed and non-random, this conjecture is
problematic. To resolve this, Penrose proposed a new type of wave
function collapse, objective reduction (OR). In OR the collapse is
not due to measurement, but to a non-algorithmic, non-random quantum gravity event.
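For concreteness, the quantitative core of Penrose's proposal (sketched here in its simplest form) is that a superposition of two distinct mass distributions decays spontaneously on a timescale set by gravity,

$$\tau \approx \frac{\hbar}{E_{G}},$$

where $E_{G}$ is the gravitational self-energy of the difference between the two mass distributions. The more mass is displaced between the superposed alternatives, the sooner the state is supposed to reduce, with no observer or measuring device required.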
The quantum gravity mechanism triggering OR was invented by
Penrose for the purpose, and is otherwise unknown to physics. Further, any test of the OR hypothesis would involve a further collapse of the wave function in the very act of measurement. So it is not clear that OR is falsifiable; I am unaware of any falsifiable predictions based on it. Indeed, the whole field of quantum gravity remains a puzzle.

72 Eddington (1928), p. 295.
73 An outline of their argument may be found in Grush and Churchland (1995).
Putting these difficulties aside, there is a long way between OR as a physical theory and non-algorithmic thought. Penrose hoped to make the connection by saying that OR is due to "non-computable influences" embedded in the structure of space-time at the Planck scale (about 10⁻³⁵ m). These "influences" originate in the realm of Platonic Ideas.
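For orientation, the Planck length invoked here is the standard combination of fundamental constants,

$$\ell_{P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \text{m},$$

roughly twenty orders of magnitude smaller than an atomic nucleus and far below any scale probed experimentally.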
This appeal to Ideas places Penrose on the wrong side of Aristotle's devastating
critique of Plato’s theory of Ideas in the Metaphysics. Since Penrose does not suggest that Ideas reside in a mind, he can’t call on a
Neoplatonic transcendental mind to evade Aristotle’s critique. Nor
does Penrose see that abstraction from experience can explain
mathematical universality without Platonic Ideas.
The next leap is to move from Platonic content encoded in an
OR to content encoded in a coherent brain state. To make this leap,
Hameroff provided Penrose with the Bose condensate hypothesis discussed above. The result was the Orchestrated Objective Reduction theory of Penrose's Shadows of the Mind (1994). In it, instants of consciousness occur when an OR gives the brain non-algorithmic access to Platonic Ideas.
This theory has not been well received scientifically. There is
no physical evidence to suggest or support OR, either alone or in
the context of mathematical creativity. We saw that Tegmark's calculation of the time scales for the survival of Bose condensates in the brain makes Hameroff's Bose condensate hypothesis dubious.
Philosophically, the theory fares no better.74 First, it does not explain consciousness,75 or even thinking in general. It is designed
solely to explain how a physical structure, such as the brain, could
generate non-algorithmic mathematics. To that end, it constructs a
physico-mystical space-time continuum in which intentional objects (Platonic Ideas) reside. How intentional objects can reside in
or affect this space-time continuum is not explained. Neither are
the critical notions of intentionality and subjectivity addressed.
Suppose all Penrose’s and Hameroff’s mechanisms work as advertised. How does a guided, non-random, non-algorithmic OR
bear upon the specific problem the mathematician is pondering? It can't explain her creative process unless the encoded Idea is relevant to the problem she is considering. The OR must be intelligently adapted to the mathematician's needs. This implies intelligent means-end activity. Only two possibilities work: the mathematician herself, or a supernatural intelligence. If the guidance for the collapse comes from the mathematician, we've come full circle: explaining how mathematicians think by mathematicians thinking. If it comes from a supernatural intelligence, why do we need Hameroff and Penrose's Platonic world and hypothetical mechanisms as opposed to a direct inspiration or revelation?

74 Grush and Churchland (1995) found the argument to be a tissue of improbabilities piled on improbabilities. See also Penrose and Hameroff (1995), "What gaps? Reply to Grush and Churchland."
75 Chalmers (2002), pp. 95f.
Finally, it is unclear why consciousness of non-algorithmic
mathematical content should involve OR, while consciousness of
qualia or of algorithmic thought does not. We need a unified explanation of subjectivity because one subject is aware of all types
of content.
A better explanation of non-algorithmic thought is Stapp's proposal that process 1, the choice of what to look at, is critical. While
the choice of a particular observation in an experiment can be done
automatically, the overall choice of topics to investigate and factors to consider is intrinsically intentional. Aristotle defined the
habit of science as facility at finding middle terms, i.e., connection-making. The theory of the syllogism articulates this insight into
insight. When mathematicians struggle to solve a problem, they are
not looking to create something out of whole cloth, but to find the
connections that will get them from the premises to the conclusion.
Success or failure depends, as Aristotle saw, on finding the connections, the syllogistic middle terms, that allow us to build a logical argument. It is the choice of what to analyze that elevates a mathematician, scientist, or philosopher from simply competent to brilliant. That choice is not algorithmic because we don't know in advance where each alternative leads, so it can't be assigned a determinate value.
Summary
Aside from Stapp’s observation that quantum physics presupposes the observers’ choice of experiments rather than predicting
it, most of what we’ve learned from quantum theory has been more
useful in clearing away underbrush than in building a new model
of the mind. While quantum theory plays a role in the details of
brain dynamics, it seems that an adequate theory of brain function
can begin at the neural level, ignoring quantum details. As noted,
Koch and Hepp (2006) agree that neural dynamics can be modeled
without quantum physics.
The promise of quantum physics has not been fulfilled. The random outcomes of indeterminate quantum processes do not model the intended outcomes of willed choices. Indeterminacy gives epistemological room for free will, but does not explain it. While quantum uncertainty makes determinism unfalsifiable, and so unscientific, there is no reason to think quantum randomness is due to anything beyond human ignorance.
The idea that the obeserver’s mind control controls the collapse
of the wave function contradicts both accepted physical principles
and sound philosophical analysis. It explains neither our knowledge of the material world nor our ablity to impliment willed decisions in physical acts.
While some of the quantum mechanisms proposed to explain
the details of ion transport in neural firing may be confirmed, none
of them excludes other mechanisms, or seems a unique key to
brain function. Other than Stapp’s approach, none connects physical processes to the intentional processes of awareness and will.
Stapp has done this, but his approach neglects the facts that (1) unknown detector states can explain quantum randomness, and (2) known nonlinearities cause the superposition principle to break down long before we reach the observer's brain.
It seems that we have spent many pages getting nowhere, but
that is not true. The mysteries of quantum theory have provided
cover into which naturalists could retreat when hard pressed. Now
that quantum physics has been demythologized, that cover no
longer exists.