Speculations on the Union of Science and Religion

“Ontology as Poem” was my first attempt to articulate a basis on which the essential union of science
and religion might be recognized. (See A Compilation of the Bahá’í Writings on the Unity of
Science and Religion.) The poem has since evolved (an older version is in the “files” section for this
Meetup Group) into an integrated theory of being and becoming, based on my also-evolving
understanding of the Bahá'í, Bábí, Islamic, and other writings, and on the sciences against which
these understandings are being tested.
The second trajectory of my research has been solely physics-based. This focus was catalyzed
by a graduate-level class on "Quantum Paradoxes" that I was permitted to attend at George
Mason University. The class was taught by Yakir Aharonov (who is best known for the
Aharonov-Bohm effect) and Jeffrey Tollaksen (who at the time was the Director of the Center
for Quantum Studies at George Mason University). Although this second, physics-based
trajectory is seemingly convergent with my earlier theologically based work, I have only
recently begun to integrate the science I learned in the class and thereafter into the
Poem’s annotations.
One fruit of this second trajectory was my admission into George Mason University's non-degree
graduate program in physics. With a transcript devoid of qualifying classes, I submitted
a variant of the following -- and got in. My first day of class was last Thursday.
Please note that this post will focus almost exclusively on the physics. It is my hope that I may
provide my reader with an introduction to time symmetric quantum mechanics (TSQM) and
some of the seemingly intractable puzzles and paradoxes that I believe TSQM may help resolve.
Specifically, following some introductory materials (which include an explanation of the EPR
Paradox), I ask the question: “What happens if adiabatic processes are applied in the context of an
EPR experiment?” In response to this question, I proffer some predictions. If these predictions
have been or can be substantiated, the resulting questions and speculative answers I propose seem to
provide an explanatory framework to understand wavefunction collapse, the relativity of
simultaneity, the anthropic principle, and a possible means to remove the Bekenstein bound
limitation on the universe's information capacity as a constraint on emergence.
To begin, the following Google Scholar search provides in-depth resources for the topics that I
will try to touch on in this post: Papers of Aharonov & Tollaksen in Google Scholar. Also, as
weak measurements are involved in the proofs that TSQM is “real”, the following papers are
specifically recommended: New Insights on Time-Symmetry in Quantum Mechanics; Non-statistical
Weak Measurements; Weak measurements, weak values, and entanglement; Pre- and post-selection,
weak values and contextuality; and Robust Weak Measurements on Finite Samples.
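Because weak measurements and weak values recur throughout this post, a minimal numerical sketch of the weak-value formula may be useful: A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩, where |ψ⟩ is the pre-selected state and |φ⟩ the post-selected state. The particular states below are my own illustrative choices (not taken from the papers above); they show how a weak value can lie far outside the eigenvalue range of the observable.

```python
import numpy as np

def weak_value(pre, post, A):
    """Weak value A_w = <post|A|pre> / <post|pre>."""
    return (post.conj() @ A @ pre) / (post.conj() @ pre)

# Observable: Pauli-x for a spin-1/2 particle (eigenvalues +1 and -1)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

# Pre-select spin-up along z; post-select a state nearly orthogonal to it
theta = 0.1
pre = np.array([1, 0], dtype=complex)
post = np.array([np.cos(np.pi / 2 + theta / 2),
                 np.sin(np.pi / 2 + theta / 2)], dtype=complex)

w = weak_value(pre, post, sigma_x)
# As the post-selection approaches orthogonality (theta -> 0), |w| grows
# without bound, far beyond the eigenvalue range [-1, +1] of sigma_x.
```

This near-orthogonal post-selection is the mechanism behind the anomalous ("impossible") values that appear in the Hardy and three-box discussions that follow.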
Although I previously presented some evidence that TSQM might be real, I didn't have anything
that was published. Although proofs are proffered in the above cited papers, I believe a proof is
more convincing in the context of the impossible.
HARDY’S PARADOX
Aharonov and Rohrlich, in their textbook "Quantum Paradoxes: Quantum Theory for the
Perplexed", address numerous paradoxes of quantum mechanics. Although not fully explained in
the book or the class, the weak measurement experiments that were performed in connection with
Hardy’s Paradox provide one proof (there are numerous others) that time symmetry is an
actual phenomenon of nature.
As described by Aharonov and Rohrlich in their book, “Hardy [1] invented a paradox with two
interferometers, one for electrons and the other for positrons. The interferometers overlap in a
corner where a path in one interferometer crosses a path in the other. (See Fig. 17.1.) The
momenta of the particles and the dimensions of the interferometers are such that electrons
passing through the electron interferometer always arrive at detector C- if no positron enters their
interferometer, and positrons passing through the positron interferometer always arrive at
detector C+ if no electron enters their interferometer. (The C in C- and C+ denotes constructive
interference.) But if an electron and a positron pass through the interferometers simultaneously,
they may annihilate each other in the overlapping corner. Let us assume that an electron and a
positron passing through simultaneously always annihilate each other whenever their paths cross.
If the particles annihilate each other, no detector clicks. What if they do not annihilate each
other?
“If the paths of the particles do not cross, there are three possibilities. The positron may take its
overlapping path while the electron takes its nonoverlapping path. … Or the electron may take
its overlapping path while the positron takes its nonoverlapping path… Finally, both particles
may take nonoverlapping paths. … All three states are equally probable.”
In the context of the experiment, additional detectors were arranged to determine whether the
electron and/or positron took overlapping paths. As reported by Aharonov and Rohrlich:
“If the detector D+ clicks, it tells us that the electron crossed into the positron interferometer; for,
by assumption, positrons always arrive at detector C+ if no electron enters their interferometer.
Thus we conclude that the electron took its overlapping path (and the positron took its
nonoverlapping path).
“Similarly, if the detector D- clicks, it tells us that the positron crossed into the electron
interferometer; for, by assumption, electrons always arrive at detector C- if no positron enters
their interferometer. Thus we conclude that the positron took its overlapping path (and the
electron took its nonoverlapping path). …
“But what if detectors D- and D+ click together? We may conclude from the clicking of detector
D+ that the electron took its overlapping path, and we may conclude from the clicking of
detector D- that the positron took its overlapping path. But if both particles took their
overlapping paths, they must have annihilated each other! The simultaneous clicking of D- and
D+ leads to the paradoxical conclusion that neither D- nor D+ could have clicked.”
In their paper titled “New Insights on Time-Symmetry in Quantum Mechanics”, Yakir Aharonov
and Jeff Tollaksen summarize the essence of the paradox with three counterfactual statements:
• The electron is always in the overlapping arm.
• The positron is always in the overlapping arm.
• The electron and the positron are never both in the overlapping arms.
In a paper by Yakir Aharonov and Jeff Tollaksen titled Robust Weak Measurements and a paper
by Yakir Aharonov, Alonso Botero, Sandu Popescu, Benni Reznik and Jeff Tollaksen titled
“Revisiting Hardy's paradox: counterfactual statements, real measurements, entanglement and
weak values” (see: http://arxiv.org/pdf/quant-ph/0104062), the original Gedanken experiment
was translated into actual experimental results.
The “Revisiting Hardy's paradox” paper concludes: “As they are experimental results, they are
here to stay - they cannot be dismissed as mere illegitimate statements about measurements
which have not been performed, as it is the case with the original counter-factual statements.
Whatever one’s ultimate view about quantum mechanics, one has to understand and explain the
significance of these outcomes.” The explanation of this paradox that I am working on is
founded on indeterminacy, including time indeterminacy, but I need to do more research on
this before I write up my conjectures. Nonetheless, for more information, see “Direct
observation of Hardy’s paradox by joint weak measurement with an entangled photon pair” by
Kazuhiro Yokota, et al. (http://arxiv.org/PS_cache/arxiv/pdf/0811/0811.1625v1.pdf) and Chapter
3 of Jeffrey Stephen Lundeen’s thesis, “Generalized Measurement and Post-selection in
Optical Quantum Information” (http://hep.uchicago.edu/~pilcher/OCGS/Lundeen%20%20Quantum%20Optics%20.pdf),
which provides a variant of the Hardy experiment using photons.
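The weak values at the heart of Hardy's paradox can be reproduced with a few lines of linear algebra. The sketch below is my own illustration (the beam-splitter sign convention for the dark-port projection is an assumption, though the weak-value ratios are insensitive to its overall sign): pre-select the two-particle state that survives the annihilation channel, post-select on both dark detectors D+ and D- clicking, and compute the occupation weak values.

```python
import numpy as np

# Single-particle basis: O = overlapping arm, NO = non-overlapping arm.
# Two-particle states are electron (x) positron, built with np.kron.
O, NO = np.array([1.0, 0.0]), np.array([0.0, 1.0])
I2 = np.eye(2)

# Pre-selected state: the |O>|O> branch is removed by annihilation, leaving
# (|O,NO> + |NO,O> + |NO,NO>) / sqrt(3)
pre = (np.kron(O, NO) + np.kron(NO, O) + np.kron(NO, NO)) / np.sqrt(3)

# Post-selection: both dark detectors click. Each dark port projects onto
# (|O> - |NO>)/sqrt(2)  (assumed phase convention).
D = (O - NO) / np.sqrt(2)
post = np.kron(D, D)

def wv(op):
    """Weak value <post|op|pre> / <post|pre> for this pre/post-selection."""
    return (post @ op @ pre) / (post @ pre)

P_O, P_NO = np.outer(O, O), np.outer(NO, NO)

N_e = wv(np.kron(P_O, I2))      # electron in its overlapping arm
N_p = wv(np.kron(I2, P_O))      # positron in its overlapping arm
N_OO = wv(np.kron(P_O, P_O))    # both in the overlapping arms
N_NN = wv(np.kron(P_NO, P_NO))  # both in the non-overlapping arms
```

Here N_e = N_p = 1 and N_OO = 0 reproduce the three counterfactual statements simultaneously, at the price of the negative pair occupation N_NN = -1 that the weak-measurement experiments cited above observed.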
THE QUANTUM BOX EXPERIMENT
The following Quantum Box experiment provides a second proof (there are many others) that
TSQM is “real”. Before I go on to describe the experiment, you may wish to review an early
description of the Quantum Box experiment. (See: http://arxiv.org/abs/quant-ph/0310091v1)
Now, please visualize a three-box experiment arranged in a three-by-three matrix, labeled from
left to right: Box A, B, and C; and from bottom to top: times t, t+1 and t+2. A particle
entering the system at the bottom (e.g. at time t) is understood to have a one-third probability of
being in Box A, B, or C at all levels, t, t+1 and t+2. I understand that these probabilities were
confirmed through ideal (or von Neumann) measurements taken at each level. (We will defer the
question of what causes the wave function to “collapse” into one box and not another to my
discussion of the Anthropic Principle.) In any event, these confirmation measurements were not
part of the experiment that I am about to describe.
In the experiment that was reported in the lecture I attended, a very large ensemble of particles
was introduced into the system and, although ideal measurements were taken at time t+2 for
Boxes A, B, and C, only the experimental data for those particles found in Box A (the post-selection) were retained for further consideration. The theory behind the experiment is, to my
understanding, that the ideal measurement of the sub-ensemble of particles found in Box A at t+2
constitutes a boundary condition, which through the propagation of a time-reversed wave,
constrains the potential locations and states of the particle to that subset of positions and states
that remain possible given both the t (starting) and t+2 (ending) boundaries. Mathematically, the
theory generates for the selected sub-ensemble a probability of “1” that the particle at time t+1
will be found in Box A, and also a probability of “1” that the particle at time t+1
will be found in Box B. This means that if an ideal measurement had been conducted at time t+1,
and Box A or Box B were, metaphorically speaking, opened, the particle would always be found
inside the box that was selected. While this verification cannot actually be performed using
ideal measurements, the prediction can be experimentally confirmed using weak measurements
where the selected sub-ensemble includes a large number of particles.
The resulting interference pattern that was presented at the class I attended arose from these
weak measurements and was presented as proof that TSQM is not just a mathematical model
(with explanatory value) but also reflects an underlying reality (that I hope to further explore in
this post).
Noting that the probabilities of finding the particle in Box A and in Box B at t+1 were both “1”,
you may be wondering about Box C. Here, the mathematics predicts something that seemed
astounding. Where the subject particles are electrons, TSQM predicts a particle with all of the
attributes of a positron – but with a fundamental difference: the particle predicted for Box C
must have negative mass. (Although not covered in the Quantum Paradoxes class, I have been
listening to the Feynman lecture series on physics and feel that this finding would be necessary
under a reasonable extension of the law of lepton number conservation that Feynman describes in
one of his lectures.) In any event, not only was this outcome mathematically demonstrated to the
class, but experimental verifications were also alleged to have been obtained. (Unfortunately, a
confirming handout was not provided.)
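The Box A, Box B, and negative Box C numbers can be checked directly from the weak-value formula. One caveat: in the published Aharonov-Vaidman version of the three-box paradox, the post-selection is a particular superposition carrying a relative minus sign on Box C, rather than literally "found in Box A"; the sketch below (my own check, using that published choice) reproduces occupation 1 for Box A, 1 for Box B, and -1 for Box C.

```python
import numpy as np

# Orthonormal "box" states |A>, |B>, |C>
A, B, C = np.eye(3)

# Pre-selection at time t and post-selection at time t+2
# (the relative minus sign on |C> in the post-selection drives the paradox)
pre = (A + B + C) / np.sqrt(3)
post = (A + B - C) / np.sqrt(3)

def box_weak_value(box):
    P = np.outer(box, box)  # projector answering "is the particle in this box?"
    return (post @ P @ pre) / (post @ pre)

wv_A, wv_B, wv_C = (box_weak_value(b) for b in (A, B, C))
# The three occupations sum to 1, as the three projectors must.
```

The probability-1 statements for ideal measurements at t+1 follow from the ABL rule for the same pre- and post-selected states; the weak values above are what the weak-measurement experiments access directly.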
As noted above, I believe TSQM has explanatory value relative to a number of paradoxes in
quantum mechanics. Let’s begin.
THE EPR PARADOX
For those readers who are not familiar with the EPR paradox, it was "a thought experiment"
devised by Einstein, Podolsky and Rosen "which challenged long-held ideas about the relation
between the observed values of physical quantities and the values that can be accounted for by a
physical theory." (See EPR paradox and Incompleteness of quantum physics)
Although the original EPR thought experiment involved position and momentum measurements,
David Bohm reformulated the EPR paradox into a more practical experiment utilizing spin or
polarization measurements. Bohm's variant of the EPR paradox is described at: Description of
the paradox
Visualize if you will two particles that are quantum entangled moving apart in opposite
directions. (See also: Brief explanation of entanglement in terms of photons) At some distance
from their common origin, Alice measures the spin of one of the particles and finds that the spin
is in the up direction. If Bob were then to measure the spin of the second particle, he will find
that its spin is in the down direction. As often as Alice and Bob wish to repeat this experiment,
Bob will find that the spin of his particle is always opposite to that found by Alice.
How can this be? Stranger still, it does not matter how far apart Alice and Bob may be from
each other or how brief the time-lag between Alice's experiment and Bob's – the results of
Alice's experiment always appear to affect Bob's particle instantaneously. Again, how can this
be?
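The perfect anticorrelation that Alice and Bob observe follows directly from the entangled singlet state. A short numerical check (my own illustration, for spin-1/2 particles measured along the same axis):

```python
import numpy as np

# Singlet state in the basis |uu>, |ud>, |du>, |dd>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

up = np.array([1, 0], dtype=complex)
P_up = np.outer(up, up.conj())   # projector: spin-up along z
I2 = np.eye(2)

# Alice (particle 1) finds "up" half the time...
p_alice_up = np.real(singlet.conj() @ np.kron(P_up, I2) @ singlet)

# ...and conditioned on that outcome, Bob (particle 2) never finds "up":
after_alice = np.kron(P_up, I2) @ singlet
after_alice /= np.linalg.norm(after_alice)
p_bob_up = np.real(after_alice.conj() @ np.kron(I2, P_up) @ after_alice)
# p_alice_up = 0.5, p_bob_up = 0.0 -- Bob's spin is always opposite to Alice's
```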
As noted in Wikipedia: "The EPR paradox is a paradox in the following sense: if one takes
quantum mechanics and adds some seemingly reasonable conditions (referred to as locality,
realism, counterfactual definiteness, and completeness), then one obtains a contradiction." …
"Either
(1) The result of a measurement performed on one part A (by Alice) of a quantum system has a
non-local effect on the physical reality of another distant part B, in the sense that quantum
mechanics can predict outcomes of some measurements carried out at B (by Bob); or...
(2) Quantum mechanics is incomplete in the sense that some element of physical reality
corresponding to B cannot be accounted for by quantum mechanics (that is, some extra variable
is needed to account for it.)"
"The principle of locality states that physical processes occurring at one place should have no
immediate effect on the elements of reality at another location. At first sight, this appears to be a
reasonable assumption to make, as it seems to be a consequence of special relativity, which
states that information can never be transmitted faster than the speed of light without violating
causality. It is generally believed that any theory which violates causality would also be
internally inconsistent, and thus deeply unsatisfactory." …
"In 1964, John Bell showed that the predictions of quantum mechanics in the EPR thought
experiment are actually slightly different from the predictions of a very broad class of hidden
variable theories. Roughly speaking, quantum mechanics predicts much stronger statistical
correlations between the measurement results performed on different axes than the hidden
variable theories. These differences, expressed using inequality relations known as "Bell's
inequalities", are in principle experimentally detectable." (See also Bell's theorem) In essence,
Bell's inequality follows from the assumption that local results exist, whether or not anyone
measures them.
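Bell's "much stronger statistical correlations" can be made concrete. For the singlet state, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) for measurement directions at angles a and b, and the CHSH combination of four such correlations reaches 2*sqrt(2) ≈ 2.83, beyond the bound of 2 that any local hidden variable theory must obey. A sketch (my own illustration, using the standard optimal angles):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]])
sx = np.array([[0, 1], [1, 0]])
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|ud> - |du>)/sqrt(2)

def spin(theta):
    """Spin observable along angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    """Singlet correlation <sigma_a (x) sigma_b>; equals -cos(a - b)."""
    return singlet @ np.kron(spin(a), spin(b)) @ singlet

# CHSH combination with the standard angle choices
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
# |S| = 2*sqrt(2), above the local-hidden-variable (CHSH) bound of 2
```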
"The EPR paradox arises generically for any entangled state - any state of macroscopically
separated systems that is not a product of states of each system. Any entangled state yields
quantum correlations that violate a generalization of Bell's inequality. The EPR claim assumes
that Bob and Alice would measure independent physical variables. Einstein, Podolsky and Rosen
never anticipated that this reasonable assumption would prove inconsistent with experiment and
that we cannot in this context isolate systems in an entangled state from each other."
Experiments have now confirmed that "measurements performed on spatially separated parts of a
quantum system have an instantaneous influence on one another. This effect is now known as
"nonlocal behavior" (or colloquially as "quantum weirdness" or "spooky action at a distance")."
In a paper titled “Space-like Separation in a Bell Test assuming Gravitationally Induced
Collapses” (see http://arxiv.org/PS_cache/arxiv/pdf/0803/0803.2425v1.pdf), D. Salart et al.
describe a Franson-type test of the Bell inequalities in which “pairs of entangled
photons traveling through optical fibers are sent to two receiving stations physically separated by
18 km with the source at the center”. According to the paper’s authors, 18 km established a new
distance record for this type of experiment. The paper concludes that “under the assumption that
a quantum measurement is finished only once a gravity-induced state reduction has occurred,
none of the many former Bell experiments involve space-like separation, that is space-like
separation from the time the particle (here photons) enter their measuring apparatuses (here
interferometers) until the time the measurement is finished. In this sense, our experiment is the
first one with true space-like separation. The results confirm the nonlocal nature of quantum
correlations.”
"Most physicists today believe that quantum mechanics is correct, and that the EPR paradox is
only a "paradox" because classical intuitions do not correspond to physical reality. How EPR is
interpreted regarding locality depends on the interpretation of quantum mechanics one uses. …
(For those readers who may not already be familiar with this material, I would recommend
reviewing the Wikipedia article on the interpretations of quantum mechanics before you continue.)
You will note that none of these interpretations provides an intuitively satisfactory explanation
of how the results of Alice's experiment instantaneously determine the state of Bob's particle.
Ironically, according to Yakir Aharonov and Daniel Rohrlich in their book: Quantum Paradoxes:
Quantum Theory for the Perplexed; "the claim that quantum theory is incomplete may well be
correct, though not in the EPR sense. Quantum theory does not explain how we go from
probability to observation, from possibility to actuality, as a complete theory would.”
According to Aharonov and Rohrlich "unitary evolution cannot turn possible results into actual
results. Aware of this paradox, von Neumann postulated collapse. But von Neumann's collapse is
at best an effective model; it does not resolve the paradox. Attempts to resolve the paradox fall
into three classes, corresponding to three statements:
i) Quantum mechanics is incomplete and there is collapse.
ii) Quantum mechanics is incomplete and there is no collapse.
iii) Quantum mechanics is complete."
von Neumann's collapse theory may be seen as consistent with statement i). However, according
to Aharonov and Rohrlich, "so far there is no evidence for collapse. To falsify collapse, on the
other hand, we must verify that no superposition ever collapses. For example, we must show that
Schrödinger's cat remains in an entangled state - and in practice, we have no hope of showing
that the state remains entangled."
Bohm's and other hidden variable theories may be seen to be consistent with statement ii).
In one sense, time symmetric quantum mechanics (TSQM) may be seen as a hidden variable
theory in which the hidden variable is non-local in time; but in another sense (which I prefer),
time symmetry is already integral to QM, and with TSQM, QM is complete.
Again please visualize two particles that are quantum entangled moving apart in opposite
directions. At some distance from their common origin, Alice measures the spin of one of the
particles and finds that the spin is in the up direction. In traveling from the point of origin to
Alice, we may understand the particle's wave function to have, in a probabilistic sense, taken all
possible paths and to possess all possible states consistent with the initial boundary condition of
the system at the origin. With TSQM we must now visualize a time-reversed wave function
which proceeds backwards in time from the occurrence of Alice's experiment to the time and
point of origin for Alice's particle. This backward in time wave function would also, in a
probabilistic sense, take all possible paths and possess all possible states consistent with three
constraints: (i) the time evolution of the wave function is backward in time; (ii) the time-reversed wave function is bounded by the initial state of the system at the origin; and (iii) the
time-reversed wave function is also bounded by the spin information arising from Alice's
experiment. It should be noted at this point that due to conservation of angular momentum the direction
of spin manifest in Alice's time-reversed wave function will be opposite to the spin direction that
Alice measured; and identical to the spin Bob will find when his measurement occurs. In any
event, Alice's time-reversed wave function may be understood to carry the spin information
arising from Alice's experiment to the time and location of origin for the entangled particles.
Here, the information contained in Alice's time-reversed wave function may be understood to
"bounce" forward in time in a state that is entangled with Bob's particle. Please note that weak
measurements of Bob's and Alice's particles immediately prior to the occurrence of their
respective ideal measurements will show that each particle has remained entangled with the
other.
My conclusion from the foregoing is that TSQM reintroduces a classic-like causality and locality
to quantum mechanics that I believe have very broad implications. This interpretation based on
time-reversal is far from original; as early as 1983, Costa de Beauregard gave a formulation of
the EPR setting that allowed a time-reversed EPR.
J. W. Moffat in his paper “Quantum Measurements, Nonlocality and the Arrow of Time” (See:
http://arxiv.org/pdf/gr-qc/9710019) proposes an absorber wave function reduction process to
resolve the EPR paradox that is based on the retarded (forward-in-time) and advanced
(backward-in-time) waves that John Cramer proposed in his transactional interpretation of QM.
The TSQM approach, which I favor, is presented in a paper by Yakir Aharonov and Jeff
Tollaksen titled New Insights on Time-Symmetry in Quantum Mechanics (see
http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.1232v1.pdf).
Additionally, Dr. Henry Stapp, in a private communication that I catalyzed, has stated:
“If one considers an EPR-Bohm-Bell correlation experiment, then during some interval in
Process Time the initial (singlet) state of the two particles will be created.
Over an interval in Process Time this singlet state will grow out in an expanding V-shaped
region of spacetime, toward the two far-apart detection regions. At some Process Time a
detection will occur. At that moment in Process Time the state of the universe in the space-time
past of the associated space-like surface will suddenly change, relative to what it was at the
earlier moments in Process Time. In the V-shaped region of spacetime the state will suddenly
jump from a singlet state of the two diverging particles to a state in which, for example, one
particle is polarized in one specific direction, specified by the orientation of the device in one of
the two regions, and the particle traveling along the other wing of the V is polarized in the
opposite direction. The correlation between the parts in the two wings will be fixed instantly (in
Process Time) over the entire V-shaped region in spacetime. The effective transfer of
information about the choice of polarization direction, which choice was seemingly made by the
agent/observer in one region, is made via the V-shaped region that extends backward in time: the
[apparent] faster-than-light transfer of information is made by an effective transfer first backward
in time to the region where the two particle interacted (or originated), and then forward along the
other wing of the V.”
WAVEFUNCTION COLLAPSE
Let’s consider whether the TSQM model might have explanatory value relative to the question of
wavefunction collapse. In order to illustrate where I am trying to go with this, first consider the
wave associated with a single photon radiating from time (t=0) outward in all directions. If an
ideal measurement were then made at time (t=n), there is a non-zero probability that the particle
may be found at any arbitrarily selected point within the universe of points where the photon may
be found. At such time as the photon’s location has been determined, the von Neumann collapse
postulate then hypothesizes that the state of the wave function changes instantaneously along the
(in this case) t=n hyperplane from a momentum eigenstate to a position eigenstate. If depicted
with time represented along one axis, we have a cone with the cone’s radius representing the
light moving outward with the passage of time from its point of origin. The point of origination
may be seen as a boundary condition for the outwardly emanating wave. Applying the TSQM
model we have discussed so far, we may understand the measurement at t=n to constitute a
new boundary condition and the point of origination of a backward in time wave that may be
depicted as a new cone that proceeds backward in time to the initial point of origination such that
all observables within these two cones are constrained by these boundary conditions. The area
within both of these two cones may be viewed as the sum of all futures arising from the
actualization event at t=0 and also the sum of all possible histories leading to the t=n
actualization event. All possible contingencies consistent with these two boundary conditions
could then be understood to remain ontologically existent as potential. As we saw in the three
box experiment, these contingent observables may even be verified through the weak
measurements that Drs. Aharonov and Tollaksen have employed. As to all other contingent
observables (those not both within the sum of all futures and the sum of all histories that the
actualization events at t=0 and t=n defined), the probability of their actualization becomes zero.
To me, the TSQM model has rendered the mystery of wavefunction collapse (or the apparent
wavefunction collapse arising from quantum decoherence) intelligible, but not yet complete.
Before congratulations are in order, it is important to remember that the standard TSQM model is
non-relativistic, which means not only will the observed collapse along the t=n hyperplane not be
instantaneous for observers in other frames but that observers in different frames will also
disagree about temporal order, length, energy and all other physical quantities that are covariant
but not invariant. It is also important to recall that in discussing the EPR paradox, I observed
that weak measurements of Bob's and Alice's particles immediately prior to the occurrence of
their respective ideal measurements (e.g. at time t=n-1) demonstrate that each particle has
remained entangled. (Although not to my knowledge published, it was my understanding from
the class I attended that this entanglement at t=n-1 had been experimentally verified.) With this
finding assumed, it would seem reasonable to also assume that weak measurements along the t=n-1
hyperplane for the above-described photon would show a momentum eigenstate with a non-zero
probability that the photon would be found at any arbitrary location within the cone of the
photon’s potential locations. If so, we must ask what, if anything in the above explanation
distinguishes time=n from time=n-1 that might “cause” the wave function to instantaneously
change from a momentum eigenstate to a position eigenstate along the t=n hyperplane. I will
return with a speculative solution to this problem later in this post.
WAVEFUNCTION “COLLAPSE” AND “DECOHERENCE”
& THE SECOND LAW OF THERMODYNAMICS
As you may have deduced from the foregoing, I am not convinced that the conventional concepts
of wave function “collapse” or “decoherence” are valid. An early (actually pre-TSQM) reason
for my doubt was based on the paradox of how information that is believed to have decohered or
collapsed within conventional QM theories is regained in the context of delayed choice quantum
erasure experiments when an information variant of the second law of thermodynamics is also
taken into consideration.
As described in Wikipedia:
“The Quantum eraser experiment is a double-slit experiment in which particle entanglement and
selective polarization are used to determine which slit a particle goes through by measuring the
particle's entangled partner. This entangled partner never enters the double slit experiment.
Earlier experiments with the basic Young double-slit experiment had already determined that
anything done to discover by which path (through slit A or through slit B) a photon traveled to
the detection screen would destroy the interference effect that is responsible for the production of
interference fringes on the detection screen. …
“The advantage of manipulating the entangled partners of the photons in the double-slit part of
the experimental apparatus is that experimenters can destroy or restore the interference pattern in
the latter without changing anything in that part of the apparatus. Experimenters do so by
manipulating the entangled photon; and they can do so before or after its partner has entered or
after it has exited the double-slits and other elements of experimental apparatus between the
photon emitter and the detection screen. So, under conditions where the double-slit part of the
experiment has been set up to prevent the appearance of interference phenomena (because there
is definitive "which path" information present), the quantum eraser can be used to effectively
erase that information. In doing so, the experimenter restores interference without altering the
double-slit part of the experimental apparatus. An event that is remote in space and in time can
restore the readily visible interference pattern that manifests itself through the constructive and
destructive wave interference. …
A variation of this experiment, the delayed choice quantum eraser experiment, “allows the
decision whether to measure or destroy the ‘which path’ information to be delayed until after the
entangled particle partner (the one going through the slits) has either interfered with itself or not.
Doing so appears to have the bizarre effect of determining the outcome of an event after it has
already occurred.” See Wikipedia’s articles on Quantum eraser experiments, delayed choice
quantum eraser experiments, Wheeler's delayed choice experiment, the Afshar experiment, and
Retrocausality. See also “Random Delayed-Choice Quantum Eraser via Two-Photon Imaging”
by Giuliano Scarcelli, Yu Zhou, and Yanhua Shih (http://arxiv.org/abs/quant-ph/0512207v1).
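The distinction between present and absent which-path information can be sketched numerically:
when the two slit amplitudes are allowed to add coherently, fringes appear; when which-path
information exists (and has not been erased), the probabilities add instead and the fringes vanish.
The following is an illustrative toy model, not a simulation of any particular eraser apparatus;
all parameters (wavelength, slit separation, geometry) are arbitrary choices of mine.

```python
import numpy as np

# Toy model of the double-slit logic described above.
wavelength = 1.0
k = 2 * np.pi / wavelength
slit_sep = 10.0
screen_dist = 1000.0
x = np.linspace(-100, 100, 2001)          # positions on the detection screen

# Path lengths from each slit to each screen position
r1 = np.hypot(screen_dist, x - slit_sep / 2)
r2 = np.hypot(screen_dist, x + slit_sep / 2)
amp1 = np.exp(1j * k * r1) / r1
amp2 = np.exp(1j * k * r2) / r2

# No which-path information: amplitudes add, producing fringes
p_coherent = np.abs(amp1 + amp2) ** 2

# Which-path information present (or not erased): probabilities add
p_marked = np.abs(amp1) ** 2 + np.abs(amp2) ** 2

def visibility(p):
    """Fringe visibility (max - min)/(max + min) distinguishes the two cases."""
    return (p.max() - p.min()) / (p.max() + p.min())

print(visibility(p_coherent))  # close to 1: interference fringes
print(visibility(p_marked))    # close to 0: fringes washed out
```

The point of the sketch is only that "erasure" amounts to restoring the coherent sum of the two
amplitudes; it says nothing about when, in time, that restoration is decided.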
My purpose in raising the delayed choice quantum erasure experiments is to again draw your
attention to time reversibility as a necessary element for any explanation of these experiments.
Assuming this point of agreement has been reached, I wish to examine whether the common
quantum explanations for these experimental results (e.g. collapse of the wave function and
decoherence) are viable. It is my "belief" that they may not be. First, if we assume that wave
functions actually collapse, it is my understanding that this event is not time reversible, such
that no interference pattern could be recovered for the signal photons once the collapse occurred.
Please reflect on what is happening to the information in the time reversible path (i) when the
photon(s) pass through the double slit; (ii) when the down converter creates an entangled
“signal” and “idler” photon; (iii) when the idler photon passively retains or is actively imparted
"which path" information; (iv) when the signal photon reaches the detector; (v) when the active
or passive erasure of the idler photon's "which path" information occurs (which theoretically
could occur years after the signal photon reached the detector); (vi) when the measurement of the
idler photon occurs (again, theoretically years after the interference pattern was or was not
recorded for the signal photon); and (vii) when the existence or non-existence of the
interference pattern becomes known to an observer.
To the extent that any of these events results in interactions between the quantum system and its
environment, it is my understanding that physicists currently interpret these interactions in the
context of decoherence.
According to Wikipedia: “quantum decoherence is the mechanism by which quantum systems
interact with their environments to exhibit probabilistically additive behavior - a feature of
classical physics - and give the appearance of wavefunction collapse. Decoherence occurs when
a system interacts with its environment, or any complex external system, in such a
thermodynamically irreversible way that ensures different elements in the quantum superposition
of the system+environment's wavefunction can no longer interfere with each other. …
Decoherence does not provide a mechanism for the actual wave function collapse; rather it
provides a mechanism for the appearance of wavefunction collapse. The quantum nature of the
system is simply "leaked" into the environment so that a total superposition of the wavefunction
still exists, but exists beyond the realm of measurement.”
Where decoherence is understood to have occurred, I recognize that all components of the wave
function are presumed to still exist in a global superposition even after a measurement or
environmental interaction has rendered the prior coherences no longer "accessible" to local
observers. I further understand that all lesser interactions are believed to be time reversible.
However, my question is: Did any of the interactions in these experiments increase the entropy
of the system? Of course, entropy is also time symmetric, but it is my understanding that entropy
should increase both backward and forward in time. I also recognize that entropy is not
deterministic, but only probabilistic. However, if the time reversal path includes an event where
entropy increased, should we not then ask: How is the entropy that was introduced by this
interaction undone? Please note that I have tentatively excluded the active erasure experiments
from this conjecture in recognition of Huw Price's paper "Boltzmann's Time Bomb", because
active erasure (such as causing all the idler photons to have the same spin) might be seen to have
created a localized low entropy state. Nonetheless, for the passive erasure experiments, would
not any time reversal that must start with a photon's lower coherence (and presumably higher
entropy) and must then, going backwards in time, regain some greater coherence (and
presumably lower entropy) be statistically less plausible than the experimental results
imply?
See “The Thermodynamical Arrow of Time: Reinterpreting the Boltzmann-Schuetz Argument”
by Milan M. Ćirković
(http://philsci-archive.pitt.edu/archive/00000941/03/Boltzmann_final5.doc)
and “Probability, Arrow of Time and Decoherence” by Guido Bacciagaluppi
(http://arxiv.org/abs/quant-ph/0701225v1).
The foregoing has hopefully established the dilemma. TSQM, by retaining the information from
both the initial and final boundary conditions (the point of origination and the final actualization
event) provides a first-order resolution to the problem, which I believe the Indra’s Net model that
I will introduce below further refines.
As an aside, you may note that Complementarity was implicitly challenged by the foregoing.
One of my earlier attempts to address this issue may be found at
http://tech.groups.yahoo.com/group/quantum_theory/message/5311. There is more that needs to
be said on this topic, but not in this post.
IS TIME REVERSAL ASSOCIATED ONLY WITH IDEAL MEASUREMENTS?
Please recall the paper I cited above in connection with the EPR paradox titled “Space-like
Separation in a Bell Test assuming Gravitationally Induced Collapses,” which confirmed the
nonlocal nature of quantum correlations. (See:
http://arxiv.org/PS_cache/arxiv/pdf/0803/0803.2425v1.pdf) It also, I believe, shut the door on a
previously viable explanation for the EPR paradox that would have otherwise been applicable to
the Adiabatic Process-based speculation that I will discuss below. This is important because
Adiabatic Processes take time, such that, even with the 18km separation described in the above
referenced paper, no space-like separation could exist for the full duration of the adiabatic
process. The experiment is also important because the fiber optic system used in the experiment
may be adaptable to actually test the ideas I am presenting.
In order to proceed further, we must assume that Time Symmetric Quantum Mechanics (TSQM)
has an ontological foundation and provides a causal explanation of the EPR paradox. With this
foundation, we are ready to ask the following question: Why should time-reversal be limited
only to the two points associated with Alice’s and Bob’s measurements? Clearly these events, as
described, involved gross diabatic perturbations to the system, which physicists traditionally
associate with “wavefunction collapse”. However, what would happen if Alice conducted her
experiment over an extended period of time? Accordingly, instead of conducting an ideal
measurement (which we have consistently assumed would “discover” that the spin of her particle
was in the up direction), our gedanken experiment will assume that Alice adiabatically imposes
the up direction of spin on her particle (See also the “Adiabatic Theorem”).
Aharonov and Rohrlich in "Quantum Paradoxes: Quantum Theory for the Perplexed" provided
the following introductory description to this concept:
"How do we eliminate quantum jumping? Consider a closed system in an eigenstate of Hf, a
Hamiltonian with discrete, nondegenerate eigenvalues. If Hf does not depend on time, the system
never jumps to another state. What if Hf does depend on time? It can depend on time. If we
prepare the system in an eigenstate of Hf, and Hf changes quickly, the system may jump to
another state. But let Hf change adiabatically (slowly); if Hf changes slowly enough, the system
never jumps to another state. Instead of jumping, it adjusts itself to the changing Hamiltonian.
The system behaves like a heavy weight hanging on a thin string. Pull the string quickly - it
snaps and the weight falls. Pull the string slowly - the weight comes up with it."
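The contrast Aharonov and Rohrlich describe between jumping and adiabatic following can be
sketched for a two-level system: rotate the Hamiltonian's field direction from z to x, once
quickly and once slowly, and check whether the state tracks the instantaneous ground state. This
is a toy numerical sketch of the adiabatic theorem with parameters of my own choosing, not a
model of Alice's actual apparatus.

```python
import numpy as np

# Two-level illustration of the adiabatic theorem quoted above.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(s):
    """Field direction rotates from +z to +x as s goes 0 -> 1."""
    theta = s * np.pi / 2
    return -(np.cos(theta) * sz + np.sin(theta) * sx)   # note H(s)^2 = identity

def evolve(total_time, steps=20000):
    dt = total_time / steps
    psi = np.array([1, 0], dtype=complex)       # ground state of H(0)
    for i in range(steps):
        h = hamiltonian((i + 0.5) / steps)
        # exp(-i H dt) = cos(dt) I - i sin(dt) H, valid because H^2 = I
        psi = np.cos(dt) * psi - 1j * np.sin(dt) * (h @ psi)
    return psi

def ground_overlap(psi):
    """Probability of finding the final instantaneous ground state."""
    vals, vecs = np.linalg.eigh(hamiltonian(1.0))
    ground = vecs[:, 0]                          # eigh sorts eigenvalues ascending
    return abs(ground.conj() @ psi) ** 2

print(ground_overlap(evolve(total_time=0.01)))   # sudden change: ~0.5
print(ground_overlap(evolve(total_time=100.0)))  # adiabatic change: ~1.0
```

When the string is pulled quickly (small total_time) the state is left behind and only half the
probability lands in the new ground state; pulled slowly, the state comes up with it.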
Based on the foregoing, may we assume that the spin of Alice's particle will, at the end of this
adiabatic process, always be found to be in the up direction when an ideal measurement is
eventually made? If so, it would also seem reasonable to assume that, should intermittent ideal
measurements be conducted on Alice’s particle, the probability of finding the spin of her particle
to be in the up direction would be found to gradually increase as a function of the intensity and
duration of the adiabatic processes that Alice applies prior to each intermediate ideal
measurement being made. As I have not yet found confirmation that this hypothesis has been
experimentally tested, I believe as noted above that such a test might be conducted in the context
of the 18km experiment.
Assuming that Alice has experimentally confirmed that the adiabatic process she employs caused
her particles to always spin in the up direction when measured, let’s now consider Bob’s
particle. We know that Bob's particle was entangled with Alice's particle and, because of this
entanglement, the spin of Bob's particle will (due to conservation of angular momentum) always
be opposite to that found after Alice’s particle is measured. Accordingly, it would be reasonable
to assume that, at the end of the adiabatic process that Alice uses to cause her particle to spin in
the up direction, the spin of Bob’s particle along the same axis would always be in the down
direction when his measurement is made.
If so, we might then ask: what would happen if Alice never made an ideal measurement? Must
Alice conduct an ideal measurement on her particle in order to “cause” the spin of Bob’s particle
to be in the down direction when his measurement is made? It is my conjecture that such an
ideal measurement by Alice is not required. Again, this conjecture might potentially be tested in
the context of the 18km experiment of Salart et al.
Previously, I speculated that the probability of finding the spin of Alice’s particle to be in the up
direction would gradually increase as a function of the intensity and duration that Alice’s
adiabatic processes were applied prior to her intermediate ideal measurements being made. If
this speculation is experimentally verified, we must again consider what happens to Bob’s
particle. Again, I would speculate that, because Bob’s particle is entangled with Alice’s particle,
any increase in the probability of finding the spin of Alice’s particle to be in the up direction
would cause a corresponding increase in the probability of finding the spin of Bob's particle to be
in the down direction; and that this finding would occur whether or not any ideal measurements
were ever conducted by Alice on her particles.
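The perfect anticorrelation assumed throughout this discussion (for measurements of both
particles along the same axis on a singlet pair) follows directly from the Born rule. The sketch
below models only the standard ideal-measurement statistics for the singlet state; it does not
model the adiabatic process conjectured above, and the sampling setup is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Singlet state in the {|00>, |01>, |10>, |11>} basis: (|01> - |10>)/sqrt(2)
singlet = np.array([0, 1, -1, 0], dtype=float) / np.sqrt(2)

# Joint outcome probabilities when both spins are measured along z
probs = singlet ** 2            # Born rule: |amplitude|^2 -> [0, 0.5, 0.5, 0]

samples = rng.choice(4, size=10000, p=probs)
alice = samples // 2            # 0 = up, 1 = down
bob = samples % 2

# Every run gives opposite outcomes for Alice and Bob
print(np.all(alice != bob))     # True
```

Only the mixed outcomes |01> and |10> ever occur, so Alice up always accompanies Bob down
and vice versa, whatever mechanism is ultimately responsible for the correlation.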
Again, I have not yet found experimental confirmation for any aspect of my hypotheses.
Nonetheless, if my speculations could be experimentally verified, the next question that must be
asked is: Could TSQM constitute a fundamental process and an attribute of reality (in the
quantum sense) that is constantly occurring independent of any observer's interactions?
[For a gedanken experiment I devised that tangentially explores whether the adiabatic
manipulation of entangled particles might be possible over space-like separations, see:
http://www.physicsforums.com/showthread.php?p=1841170 ]
INDRA’S NET
The foregoing and a brief write-up I posted some time ago titled Time Symmetric Quantum
Mechanics (TSQM) and Adiabatic Processes were intended to test whether time reversal was
uniquely associated with an ideal measurement or whether, if adiabatic processes operate within
TSQM, it might be a continuously occurring phenomenon that is independent of any action,
interaction or observation. If so, a generalization of TSQM to include wave interactions at each
Planck volume
would seem to be worthy of some further consideration. Moving the foregoing into the realms of
philosophy and theology, I have become intrigued by the metaphor of Indra's Net from
Mahayana Buddhism. As quoted in Wikipedia, Francis Harold Cook describes Indra’s Net in his
book “The Jewel Net of Indra” as follows:
“Far away in the heavenly abode of the great god Indra, there is a wonderful net which has been
hung by some cunning artificer in such a manner that it stretches out infinitely in all directions.
In accordance with the extravagant tastes of deities, the artificer has hung a single glittering
jewel in each "eye" of the net, and since the net itself is infinite in dimension, the jewels are
infinite in number. There hang the jewels, glittering like stars in the first magnitude, a wonderful
sight to behold. If we now arbitrarily select one of these jewels for inspection and look closely at
it, we will discover that in its polished surface there are reflected all the other jewels in the net,
infinite in number. Not only that, but each of the jewels reflected in this one jewel is also
reflecting all the other jewels, so that there is an infinite reflecting process occurring.”
In this speculation, I see Indra's Net as a metaphor of an underlying ontological reality that is
existent within every Planck volume and operates within the context of time symmetric quantum
mechanics (TSQM). As analogy, consider the Quantum Harmonic Oscillator model as applied in
calculating indexes of refraction. In the Indra’s Net model, each Planck volume may be viewed
as an oscillator through which all TSQM interactions occur and from which all that we
understand to be “matter” or “corporeality” precipitates.
Recall that I observed above, in the context of the EPR paradox, that weak measurements of
Bob's and Alice's particles immediately prior to their respective ideal measurements (e.g., at
time t=n-1) demonstrate that each particle has remained entangled, and I asked what, if
anything, distinguished time t=n from time t=n-1 so as to cause the wave function to
instantaneously change from a momentum eigenstate to a position eigenstate along the t=n
hyperplane. Let’s return to this question in the new context of the Indra’s Net model and
generalize TSQM to include forward in time and backward in time interactions at each Planck
volume. In this variation of the TSQM model, which I have called a “zipper effect”, information
about the system’s condition at both t=n (when the actualization event occurs) and at t=n-1
(when the adiabatic measurement occurred) is understood to “zipper” alternately backward and
forward in time, carrying information of the state change between t=n-1 and t=n to every Planck
volume within the original forward in time “cone”, and beyond, such that the wavefunction
would appear to have collapsed along the t=n hyperplane.
“INDRA'S NET” AND THE HUYGENS-FRESNEL PRINCIPLE
A Feynman lecture I had listened to some time ago had described how every point within the
slots of a Young's double slit experiment behaved as if they were each light emitters, but I had
failed to draw a connection. I was recently listening to a downloaded lecture on optics that
completed my introduction to the Huygens-Fresnel principle. I was blown away. Subsequent
research has revealed the explanatory value of the Huygens-Fresnel principle not only in the
context of the Double-slit experiment (Photon dynamics in the double-slit experiment) that
Feynman was describing; but generally in Refraction, Diffraction (also Fraunhofer diffraction
and Fresnel diffraction), Point spread functions, Color Theory and Fourier optics.
As noted in Wikipedia, the “Huygens principle follows formally from the fundamental postulate
of quantum electrodynamics – that wavefunctions of every object propagate over any and all
allowed (unobstructed) paths from the source to the given point. It is then the result of
interference (addition) of all path integrals that defines the amplitude and phase of the
wavefunction of the object at this given point, and thus defines the probability of finding the
object (say, a photon) at this point. Not only light quanta (photons), but electrons, neutrons,
protons, atoms, molecules, and all other objects obey this simple principle.”
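The principle quoted above can be sketched numerically: treat every point across a single slit as
a secondary emitter, sum the wavelets at the screen, and the familiar single-slit diffraction
envelope emerges, with its first minimum close to where Fraunhofer theory predicts. This is an
illustrative toy sketch with arbitrary parameters of my own choosing.

```python
import numpy as np

# Huygens-Fresnel sketch: every point in the slit acts as a light emitter.
wavelength = 1.0
k = 2 * np.pi / wavelength
slit_width = 5.0
screen_dist = 1000.0

sources = np.linspace(-slit_width / 2, slit_width / 2, 400)   # emitters in the slit
x = np.linspace(-500, 500, 1001)                              # screen positions

# Sum the spherical wavelet from every source point at every screen point
r = np.hypot(screen_dist, x[:, None] - sources[None, :])
field = np.sum(np.exp(1j * k * r) / r, axis=1)
intensity = np.abs(field) ** 2
intensity /= intensity.max()

# Fraunhofer theory puts the first zero near x = wavelength*screen_dist/slit_width
first_zero = wavelength * screen_dist / slit_width            # = 200 here
i_zero = np.argmin(np.abs(x - first_zero))
print(intensity[i_zero])   # small: near the predicted minimum of the envelope
```

Nothing in the sum "knows" about diffraction in advance; the envelope is purely the
interference (addition) of all the elementary wavelets, exactly as the quoted passage describes.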
It now seems trivial that my Indra’s Net conjecture is, in essence, a generalization of the
Huygens-Fresnel principle to apply to time-symmetric quantum mechanics. However, the results
seem to have explanatory value (as described in this post) that I have not yet found described in
the literature. Based on the Quantum Paradox class I attended and the papers I have read, I don’t
believe Jeff Tollaksen and Yakir Aharonov have yet explored whether the Huygens principle is
relevant to their work. However, I have found that others have already linked the Huygens
principle with time symmetry in the context of John Cramer’s Transactional Interpretation of
QM and the Wheeler-Feynman Absorber Theory. Although not cited in the above references,
the audio lecture noted an anomaly with the Huygens principle: as a point emitter, light should
radiate in all directions, but in reality it radiates only in the direction of the original wavefront.
I will need to work with someone more skilled in the math than I am, but I intuitively feel that
TSQM will explain why this occurs.
Amir D. Aczel, in his book “Entanglement: The Greatest Mystery in Physics,” writes that
Christiaan Huygens’s (1629-1695) “most remarkable achievement … was a
theory about the nature of light. Huygens interpreted Römer’s discovery of the finite speed of
light as implying that light must be a wave propagating through some medium. On this
hypothesis, Huygens constructed an entire theory. Huygens visualized the medium as the ether,
composed of an immense number of tiny, elastic particles. When these particles were excited
into vibration, they produced light waves." Dr. Aczel concludes that "[the Huygens] model has
been scientifically discredited …”
Although distinguishable, if I am fully honest with myself, the Indra's Net model I am exploring
is intended as a description of reality that the physics community believes has already been
discredited. Obviously, I have much more research to do.
THE ANTHROPIC PRINCIPLE
The scientific starting points for this analysis are the following: (i) a paper by Yakir Aharonov
and Lev Vaidman titled “The Two-State Vector Formalism: an Updated Review”
(arxiv.org/abs/quant-ph/0105101v2 Jun 2007) and (ii) the “destiny states” referenced in a paper
by Yakir Aharonov and Jeff Tollaksen titled “New Insights on Time-Symmetry in Quantum
Mechanics” (arxiv.org/abs/0706.1232v1 [quant-ph] 8 Jun 2007). (For a more comprehensive
analysis of “destiny states,” see also Jeff Tollaksen’s graduate school dissertation. See also
generally the time symmetric quantum mechanics and weak measurement papers I previously
cited by Jeff Tollaksen and Yakir Aharonov.)
The theological starting point for this analysis is presented in A Compilation of the Bahá’í
Writings on the Unity of Science and Religion. (For future updated versions see also:
http://groups.yahoo.com/group/science-religion/files/) Additionally, there are Bahá'í Writings
from which it seems reasonable to speculate that Creation in some way involves a preexistent
contingency. As example:
“The preexistence of God is the preexistence of essence, and also preexistence of time, and the
phenomenality of contingency is essential and not temporal, …” (Abdu'l-Bahá, Some Answered
Questions, p. 203)
“…the knowledge of God in the realm of contingency does not produce the forms of the things.
On the contrary, it is purified from the past, present and future. It is identical with the reality of
the things; it is not the cause of their occurrence.” (Abdu'l-Bahá, Some Answered Questions,
Chapter 35, p. 138)
(See also “Ontology as Poem”)
With the foregoing papers as foundation, we may now speculate on whether religion and science
might be joined to provide an explanation of the anthropic principle. (See: Anthropic principle).
In joining these scientific and theological foundations, the idea is incredibly simple. Here, I
apply time symmetry to a theologically posited Primal Point of origination which simultaneously
evolves forward in time into the source of all contingent futures and backward in time as the sum
of all histories. As one illustration of the numerous Writings that reference this posited starting
condition, please consider the following:
“When He purposed to call the new creation into being, He sent forth the Manifest and
Luminous Point from the horizon of His Will; it passed through every sign and manifested itself
in every form until it reached the zenith, as bidden by God, the Lord of all men.” (Baha'u'llah,
Tablets of Baha'u'llah, p. 101)
“…the Primal Point from which appear all things and to which they revert, from which they
originate and to which they return. Thus is it the Primal Oneness (al-ahadiyyah) in its essence
and the derived Oneness (al-wahidiyyah) in its attributes. And from it there appears plurality
through manifestation [is manifested plurality, zuhur] and illumination, and it becomes divided,
dispersed and manifold, and radiates.” `Abdu'l-Bahá’s Commentary on the Qur'anic Verses
concerning the Overthrow of the Byzantines: the Stages of the Soul (provisional translation by
Moojan Momen)
A more detailed description, in theological terms, of the forward in time evolution may be found
in Ontology as Poem. The speculative hypothesis I am working with is that the process would
posit an oscillator within each Planck volume that is entangled with all other oscillators in the
universe such that each carries the information of all possible states of all such Planck volumes.
Actualizations then occur within this preexisting matrix. In application, any actualization at time
t+1 would be strongly constrained by a forward in time boundary condition (which may be
presumed to originate from an actualization event at time t). The actualization at time t+1, if not
otherwise more strongly constrained by a more proximate future boundary condition at time t+n,
would be at least weakly constrained by the end-of-time boundary condition. In theological
terms, creation both begins and ends with Divine Unity. (For a compilation of Writings on this
theme, see: A Compilation of the Bahá’í Writings on the Arcs of Ascent and Descent. See also:
http://groups.yahoo.com/group/science-religion/files/) In scientific terms, it at least provides a
basis to speculate that these interacting boundary conditions are, in the aggregate, the cause of
what we now apprehend as the "anthropic principle" and in complex systems as "chaos theory".
Consider the following quotation from a paper I have cited several times in this post: (“New
Insights on Time-Symmetry in Quantum Mechanics”; arXiv:0706.1232v1 [quant-ph] 8 Jun
2007):
"Up until now we have limited ourselves to the possibility of 2 boundary conditions which obtain
their assignment due to selections made before and after a measurement. It is feasible and even
suggestive to consider an extension of QM to include both a wavefunction arriving from the past
and a second “destiny” wavefunction coming from the future, which are determined by 2
boundary conditions, rather than a measurement and selection. This proposal could solve the
issue of the “collapse” of the wavefunction in a new and more natural way: every time a
measurement takes place and the possible measurement outcomes decohere, then the future
boundary condition simply selects one out of many possible outcomes [35, 32]. It also implies a
kind of “teleology” which might prove fruitful in addressing the anthropic and fine-tuning issues
[77]. The possibility of a final boundary condition on the universe could be probed
experimentally by searching for “quantum miracles” on a cosmological scale. While a “classical
miracle” is a rare event that can be explained by a very unusual initial boundary-condition,
“Quantum Miracles” are those events which cannot naturally be explained through any special
initial boundary-condition, only through initial-and-final boundary-conditions. By way of
example, destiny-post-selection could be used to create the right dark energy or the right
negative pressure (etc [81])"
SPECIAL RELATIVITY
As noted above, TSQM is a non-relativistic theory. This means that, in the context of the EPR
paradox and conventional QM theory where Alice’s actualization event is deemed to occur at
time t=0, not only will the observed “collapse” along the t=0 hyperplane not be instantaneous for
observers in other frames but that observers in different frames will also disagree about temporal
order, length, energy and all other physical quantities that are covariant but not invariant. The
model I am contemplating seems to work to resolve this paradox if each planck volume were
understood to be an oscillator that both participates in the transmission of information forward
and backward in time and in the actualization of potential states that aggregate into our
observable universe. Using this approach, I feel that I have been able to explain disagreements
about temporal order (e.g. the Relativity of Simultaneity Problem), but length contraction at
Planck scales remains a puzzle. Also, something like Doubly-special relativity would seem to be
required.
(For a greatly expanded treatment of the relativity of simultaneity problem, see endnote 32 to
“Ontology as Poem.”)
INFORMATION CAPACITY OF THE UNIVERSE
One implication of preexistent contingencies or superposition that I have been considering in the
context of our universe is that the information requirements would be enormous. In this context,
please consider the following:
"There has recently been proposed a limit on the computational power of the universe, ie the
ability of Laplace's Demon to process an infinite amount of information. The limit is based on
the maximum entropy of the universe, the speed of light, and the minimum amount of time taken
to move information across the Planck length, and the figure turns out to be [2 to the 130th
power] bits. Accordingly, anything that requires more than this amount of data cannot be
computed in the amount of time that has lapsed so far in the universe. …" (See Pierre-Simon
Laplace)
A second approach to evaluate the Information Capacity of the universe that arose from the study
of black holes generates an even lower information limitation. (For background information, see
the Holographic Principle, Black hole thermodynamics, and the Bekenstein bound-- i.e. S(V) ≤ A/4;
where “S” is the entropy or information that can be contained within a region of volume of space
“V” and “A” is the two-dimensional area of the black hole's event horizon in Planck units).
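As a worked example of the bound S(V) ≤ A/4 (area in Planck units), the back-of-envelope
sketch below computes the information capacity of a solar-mass black hole horizon. The
constants are rounded standard values, and the result, on the order of 10^77 bits, should be read
as an order-of-magnitude illustration only.

```python
import math

# Bekenstein-Hawking capacity of a solar-mass horizon, S = A/4 in Planck units
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
M_sun = 1.989e30     # solar mass, kg

l_planck = math.sqrt(hbar * G / c**3)      # Planck length, ~1.6e-35 m
r_s = 2 * G * M_sun / c**2                 # Schwarzschild radius, ~3 km
area = 4 * math.pi * r_s**2                # horizon area in m^2

entropy_nats = area / (4 * l_planck**2)    # S = A/4 in Planck-area units
entropy_bits = entropy_nats / math.log(2)

print(f"{entropy_bits:.2e}")               # on the order of 10^77 bits
```

The same formula applied to the de Sitter horizon, rather than a stellar black hole, is what yields
the ~10^122 bit figure Davies discusses below.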
P.C.W. Davies, in his paper titled “The Problem of What Exists,” writes:
“The de Sitter horizon, which is the end state of cosmological evolution in those models for
which dark energy (of constant energy density) eventually dominates, saturates the holographic
bound, and so sets an upper limit on the information capacity of the universe throughout its
entire evolutionary history. Thus, taking the astronomical observations at face value, it would
appear that the universe never contained, and never will contain, more than about 10^122 bits of
information, although the number of bits processed up to a given epoch will continue to rise with
time. Such a rise will not continue indefinitely, however. The holographic bound implies that the
universe is a finite state system, and so it will visit and re-visit all possible states (i.e. undergo
Poincaré cycles) over a stupendous length of time (Goheer et. al., 2003).”
This bleak forecast is the antithesis of the theological model I have been pursuing.
“Briefly, the world of existence is progressive. …the world of existence is continuously
progressing and developing” (Abdu'l-Bahá, The Promulgation of Universal Peace, p. 378)
“’As the Creator has pleased to implant in the atoms His dynamic Power, a latent purposeful
behavior, one thing is contained in another and consecutively there develops from a latent
condition an appearance made visible.[’] This sounds rather like the theory of Darwin who, a
thousand years later and with better methods, with more perfect tools, lifted the veil of secrecy
from many of the miracles of creation” (Abdu'l-Bahá quoted in Fallscheer Notes, February
1910, P. 51, “Concerning Oriental Darwinism”; first part quotes Abu al-Hasan al-Ash'arí)
“In the physical creation, evolution is from one degree of perfection to another.” (Abdu'l-Bahá,
Paris Talks, p. 66)
“Nothing has been created without a special destiny, for every creature has an innate station of
attainment. … Each kingdom of nature holds potentialities and each must be cultivated in order
to reach its fulfillment.” (Abdu'l-Bahá, Divine Philosophy, p. 110)
Again quoting from “The Problem of What Exists,” P.C.W. Davies writes:
“The term emergence is used to describe the appearance of new properties that arise when a
system exceeds a certain level of size or complexity, properties that are absent from the
constituents of the system. It is a concept often summed up by the phrase that ‘the whole is
greater than the sum of its parts.’” …
“A weakly emergent system is one in which the causal dynamics of the whole is completely
determined by the causal dynamics of its parts (together with information about boundary
conditions and the intrusion of any external disturbances), but for which the complete and
detailed behaviour could not be predicted without effectively performing a one-to-one
simulation.” …
“A strongly emergent system is one in which higher levels of complexity possess genuine causal
powers that are absent from the constituent parts. That is, wholes may exhibit properties and
principles that cannot be reduced, even in principle, to the cumulative effect of the properties and
laws of the components. A corollary of strong emergence is the presumption of “downward
causation” (Campbell, 1974, Bedau, 2002) in which wholes have causal efficacy over parts. …
“These strong emergentists do not claim that additional “organizing principles” over-ride the
underlying laws of physics, merely that they complement them. Emergent laws, they claim, may
be consistent with, but not reducible to, the normal laws of physics operating at the microscopic
level.”
Davies then asks the question: “But is this claim correct?” and casts his argument in the context
of Laplace’s omniscient demon (which I commend to your attention) reaching the following
determination:
“There are indeed cosmological models for which no limits exist on the information content and
processing power of the universe. However, recent observations favour cosmological models in
which there are fundamental upper bounds on both the information content and the processing
rate. A Landauer-Laplacian demon associated with such a cosmological model would perforce
inherit these limitations, and thus the fundamental fuzziness or ambiguity in the nature of
physical law associated with these limitations will translate into a bound on the predictability of
complex physical systems, even in principle, if one adopts the Landauer-Wheeler notion of
physical law.”
“It is of interest to determine just how complex a physical system has to be to encounter the
Lloyd limit. For most purposes in physical science the limit is too weak to make a jot of
difference. But in cases where the parameters of the system are combinatorically explosive, the
limit can be significant. For example, proteins are made of strings of 20 different sorts of amino
acids, and the combinatoric possibility space has more dimensions than the Lloyd limit of 10^120
when the number of amino acids is greater than about 60 (Davies, 2004). Curiously, 60 amino
acids is about the size of the smallest functional protein, suggesting that the threshold for life
might correspond to the threshold for strong emergence, supporting the contention that life is an
emergent phenomenon (in the strong sense of emergence). Another example concerns quantum
entanglement. An entangled state of about 400 particles also approaches the Landauer-Lloyd
complexity limit (Davies, 2005a). That means the Hilbert space of such a state has more
dimensions than the informational capacity of the universe; the state simply cannot be specified
within the real universe. (There are not enough degrees of freedom in the entire cosmos to
accommodate all the coefficients!) A direct implication of this result is the prediction that a
quantum computer with more than about 400 entangled components will not function as
advertised (and 400 is well within the target design specifications of the quantum computer
industry).”
— P.C.W. Davies, “The Problem of What Exists”
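Davies’s figure of “about 400 entangled components” can be checked with a few lines of arithmetic: an entangled state of n two-level particles requires 2^n complex coefficients, so the question is when 2^n first exceeds the Lloyd limit of roughly 10^120. A minimal sketch (the 10^120 constant is taken from the quoted passage; the variable names are my own):

```python
import math

LLOYD_LIMIT_EXP = 120  # Lloyd limit: roughly 10**120 bits/operations

# An entangled state of n two-level particles lives in a Hilbert space of
# dimension 2**n. Find the smallest n with 2**n > 10**120, i.e.
# n > 120 / log10(2).
n_threshold = math.ceil(LLOYD_LIMIT_EXP / math.log10(2))

print(n_threshold)  # 399 -- consistent with Davies's "about 400"
```

So the threshold falls just below 400 particles, matching the figure in the quotation.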
If life itself must be considered a strongly emergent property, we must either join Davies in his
discussion of the “10^500 instantiations” of “the ‘standard’ multiverse model based on the string
theory landscape and eternal inflation” or we must find some way to get information into the
system that Davies has overlooked.
The resolution of this dilemma, I believe, may be derived from Raphael Bousso’s papers “The
Holographic Principle for General Backgrounds” and “The Holographic Principle”, which
conclude that the Bekenstein bound is a special case that “holds if the surface permits a
complete, future-directed, ingoing light-sheet”. In this context, the “destiny states” discussed
above do not appear to be subject to the Bekenstein bound limitation. If so, this minimally
doubles the information capacity of the universe and in the context of the “Anthropic Principle”
discussed above, seems to provide a basis for “strong emergence” to remain theoretically viable.
Additionally, as an implication of my Indra’s Net conjecture, each Planck volume would potentially
be the “caustic” of a new open light sheet. I am still reflecting on the implications of this
speculation, but if it withstands further evaluation, it would seem that for weakly gravitating
systems (i.e., anything other than a black hole) information on the brane boundary might be
accessible without regard to the Bekenstein bound limitation. This speculation, however, does
require black holes to constitute a special case in which each Planck area of the event horizon is a
caustic beyond which no future-directed information may be obtained.
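For reference, the area bound in question (in Bousso’s papers the light-sheet condition determines when it applies to a spacelike region) can be stated in its standard form, which is not quoted from the sources above: the entropy S contained within a surface of area A satisfies

```latex
S \;\le\; \frac{A}{4\,\ell_P^{2}}\,k_B,
\qquad
\ell_P^{2} = \frac{G\hbar}{c^{3}},
```

where \(\ell_P\) is the Planck length. Applying bounds of this kind to the cosmological horizon is what yields the finite informational capacity of the universe invoked throughout the Davies quotations.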
CONCLUDING SPECULATIONS

Although I am far from satisfied with the foregoing and the current status of my endnotes to
Ontology as Poem, I have begun to expand my Indra’s Net conjectures into the realms of
consciousness, thoughts, and memory. As imagery for the ideas I am exploring, I visualize my
thoughts as light emanating outward and being reflected in and from Indra’s gems. As I have
playfully explored these ideas, I see these emanations cause memories, ideas, and inspirations to
holographically emerge. In my ruminations, a focused thought becomes analogous to a
flashlight’s beam penetrating the darkness of an unknown; an intense thought illumines to a
greater depth, and the higher frequency and more energetic emanations of psychics and saints
cause concealed realities to become “known”.
Wildly speculative? Probably so, but please consider Valerie Hunt’s book titled “Infinite Mind –
Science of the Human Vibrations of Consciousness”. On page 345, Dr. Hunt writes:
“At any moment in time, a person’s state of consciousness and his level of awareness is
predictable from his mind-field frequency pattern (Exhibit 20).
“During the state of ordinary reality, focus is on the material world. The energies are confined to
the lower frequencies up to 250 Hz, coming from nerve and brain activity.
“In altered states, with a psychic, hypnotic or metaphysical reality, the frequencies are extended
above 400 Hz. These are the beginning frequencies of the mind-field. These people, who
“channel” information about current life and who make predictions about future material
happenings, have powerful energies in this lower mind-field spectrum. Some of these psychics
display unusual skills to obtain hidden information.
“Persons in trance states show small ranges of even higher vibrations that seem to stand alone,
unconnected with other levels of reality or material thought. Here they obtain unbelievable
information about distant happenings as they give predictions about distant events. …
“There is another group of persons we describe as mystics, who display the broadest awareness
with complete range of uninterrupted frequencies. Mystics have available at all times the
capacities of the psychics and the trance mediums, but they additionally tap into lofty spiritual
knowings. Their predictions are universal and transcending and full of wisdom. We recorded
their field frequencies to 200 kHz, as high as our instruments could record. In their presence
one sees or senses powerful white light.”
Based on the foregoing, I would speculate that the “mind”, through the radiant emanations that
Dr. Hunt has recorded, may have access to Indra’s Net and the preexistent information I have
speculatively posited and, through some as-yet-undetermined means, renders informational aspects
of the matrix intelligible. What if the probability density of future potentialities or contingencies
could be influenced by our thoughts? Would this not provide a speculative explanation for the
power of intercessory prayer and "new age" teachings on the "law of attraction"? Many physicists have
embraced an impersonal Divine “First Cause” and have rejected the “personal God” of the
Bible. Does this model not provide a frame of reference where both concepts may be
understood?
I have also reflected on the Indra’s Net conjecture in the context of soul, spirit, and mind. The
Compilation of the Bahá’í Writings on the Soul, Spirit and Mind that I am preparing marks the
beginning of this trajectory of my research. (For future updates to this compilation, see
http://groups.yahoo.com/group/science-religion/files/) Could it be that the probability density of
our contingent futures is defined at our conception by what we call "souls", and that the
potentialities of our souls are actualized through the intermediary we call "spirit" by the agency
we call "mind"? If so, soul and spirit could be understood to constitute the essential and eternal
human reality that is merely connected with what we call "body". This model seems consistent
with the Bahá'í texts I have assembled and the scientific model I am investigating. Is this model
correct? Even though the process of falsification can never be complete, I feel encouraged to
continue.