Quantum Physics and Mental Causation
By Robert O. (“Bob”) Doyle
A paper prepared for a conference on Quantum Physics and the Philosophy of Mind,
Milan, Italy, June 4-6, 2013.
Abstract:
The problem of mental causation depends heavily on the idea of “causal closure” of
the world under deterministic laws of nature. The central question in the classic
mind-body problem is how can an immaterial mind move a material body if the
“causal chains” are limited to interactions between physical things.
We propose a model or theory of mind as the pure information in the biological
information-processing system that is the brain and central nervous system. We
show how this model can support a non-reductive physicalism. Information is
physical, but immaterial. Quantum physics produces breaks in the strict “causal
chains” that have been used to “reduce” biological phenomena to physics and
chemistry and mental events to neural events. But statistical causes remain.
Biological information processors are quite different from the digital computers
analogized in computational theories of mind.
We argue against neurobiological reductionism and strictly determined “bottom-up
causation.” At the same time, we defend a supervenient statistical “downward
causation” that allows free thoughts (mental events that are not pre-determined) to
cause willed actions. Actions are ultimately statistical but “adequately determined”
by our motives, reasons, intentions, desires, and feelings, in short, by our character.
Introduction:
Quantum mechanical events have generally been thought to be unhelpful by
philosophers of mind. Adding indeterminism to mental events apparently would
only make our actions random and our desires the product of pure chance. If our
willed actions are not determined by anything, they say, we are neither morally
responsible nor truly free. Whether mental events are reducible to physical events,
or whether mental events can be physical events without such a reduction, the
interposition of indeterministic quantum processes apparently adds no explanatory
power. And of course if mental events are epiphenomenal, they are not causally
related to bodily actions. Epiphenomenal access to quantum physics would not help.
[Some philosophers of mind (e.g., John Searle, David Chalmers, Christof Koch?)
have considered the “mystery” of quantum indeterminism as potentially relevant to
the philosophy of mind, for example specifically to the problem of consciousness,
because it too is a “mystery.” But explaining one mystery with another mystery is
questionable, at best.]
Our challenge is to admit some quantum indeterminism into a “statistical” causality
(an indeterminism which renders our mental causes merely “statistical”) yet
nevertheless allow us to describe mental causes as “adequately determined.” That is
to say, mental causes are essentially - and for all practical purposes - “determined,”
because the statistics in most cases are near to certainty. Even more importantly,
our thoughts - and subsequent actions – are in no way completely “pre-determined,”
neither from causal chains that go back before our birth such as our genetic
inheritance, nor from the immediate events in the “fixed past,” which together with
assumed deterministic “laws of nature,” are thought by most compatibilist
philosophers to completely explain our actions.
In part I of this paper we explore a mental causation model of non-reductive
physicalism that treats the mind as physical, but nevertheless immaterial.
Specifically, we identify the mind as the information in a brain. We regard the brain
and central nervous system as a biological information-processing system, one that
has very little in common with the computational brain models of today’s cognitive
science.
Information is immaterial. It is neither matter nor energy, although it needs matter
for its (temporary) embodiment and energy for its communication - for example to
other minds or for storage in the external environment.
In part II, we review the arguments for quantum mechanical indeterminism and
show how quantum physics introduces indeterminism into the motions of the
fundamental particles, making their motions stochastic and their future positions
statistical, not certain.
Indeterminism breaks critically located links in the “causal chains” that have
historically been used by philosophers of mind to argue for “bottom-up” causation.
Breaking these deterministic links means that the properties of biological cells
cannot be reduced to those of their component atoms and molecules, those of plants
and animals cannot be reduced to those of their cells, and the mind cannot be reduced
to neurons in the brain.
In Part III we give three examples of breaking causal chains - between atoms and
molecules, between molecules and cells, and between neurons and the information
structures of the mind.
Part IV then makes the case for specific “emergent” properties at those higher
hierarchical levels that could not have been predicted from the laws of lower levels.
We then defend an emergent capability for “downward causation,” in which
biological systems have limited causal powers over their cells, and their cells have
causal powers over component atoms and molecules. At the highest level, we show
how the mind can exert a supervenient downward causation on the brain. Mental
causation is needed for an agent to act willfully. The agent’s actions are adequately
determined by the agent’s motives, reasons, feelings, intentions, and desires.
In part V we show that determinism itself is an emergent property. Determinism
shows up only when we can average over a large enough number of atoms and
molecules so that indeterministic events are statistically insignificant. Since this
kind of determinism is only highly probable, and not perfectly certain, we call it
“adequate” determinism. We review the process that creates emergent adequately
determined macroscopic information structures in the universe, despite the second
law of thermodynamics which demands an increase in disorder or entropy.
Part VI situates this quantum physical and informational model of mental causation
in the spectrum of theories of mind, somewhere between the classical substance
dualism of René Descartes and the mind-body identity theory of Ullin T. Place,
Herbert Feigl, and J.J.C. Smart.
Part I. An information-based model for non-reductive physicalism.
The leading defender of a non-reductive physicalism was Donald Davidson. Its
leading critic is Jaegwon Kim. In his 1970 essay "Mental Events," Davidson described
his "Anomalous Monism":
“Mental events such as perceivings, rememberings, decisions, and actions resist
capture in the nomological net of physical theory. How can this fact be reconciled with
the causal role of mental events in the physical world? Reconciling freedom with causal
determinism is a special case of the problem if we suppose that causal determinism
entails capture in, and freedom requires escape from, the nomological net. But the
broader issue can remain alive even for someone who believes a correct analysis of free
action reveals no conflict with determinism. Autonomy (freedom, self-rule) may or may
not clash with determinism; anomaly (failure to fall under a law) is, it would seem,
another matter.”
In order to allow mental events to cause physical events, yet not be reducible to
them, Davidson developed the following set of arguments.
1. "at least some mental events interact causally with physical events"
2. "where there is causality, there must be a law: events related as cause and
effect fall under strict deterministic laws."
3. "there are no strict deterministic laws on the basis of which mental events
can be predicted and explained." (mental events are "anomalous.")
Davidson viewed his work as extending that of Immanuel Kant on reconciling
(eliminating the anomalous contradiction between) freedom and necessity.
Davidson gave the term supervenience a specific philosophical meaning within
analytic philosophy. He saw supervenience as the last hope for a non-reductive
physicalism, which does not reduce the mental to the physical, the psychological to
the neurophysiological.
Davidson set two requirements for supervenience:
1. a domain can be supervenient on another without being reducible to it (non-reduction)
2. if a domain supervenes, it must be dependent on and be determined by the
subvenient domain (dependence)
In Jaegwon Kim's 1989 presidential address to the American Philosophical
Association, he said:
“The fact is that under Davidson's anomalous monism, mentality does no causal work.
Remember: in anomalous monism, events are causes only as they instantiate physical
laws, and this means that an event's mental properties make no causal difference.”
Kim claims that:
“The most fundamental tenet of physicalism concerns the ontology of the world. It
claims that the content of the world is wholly exhausted by matter. Material things are
all the things that there are; there is nothing inside the spacetime world that isn't
material, and of course there is nothing outside it either. The spacetime world is the
whole world, and material things, bits of matter and complex structures made up of bits
of matter, are its only inhabitants.”
Kim says that Davidson's goal of "non-reductive physicalism" is simply not possible.
The physical world is "causally closed," says Kim:
“what options are there if we set aside the physicalist picture? … This means that one
would be embracing an ontology that posits entities other than material substances —
that is, immaterial minds, or souls, outside physical space, with immaterial, nonphysical
properties.”
An informational theory of mind posits the existence of something physical, yet
immaterial. It is both a “non-reductive physicalism” and an “immaterial
physicalism.”
The germ of the idea was apparent in a series of papers in the 1950’s and 1960’s by
Hilary Putnam and Jerry Fodor, whose theory of functionalism pointed to
characteristics of mind that are “multiply realizable” in different physical substrates.
They were inspired by the then new digital computers, whose software could be
moved between different computers and perform the same functions. Mind is the
software in the brain hardware, they argued.
Our informational theory of mind shows how thoughts that are embodied in one
mind can be converted from their material instantiation and transformed into the
pure energy of sound waves or electromagnetic waves by which they are
communicated to other minds, there to be embodied again. During communication,
the information in human knowledge is not even embodied in material, though it is
still a part of the physical world.
Part II. Quantum physics introduces the chance needed to break the chains of strict
causal determinism
The problem of mental causation can be traced back to the first half of the
seventeenth century and René Descartes, who divided the world into three distinct
substances: God as the primary substance; matter, which has a spatial extension;
and mind, a thinking substance which is the locus of intellect and a free will.
Descartes thought that animal and human bodies are machines subject to laws of
matter alone. His famous “cogito, ergo sum” argument was used to distinguish his
thinking substance from his material body.
“I saw that while I could pretend that I had no body and that there was no world and no
place for me to be in, I could not for all that pretend that I did not exist. I saw on the
contrary that from the mere fact that I thought of doubting the truth of other things, it
followed quite evidently and certainly that I existed; whereas if I had merely ceased
thinking, even if everything else I had ever imagined had been true, I should have had no
reason to believe that I existed. From this I knew I was a substance whose whole
essence or nature is simply to think, and which does not require any place, or depend on
any material thing, in order to exist.” (Discourse on Method, 6:32)
Whereas Descartes argued that the thinking substance had freedom of the will, his
materialist contemporary Thomas Hobbes saw necessity and deterministic laws of
nature that denied human freedom.
“That which I say necessitates and determinates every action is the sum of all those
things which, being now existent, conduce and concur to the production of the action
hereafter, whereof if any one thing were wanting, the effect could not be produced. This
concourse of causes, whereof every one is determined to be such as it is by a like
concourse of former causes, may well be called the decree of God.” (Of Liberty and
Necessity, § 11)
In the second half of the seventeenth century, the great Isaac Newton discovered his
three laws of motion, which seemed for many to come down on the side of Hobbes and
determinism. If Newtonian laws could explain the heavenly motions, surely they could
explain all the workings of terrestrial material machines.
The power to predict eclipses of moon and sun many years in advance was
extrapolated in the eighteenth century by Pierre-Simon Laplace into the idea of a
super intelligence that can predict the future based on a knowledge of positions,
velocities, and forces of all the particles of matter.
Thus arose the concept of “causal closure” and what might be called “the great chain
of causation.” But in the late nineteenth century, some philosophers and scientists
came to wonder whether our physical laws, derived as they are from fallible
experiments and measurements with only finite accuracy, are as absolute and
perfect and strict as most scientists and philosophers believed.
Charles Sanders Peirce was the leading philosopher who argued that the world
contained absolute chance. Peirce was deeply impressed by chance as a way to bring
diversity and "progress" (in the form of increasingly complex organisms) to the
world, including the mind. Peirce was unequivocal that chance was a real property
of the world. He named it Tyche (τύχη). He writes in his third Monist article, "The
Law of Mind,"
“In an article published in The Monist for January, 1891, I endeavored to show what
ideas ought to form the warp of a system of philosophy, and particularly emphasized
that of absolute chance. In the number of April, 1892, I argued further in favor of that
way of thinking, which it will be convenient to christen tychism (from τύχη, chance)... I
have begun by showing that tychism must give birth to an evolutionary cosmology, in
which all the regularities of nature and of mind are regarded as products of growth.”
Physicist Ludwig Boltzmann was the leading scientist who argued against perfect
deterministic laws. Boltzmann's first attempt to derive the second law of
thermodynamics (that entropy or disorder in the universe always increases)
assumed that gas particles followed strict deterministic dynamical laws, that is,
Newton's classical mechanics. But Boltzmann was immediately criticized by his
colleagues James Clerk Maxwell and Josef Loschmidt.
Boltzmann's mentor and colleague Loschmidt criticized Boltzmann's demonstration
of entropy increase on the grounds that dynamical laws are time reversible. If all the
particles could be turned around exactly (or if time could be reversed), in such a
case the entropy should decrease, violating the second law.
As a result, Boltzmann developed a statistical version of his H-Theorem, which again
showed that entropy would always increase, as long as the fundamental particles
themselves had what he called “molecular disorder” or “molecular chaos.”
Boltzmann explained the gas laws of thermodynamics as statistical laws resulting
from averaging over an enormous number (billions of billions of billions) of chaotic
atoms and molecules. Boltzmann maintained (as his student Franz Exner, and even
Exner's student Erwin Schrödinger would briefly insist) that no amount of
observational evidence can ever justify our assumptions of strict determinism.[]
Indeed, at the beginning of the twentieth century, observational evidence led to the
notion that fundamental physical quantities like energy and momentum need to be
“quantized,” putting irreducible limits on our ability to observe atoms and
molecules. This in turn led to quantum indeterminism, breaking the causal chain of
determinism that was thought to put limits on the workings of the human mind.
Max Planck in 1900 introduced the quantum concept. He explained the spectral
distribution of colors (wavelengths) in electromagnetic radiation by using
Boltzmann’s principle that the entropy S of a gas is related to the probabilities W for
random distribution of molecules in different places in its container (S = k logW,
where k is Boltzmann’s constant). Boltzmann’s calculations of probabilities used the
number of ways that particles can be distributed in various volumes of space. Planck
used the same combinatorial analysis, but now for the number of ways that
elements of energy could be distributed among a number of electromagnetic
oscillators. To simplify the calculations, both Boltzmann and Planck assumed that
energies could only be multiples of a unit of energy, E = ε, 2ε, 3ε, …. Planck regarded this
quantum hypothesis as a mathematically convenient device. He did not think it
represented reality. Nevertheless, this was the beginning of quantum theory. And
even more importantly it provided the first connection between the laws of matter
and the laws of pure energy (radiation). Planck told his son he had discovered
something as significant as Copernicus or Newton. [ref- Kuhn]
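A minimal sketch can make this combinatorial counting concrete. Assuming Planck's formula for the number of ways W to distribute P indistinguishable energy elements among N oscillators, W = (N + P − 1)!/(P!(N − 1)!), and Boltzmann's principle S = k log W, a few lines of Python (with arbitrary illustrative numbers) compute both quantities:

```python
from math import comb, log

k = 1.380649e-23   # Boltzmann's constant, J/K

def planck_W(N, P):
    """Number of ways to distribute P indistinguishable energy elements
    among N oscillators (Planck's 1900 combinatorial formula)."""
    return comb(N + P - 1, P)

def entropy(N, P):
    """Entropy from Boltzmann's principle S = k log W."""
    return k * log(planck_W(N, P))

# Arbitrary illustrative numbers: 10 oscillators sharing 20 energy elements.
N, P = 10, 20
print(planck_W(N, P))   # 10015005 distinct distributions
print(entropy(N, P))    # ~2.2e-22 J/K
```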
Five years later, Albert Einstein, in the same year that he published his theory of
special relativity (also connecting energy and matter, but now with the famous
equation E = mc²), his theory of Brownian motion, and his explanation of the
photoelectric effect (for which he was awarded the Nobel Prize), proposed that
electromagnetic radiation really consists of discrete quantities of light (he called
them Lichtquanten - today they are called photons). Radiant energy E = hν, where h is
Planck’s constant - the “quantum of action,” Planck called it - and ν is the frequency
of the radiation.
When in 1913 Niels Bohr published his work on the quantization of energy levels in
atoms (the so-called "old" quantum theory), he did not imagine that the transitions
between quantized energy levels involved quantized light particles. And this despite
the use of Einstein's formula and Planck's constant to determine the frequency of
the light energy emitted in a transition between energy levels: E₂ − E₁ = hν₂₁.
As late as 1917, Einstein felt very much alone in believing the reality (his emphasis)
of light quanta:
“I do not doubt anymore the reality of radiation quanta, although I still stand quite
alone in this conviction.”
(Letter to Besso, quoted by Abraham Pais, "Subtle is the Lord...", p. 411)
But the introduction of Boltzmann's statistical mechanical thinking into
electromagnetic theory had produced what Einstein called a "weakness in the
theory." It introduced the reality of an irreducible objective chance!
If light quanta are particles with energy E = hν travelling at the velocity of light c,
then they should have a momentum p = E/c = hν/c. When light is absorbed by
material particles, this momentum will clearly be transferred to the particle. But
when light is emitted by an atom or molecule, a problem appears.
Conservation of momentum requires that the momentum of the emitted particle will
cause an atom to recoil with momentum hν/c in the opposite direction. However,
the standard theory of spontaneous emission of radiation is that it produces a
spherical wave going out in all directions. A spherically symmetric wave has no
preferred direction. In which direction does the atom recoil?
An outgoing light particle must impart momentum hν/c to the atom or molecule, but
the direction of the momentum cannot be predicted! Neither can the theory predict
the time when the light quantum will be emitted.
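A rough numerical illustration (with values chosen only for concreteness, not taken from Einstein's paper) shows that this recoil is physically real while its direction remains unpredictable:

```python
# Recoil of an atom emitting a single photon (illustrative values only).
h = 6.62607015e-34   # Planck's constant, J*s
c = 2.99792458e8     # speed of light, m/s

nu = 5.09e14                  # frequency of a visible (sodium D) photon, Hz
p_photon = h * nu / c         # photon momentum p = hν/c, kg*m/s

m_atom = 3.8e-26              # mass of a sodium atom, kg
v_recoil = p_photon / m_atom  # recoil speed the atom must acquire, m/s

print(p_photon)   # ~1.1e-27 kg*m/s
print(v_recoil)   # ~0.03 m/s - a real recoil, but in an unpredictable direction
```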
Such a random time was not unknown to physics. When Ernest Rutherford derived
the law for radioactive decay of unstable atomic nuclei in 1900, he could only give
the probability of decay time. Einstein saw the connection with radiation emission:
“It speaks in favor of the theory that the statistical law assumed for [spontaneous]
emission is nothing but the Rutherford law of radioactive decay.”
(Pais, "Subtle is the Lord...", p. 411)
But the inability to predict both the time and direction of light particle emissions,
said Einstein in 1917, is "a weakness in the theory... that it leaves time and direction
of elementary processes to chance (Zufall, ibid.)." And, of course for Einstein, God
does not play dice.
In 1925 Max Born, Werner Heisenberg, and Pascual Jordan formulated their matrix
mechanics version of quantum mechanics as a superior formulation of Niels Bohr's
old quantum theory. The matrix mechanics confirmed discrete states and "quantum
jumps" of electrons between the energy levels, with emission or absorption of
Einstein’s light quanta.
In 1926, Erwin Schrödinger developed wave mechanics as an alternative
formulation of quantum mechanics. Schrödinger disliked Heisenberg’s abrupt
jumps. His wave mechanics was a continuous theory, but it predicted the same
energy levels and was otherwise identical in its predictions to the discrete theory.
Within months of the new wave mechanics, Max Born showed that while
Schrödinger's probability amplitude wave function evolves deterministically over
time, it predicts the positions and velocities of atomic particles only
probabilistically.
It is critically important to understand that probability is nothing but abstract
information. It is neither matter nor energy, despite the fact that it often needs
energy to be transmitted and it needs matter to be embodied in the receiver.
Heisenberg used Schrödinger's wave functions to calculate the "transition
probabilities" for electrons to jump from one energy level to another. Schrödinger's
wave mechanics was easier to visualize and much easier to calculate than
Heisenberg's own matrix mechanics, although wave mechanics and matrix
mechanics are isomorphic - two ways of looking at the one theory of quantum
mechanics.
Despite this identity, Schrödinger always stressed that the time evolution of his
wave function, which gives the probability of observing a particle, is a deterministic
process, while Heisenberg emphasized that in any measurement that locates a
particle there is an irreducible indeterminism.
In early 1927, Heisenberg announced his uncertainty principle limiting our ability to
measure simultaneously the position and momentum of atomic particles. The product of
the position error and the momentum error is greater than or equal to the reduced
Planck constant, h/2π.
ΔpΔx ≥ h/2π
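A back-of-the-envelope sketch, using the inequality exactly as stated above, suggests why this indeterminacy dominates at the atomic scale yet becomes negligible for the larger structures discussed in Part III (the masses and lengths below are illustrative orders of magnitude):

```python
from math import pi

# Minimum momentum spread implied by Δp·Δx ≥ h/2π, as stated above.
h = 6.62607015e-34   # Planck's constant, J*s

def min_delta_p(delta_x):
    """Smallest momentum uncertainty compatible with the relation above."""
    return h / (2 * pi * delta_x)

m_electron = 9.11e-31   # kg
m_ion = 3.8e-26         # roughly a sodium ion, kg

# Electron confined to an atom-sized region (~1 angstrom).
print(min_delta_p(1e-10) / m_electron)   # velocity uncertainty ~1e6 m/s - enormous

# Ion localized only to within a micron inside a neuron.
print(min_delta_p(1e-6) / m_ion)         # ~3e-3 m/s - negligible for neural dynamics
```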
Heisenberg declared that the new quantum theory undermines the classical idea of
causality. Confirming Einstein’s insights from many years earlier (but without
mentioning him), Heisenberg said:
"We cannot - and here is where the causal law breaks down - explain why a particular
atom will decay at one moment and not the next, or what causes it to emit an electron in
this direction rather than that." []
Heisenberg was convinced that quantum mechanics had put an end to classical ideas
of causality and strict determinism. Although Planck, Einstein, Schrödinger, and
others had their doubts about Heisenberg’s attack on causality, quantum physics
has proved to be the most accurate theory of the physical world.
When radiation interacts with matter, first being absorbed from a specific direction,
then being emitted in an arbitrary direction (as Einstein first showed in 1917), the
motion of the atom changes stochastically. As a result, any deterministic path
information is destroyed. Even a Laplace demon could not know where the atom
goes next. If it were to form part of a molecule, for example, there is no way that that
molecule’s existence was determined by immediate past events.
Consequently, we can say that there is no “bottom-up” causation that links the
positions and velocities of atoms with the positions and velocities of molecules that
they might form.
Note that after they are combined into molecules, there is now a “downward
causation,” because the motions of the molecules constrain the motions of their
component atoms. This “downward causation” is an emergent property, as we shall
see in Part IV.
We shall now see that quantum physics can break upward causal deterministic
chains between the physics of the atomic and molecular level and the biophysics of
the organic world. It also breaks any deterministic chains between the
neurobiological brain and the mind, replacing them with a statistical causality that
provides us with what William James called “some looseness in the joints.” []
Part III. Breaking causal chains in cells and neurons
Inside a cell, the high-speed random motions of the molecules are critically important
to the rapid creation of proteins. As a ribosome travels down the messenger RNA
reading each three-nucleotide codon, it expects there to be an available transfer
RNA complex nearby, charged with the appropriate amino acid and with an anticodon
matching the codon, close to the binding site on the growing polypeptide chain.
The average time between bindings of transfer RNAs at the ribosome is one microsecond. []
The only reason that the correct one is available when necessary is that, on
average, every other kind of transfer RNA was also available in the same microsecond,
thanks to those high-speed random motions. There could be no pre-arranged sequence
of the correct transfer RNAs of the kind needed to have the lower-level molecules
controlling which proteins are being built in a “bottom-up” causal way.
Once again, the lower level is random and stochastic. The information that is causing
or determining the process is only present at the next higher level, from which it is
exerting downward causal control over the molecules, albeit statistically.
Note that the amino acids attached themselves to their transfer RNAs in a similarly
stochastic fashion. There is no deterministic control coming from below. Each
transfer RNA grabs the amino acid it needs as the molecules collide with it at random. This
layer of random chaos erases any past path information of the molecules, insulating
the upper level from deterministic “bottom up” causation, even as the upper level
exerts its downward causal control.
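A toy simulation (with invented parameters, offered only as a sketch of the statistics involved) shows how the “correct” transfer RNA becomes reliably available even though every individual arrival is a matter of chance:

```python
import random

# Toy model: random arrivals of tRNA species at a ribosome site.
# The parameters are invented for illustration, not measured values.
N_SPECIES = 20              # distinct tRNA species in this toy model
ARRIVALS_PER_WINDOW = 200   # random collisions with the site per time window

def correct_trna_arrives(needed_species):
    """One time window: does the needed tRNA show up among random arrivals?"""
    arrivals = (random.randrange(N_SPECIES) for _ in range(ARRIVALS_PER_WINDOW))
    return needed_species in arrivals

trials = 10_000
hits = sum(correct_trna_arrives(needed_species=7) for _ in range(trials))
print(hits / trials)   # close to 1.0: statistically reliable, never pre-determined
```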
Neurons, while microscopically small compared to the body, are huge compared to
even the largest biological macromolecules. They are large enough to be
“adequately determined” systems, not normally susceptible to atomic or molecular
indeterminacy. When neurons fire, billions of atoms (mostly charged ions) are
involved. At 98 degrees Fahrenheit (37 degrees Celsius), individual ions are moving
randomly, at very high speeds. They move in concert (by “downward causation”) when
the relatively high-voltage action potential moves down the neuron. There is no
possibility for atomic-level motions to cause (“bottom-up”) the firing of the neuron.
But the situation is a bit different at the synapses and at all of the tiny ion channels
along the neuron. When neurotransmitters travel across the synaptic cleft, they are
carried in another kind of “quanta,” the vesicles of neurotransmitter, each
containing only a few thousand molecules (of dopamine, acetylcholine, etc.).
Part IV. The emergence of downward mental causation.
Although the concept of emergence has become very popular in the last few decades
in connection with the development of chaos and complexity theories, it is actually a
very old idea, dating at least to the nineteenth century, with some hints of it in
ancient and medieval philosophy.
The idea of emergence was implicit in the work of John Stuart Mill and explicit in
"emergentists" like George Henry Lewes, Samuel Alexander, C. Lloyd Morgan, and C.
D. Broad.
John Stuart Mill discusses the Laws of Nature in his System of Logic, Book III,
chapter IV and describes the Law of Universal Causation in chapter V ("The truth
that every fact which has a beginning has a cause"). Then, in chapter VI, Mill
explores the "Composition of Causes" in mechanics where the parallelogram of two
vector forces explains the resultant force. However, this simple principle from
dynamics, says Mill, does not apply to materialist chemistry nor to more complex
biological life.
Although Mill did not use the term "emergent," he makes the concept clear enough:
“This principle [of simple composition], however, by no means prevails in all
departments of the field of nature. The chemical combination of two substances
produces, as is well known, a third substance with properties different from those of
either of the two substances separately, or of both of them taken together. Not a trace of
the properties of hydrogen or of oxygen is observable in those of their compound,
water. The taste of sugar of lead is not the sum of the tastes of its component elements,
acetic acid and lead or its oxide; nor is the colour of blue vitriol a mixture of the colours of
sulphuric acid and copper. This explains why mechanics is a deductive or demonstrative
science, and chemistry not. In the one, we can compute the effects of combinations of
causes, whether real or hypothetical, from the laws which we know to govern those
causes when acting separately; because they continue to observe the same laws when in
combination which they observed when separate: whatever could have happened in
consequence of each cause taken by itself, happens when they are together, and we have
only to "cast up" the results. Not so in the phenomena which are the peculiar subject of
the science of chemistry. There, most of the uniformities to which the causes conformed
when separate cease altogether when they are conjoined; and we are not, at least in the
present state of our knowledge, able to foresee what result will follow from any new
combination, until we have tried the specific experiment.
“If this be true of chemical combinations, it is still more true of those far more
complex combinations of elements which constitute organized bodies; and in which
those extraordinary new uniformities arise, which are called the laws of life. All
organized bodies are composed of parts similar to those composing inorganic nature,
and which have even themselves existed in an organic state; but the phenomena of life,
which result from the juxtaposition of those parts in a certain manner, bear no analogy
to any of the effects which would be produced by the action of the component
substances considered as mere physical agents.”
(A System of Logic, Book III, chapter VI)
George Henry Lewes also used Mill's example of the properties of water not being
reducible to those of oxygen and hydrogen. If all effects are only the consequences of
their components, everything would be completely determined by mathematical
laws, he said, and then coined the term "emergent":
“Although each effect is the resultant of its components, the product of its factors, we
cannot always trace the steps of the process, so as to see in the product the mode of
operation of each factor. In the latter case, I propose to call the effect an emergent. It
arises out of the combined agencies, but in a form which does not display the agents in
action.”
(Problems of Life and Mind (1875), vol. 2, p. 412)
In his 1912 book Instinct and Experience, C. Lloyd Morgan revived the term
"emergent".
In his 1920 book Space, Time, and Deity Samuel Alexander initially cited Lloyd
Morgan as the source of emergentism, but Lloyd Morgan reminded Alexander about
Lewes' 1875 work. Alexander wrote:
“much of what I have to say has been already said by Mr. Lloyd Morgan in the
concluding chapter of his work on Instinct and Experience. The argument is that mind
has certain specific characters to which there is or even can be no neural counterpart. It
is not enough to say that there is no mechanical counterpart, for the neural structure is
not mechanical but physiological and has life. Mind is, according to our interpretation of
the facts, an 'emergent' from life, and life an emergent from a lower physico-chemical
level of existence. It may well be that, as some think, life itself implies some independent
entity and is indeed only mind in a lower form. But this is a different question, which
does not concern us yet. If life is mind, and is a non-physical entity, arguments derived
from the conscious features of mind are at best only corroborative, and it is an
inconvenience in these discussions that the two sets of arguments are sometimes
combined. Accordingly, I may neglect such considerations as the selectiveness of mind
which it shares with all vital structures.”
(Space, Time, and Deity (1920), vol. 2, p. 14)
Later, in his 1922 Gifford Lectures and 1923 book Emergent Evolution, Lloyd
Morgan defined emergent evolution and introduced the related "top-down" concept
of hierarchical supervenience:
“in the physical world emergence is no less exemplified in the advent of each new
kind of atom, and of each new kind of molecule. It is beyond the wit of man to number
the instances of emergence. But if nothing new emerge - if there be only regrouping of
pre-existing events and nothing more - then there is no emergent evolution.
“Such emergence of the new is now widely accepted where life and mind are
concerned. It is a doctrine untiringly advocated by Professor Bergson.
“One could not foretell the emergent character of vital events from the fullest possible
knowledge of physico-chemical events only...Such is the hypothesis of emergent
evolution.
“Under emergent evolution there is progressive development of stuff which becomes
new stuff in virtue of the higher status to which it has become raised under some
supervenient kind of substantial go-togetherness.”
(Emergent Evolution (1923), pp. 1-6)
But Lloyd Morgan's idea of emergent novelty may have been an epistemic rather
than an ontological claim. The laws of nature may still pre-determine all the higher-level properties, though our understanding of the laws may not be enough to allow
us to predict the higher levels:
“May we bring emergence itself under the rubric of causation?...Is emergent evolution
itself the expression of an orderly and progressive development? If so (and such is my
contention), then emergence itself takes rank, as Mill and Lewes also contended, among
the "laws of nature." We may be unable to predict the probable nature of a character
that is emergently new. We could not have foretold on the basis of physico-chemical
events only what the nature of life would be. But that is due to our ignorance before the
event of the law of its emergence. May we then, say:
“...That such novelty is for us unpredictable owing to our partial knowledge of the
plan of emergence up to date, and our necessary ignorance of what the further
development of that plan will be.“
(Emergent Evolution (1923), pp. 281-2)
Vitalists like Henri Bergson and Hans Driesch may not have used the term
emergence, but they strongly supported the idea of teleological (purposeful), likely
non-physical causes, without which they thought that life and mind could not have
emerged from physical matter.
Emergence supports the idea of mental causation in particular and the more general
idea of downward causation, for example the downward control of the motions of a
cell's atoms and molecules by supervening biological macromolecules.
Reductionism is the view that material particles and the physical forces between
them, which are supposed to be mathematically analytical, can explain all that
happens in the world. Chemistry is thought reducible to physics, biology reducible
to chemistry, psychology (via neuroscience) reducible to biology, and mind/brain
(or cognitive science) reducible to psychology. The causal relations are "bottom-up."
The finest details of brain events are thought to be a consequence of motions of the
material particles that comprise the brain. Reductionism implies that mind is an
epiphenomenon, or worse, just an illusion. The reductionist idea that everything is
the consequence of "bottom up" physical causes is often called eliminative
materialism.
By contrast, downward causation is a kind of holism that denies reductionism.
"Wholes" can enforce constraints on their "parts" that make them move in ways that
are unpredictable, even given the complete information about the parts along with
the complete information about the state of the universe outside those parts.
Downward causation is closely related to the concepts of emergence, self-organization, and supervenience. The problem of mental causation is a specific case
of downward causal control that is central to the philosophy of mind.
The idea that minds have powers "over and above" the known physical, chemical,
and biological laws is sometimes called "mentalism." It is related to the idea of
"vitalism," that biology might involve new laws that cannot be reduced to "nothing
but" the laws of physics.
Although Donald Campbell in 1974 coined the phrase "downward causation," the
concept was actually described a few years earlier by Roger Sperry who claimed
that it supports a form of “mentalism.” Sperry cited a wheel rolling downhill as an
example of what he called "downward causal control." The atoms and molecules are
caught up and overpowered by the higher properties of the whole. Sperry compared
the rolling wheel to an ongoing brain process or a progressing train of thought in
which the overall properties of the brain process, as a coherent organizational
entity, determine the timing and spacing of the firing patterns within its neural
infrastructure.
In 1977, Karl Popper announced that he had changed his mind on the importance of
indeterminism. He developed a two-stage model of free will and said that it was an
example of downward causation. Popper cited both Sperry and Campbell as the
source of the idea of downward causation.
In a lecture called Natural Selection and the Emergence of Mind, Popper said he had
changed his mind (a rare admission by a philosopher) about two things. First, he
now thought that Darwinian evolution by natural selection was not a “tautology”
that would make it an unfalsifiable theory. Second, he had come to accept the random
variation and selection of ideas as a model of free will.
“The selection of a kind of behavior out of a randomly offered repertoire may be an act
of choice, even an act of free will. I am an indeterminist; and in discussing
indeterminism I have often regretfully pointed out that quantum indeterminacy does
not seem to help us; for the amplification of something like, say, radioactive
disintegration processes would not lead to human action or even animal action, but only
to random movements.
“I have changed my mind on this issue. A choice process may be a selection process, and
the selection may be from some repertoire of random events, without being random in
its turn. This seems to me to offer a promising solution to one of our most vexing
problems, and one by downward causation.” []
Since Popper’s two-stage model of free will and the analogy with biological
evolution, several other thinkers have developed two-stage free will models,
including Daniel Dennett, Henry Margenau, Robert Kane, David Sedley and Anthony
Long, Roger Penrose, David Layzer, Julia Annas, Alfred Mele, John Martin Fischer,
Stephen Kosslyn, John Searle, Bob Doyle, and Martin Heisenberg. []
In 2007, Christof Koch chaired a conference on free will sponsored by the
Templeton Foundation. The proceedings were published in the 2009 book
Downward Causation and the Neurobiology of Free Will. One of the principal
contributors, Nancey Murphy, and her colleague Warren Brown, had argued in their
2007 book, Did My Neurons Make Me Do It?, for the existence of downward causation
on the basis of complexity and chaos theories. Murphy and Brown’s goal was to
defend a non-reductive version of physicalism whereby humans are (at least
sometimes) the authors of their own thoughts and actions.
“If humans are purely physical, and if it is the brain that does the work formerly
assigned to the mind or soul, then how can it fail to be the case that all of our
thoughts and actions are determined by the laws of neurobiology? If this is the
case, then free will, moral responsibility, and, indeed, reason itself would appear
to be in jeopardy.” []
The problem, say Murphy and Brown, is not neurobiological determinism, it is
neurobiological reductionism that they want to avoid. This is odd, because it is
precisely the elimination of strict determinism that prevents the causal closure and
reduction of the mind to the neural level below, and that allows the emergence of
freedom for the mind to exert downward causation on all the levels below.
Murphy and her colleagues propose to get the kind of non-reductive physicalism
that Donald Davidson wanted based on the emergence of complex systems in far-from-equilibrium conditions in deterministic chaos that exhibit spontaneous self-organization.
Ilya Prigogine won the Nobel Prize in 1977 for his work on thermodynamic
irreversibility. He is best known for his studies of “dissipative” structures and self-organizing complex systems. These are far-from-equilibrium systems into which
flow matter and energy with very low entropy. These systems maintain their self-organization and low entropy in apparent violation of the second law of
thermodynamics, a fact that had been cited by many other thinkers before
Prigogine, including the general systems theorist Ludwig von Bertalanffy and
physicist Erwin Schrödinger. In his landmark 1944 essay What Is Life?, Schrödinger
famously described life as “feeding on negative entropy.”
“What is the characteristic feature of life? When is a piece of matter said to be alive?
When it goes on 'doing something', moving, exchanging material with its environment,
and so forth, and that for a much longer period than we would expect an inanimate
piece of matter to 'keep going' under similar circumstances…
“It is by avoiding the rapid decay into the inert state of 'equilibrium' that an organism
appears so enigmatic; so much so, that from the earliest times of human thought some
special non-physical or supernatural force (vis viva, entelechy) was claimed to be
operative in the organism, and in some quarters is still claimed.
“What then is that precious something contained in our food which keeps us from
death? That is easily answered. Every process, event, happening – call it what you will;
in a word, everything that is going on in Nature means an increase of the entropy of the
part of the world where it is going on. Thus a living organism continually increases its
entropy – or, as you may say, produces positive entropy – and thus tends to approach
the dangerous state of maximum entropy, which is death. It can only keep aloof from it,
i.e. alive, by continually drawing from its environment negative entropy – which is
something very positive as we shall immediately see. What an organism feeds upon is
negative entropy. Or, to put it less paradoxically, the essential thing in metabolism is
that the organism succeeds in freeing itself from all the entropy it cannot help
producing while alive.” []
As we shall see below, the low entropy of all information structures is only possible
because excess positive entropy is carried away, ultimately to the cosmic
background of the expanding universe.
Some of the standard examples given for such self-organizing complex physical
systems include whirlpools, Bénard convection cells, basalt columns, and soil
polygons, all of which apparently violate the normal tendency toward disorder
according to the second law of thermodynamics. These temporary ordered
structures depend for their existence on a steady flow of matter and energy through
them. The order visible is a minimal amount of information.
The complexity theorists like to compare their complex physical system examples to
living systems, which also are self-organizing and require a steady flow of matter
and energy for their continued existence.
But this comparison is very weak. The information in complex physical systems is
thin and fragile. By comparison, the information in biological systems has a lifetime
of the same order as astronomical information structures, despite constantly
changing the matter and energy that embody and communicate biological
information from generation to generation.
From the moment the earliest “self-replicating” and “self-organizing” systems
appeared at the origin of life, the universe has contained purposeful beings. We can
say that a primitive version of the concept of a “self” emerges at this time.
Before this there was no pre-existing telos, no teleology, as many philosophers and
theologians imagined, but from then on there has been teleonomy, as Jacques Monod called it. François
Jacob, who shared the Nobel Prize with Monod, said that the purpose of every cell is
to become two cells.[]
To summarize the emergence of systems exhibiting downward causation, we can
divide them into Prigogine’s complex dissipative physical systems and Schrödinger’s
living biological systems.
The purely physical systems exhibit many interesting features described by
complexity theory, fractals, attractors, non-linearity, etc. They include information
that is maintained over time despite the entropic tendency of second law of
thermodynamics. But they show no apparent purpose. Terrence Deacon calls them
morphodynamic - shape changing.[] As they move, these systems carry their
component particles with them, as Roger Sperry described the atoms in a rolling
wheel. But this is an anemic sort of downward causation.
Biological systems, by comparison, use their information for more than just
maintenance. They self-replicate and exhibit teleonomic purpose. Terrence Deacon
calls these teleodynamic. They evolve to more and more complex species, ultimately
reaching the one that can externalize pure information to be shared with other
members of the species, knowledge used (for better or worse) to control their
environment. They do this by taking advantage of the freedom made possible by
indeterminism and the causal power to act with downward causal control that
results from an emergent adequate determinism. This is the most robust kind of
downward causation. It is the mental causation of the human mind.
Part V. Determinism itself emerges as the universe evolves. [from the initial chaos
and minimal information that exists at the origin to higher and higher information states,
despite the second law of thermodynamics.]
When small numbers of atoms and molecules interact, their motions and behaviors
are indeterministic, governed by the rules of quantum mechanics.
However, when large numbers of microscopic particles get together in aggregates,
the indeterminacy of the individual particles gets averaged over and macroscopic
adequately deterministic laws "emerge."
This determinism is an emergent property. It did not exist in the early universe of
pure radiation and sub-atomic particles when all interactions of particles and
matter were quantum mechanical and indeterministic.
The "laws of nature," such as Newton's laws of motion, are all statistical in nature.
They "emerge" when large numbers of atoms or molecules get together. For large
enough numbers, the probabilistic laws of nature approach practical certainty. But
the fundamental indeterminism of component atoms never completely disappears.
Indeed, quantum events sometimes show up in the macroscopic world, when an
alpha-particle decay fires a Geiger counter, a cosmic ray mutates a gene, when a
single photon is detected by an eye, or a single molecule is smelled by a nose.
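The emergence of adequate determinism from averaging can be sketched with a simple numerical experiment (an illustration only): the scatter of an average over n independent random events falls off roughly as 1/√n, so aggregates of many particles behave almost deterministically while each event remains chancy.

```python
import random
from statistics import mean, pstdev

def average_of_random_events(n):
    """Mean of n random +/-1 events, each one individually unpredictable."""
    return sum(random.choices((-1, 1), k=n)) / n

for n in (10, 1_000, 100_000):
    samples = [average_of_random_events(n) for _ in range(100)]
    # The mean stays near 0; the scatter around it shrinks roughly as 1/sqrt(n).
    print(n, round(mean(samples), 4), round(pstdev(samples), 4))
```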
The emergence of large and “adequately determined” information structures in the
universe raises a question that is cosmological and ultimately metaphysical.
What is the process that creates the information structures in the universe?
Given the second law of thermodynamics, which says that any system will over time
approach a thermodynamic equilibrium of maximum disorder or entropy, in which
all information is lost, and given the best current model for the origin of the
universe, which says everything began in a state of equilibrium some 13.75 billion
years ago, how can it be that living beings are creating and communicating new
information every day? Why are we not still in that state of equilibrium?
The question may be cosmological and metaphysical, but the answer is eminently
practical and physical. It is found in the interaction between quantum physics and
thermodynamics.
The discovery of a two-part cosmic creation process casts light on some classical
problems in philosophy and physics, because it is the same process that creates new
biological species and it explains the freedom and creativity of the human mind.
When information is stored in any structure, two physical processes must occur.
The first is the mysterious collapse of a quantum-mechanical wave function, which
happens in any measurement process, as Heisenberg found. Such quantum events
involve irreducible indeterminacy and chance, but less often noted is the fact that
quantum physics is directly responsible for the extraordinary temporal stability of
most information structures, such as our DNA.
The second is a local decrease in the entropy (which appears to violate the second
law) corresponding to the increase in information. As Schrödinger knew, entropy
greater than the information increase must be transferred away from the location of
new information, ultimately to the cosmic background, to satisfy the second law.
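As a rough quantitative sketch (using the standard identification of k log 2 of entropy per bit, which is not a figure given in this paper), the entropy that must be exported when even a large amount of information is recorded is thermodynamically tiny:

```python
from math import log

k = 1.380649e-23   # Boltzmann's constant, J/K

def min_entropy_export(bits):
    """Smallest entropy (J/K) that must be carried away when `bits` of new
    information are stored locally, if the second law is to be satisfied."""
    return bits * k * log(2)

# Rough illustration: a human genome of ~3.2 billion base pairs, ~2 bits each.
genome_bits = 6.4e9
print(min_entropy_export(genome_bits))   # ~6e-14 J/K - tiny, easily radiated away
```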
This two-step cosmic creation process generates the conditions without which there
could be nothing of value in the universe, nothing to be known, and nothing to do
the knowing.
The second law of thermodynamics says that the entropy (or disorder) of a closed
physical system increases until it reaches a maximum, the state of thermodynamic
equilibrium. It requires that the total entropy of the universe is now and has always
been increasing.
This established fact of increasing entropy has led many scientists and philosophers
to assume that the universe we have is running down. They think that means the
universe began in a very high state of information, since the second law requires
that any organization or order is susceptible to decay. The information that remains
today, in their view, has always been here. There is nothing new under the sun.
But the universe is not a closed system. It is in a dynamic state of expansion that is
moving away from thermodynamic equilibrium faster than entropy-generating
processes can keep up. The maximum possible entropy is increasing much faster
than the actual increase in entropy. The difference between the maximum possible
entropy and the actual entropy is potential information, as shown by David Layzer.[]
Creation of information structures means that in parts of the universe the local
entropy is actually going down. Creation of a low entropy system is always
accompanied by radiation of entropy away from the local structures to distant parts
of the universe, into the night sky for example, and then on to the cosmic
background.
Creation of information structures means that there is more
information in the universe now than at any earlier time. This fact of increasing
information fits well with an open and undetermined universe that is still creating
itself. In this universe, stars are still forming, biological systems are creating new
species, and intelligent human beings are co-creators of the world we live in.
Because information increases, we can be co-creators of the universe.
All this creation is the result of the two-step core creative process. I like to think that
understanding this process is as close as we are likely to come to understanding the
traditional idea of a creator of the universe, a still-present divine providence, the
cosmic source of everything good and evil.
The two-step cosmic creation process also underlies the most plausible and
practical model for free will. Because each free decision to act also creates
information in the world, it too must be a two-stage process, first involving some
quantum indeterminism to freely generate alternative possibilities, then an
adequately determined decision and action statistically caused by reasons, motives,
intentions, feelings and desires. Nearly two-dozen philosophers and scientists have
proposed such a model. []
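A deliberately simple sketch (an illustration of the two-stage structure, not any particular thinker's model; the options and weights are invented) separates the chance generation of alternatives from their adequately determined evaluation:

```python
import random

def generate_alternatives(options, n=3):
    """Stage 1: chance proposes a few alternative possibilities."""
    return random.sample(options, n)

def evaluate(alternative, motives):
    """Stage 2: score an alternative against stable reasons and motives."""
    return sum(motives.get(feature, 0) for feature in alternative[1])

# Invented options and character weights, purely for illustration.
options = [
    ("walk to work", {"health", "frugality"}),
    ("drive to work", {"speed", "comfort"}),
    ("cycle to work", {"health", "speed"}),
    ("stay home", {"comfort"}),
]
motives = {"health": 3, "speed": 2, "comfort": 1, "frugality": 1}

candidates = generate_alternatives(options)                    # undetermined possibilities
choice = max(candidates, key=lambda a: evaluate(a, motives))   # adequately determined choice
print([c[0] for c in candidates], "->", choice[0])
```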
Darwinian evolution is another two-step creative process. Darwin proposed that
chance variations (mutations) in the gene pool would be followed by natural
selection to identify the variations with the greatest reproductive success.
Darwin inspired the first philosopher to propose a two-stage model of free will,
William James, who compared his “mental evolution” explicitly to Darwinian
evolution.
The genius of James’ picture of free will is that indeterminism is the source for what
James called "alternative possibilities" and "ambiguous futures." The chance
generation of such alternative possibilities for action does not in any way limit his
choice to one of them. For James, chance is not the direct cause of actions. He makes
it clear that it is his choice that “grants consent” to one of them.
James was the first thinker to enunciate clearly a two-stage decision process, with
chance in a present time of random alternatives, leading to a choice which grants
consent to one possibility and transforms an equivocal ambiguous future into an
unalterable and simple past. There is a temporal sequence of undetermined
alternative possibilities followed by an adequately determined choice where chance
is no longer a factor.
Information philosophy offers a theory of mind that transcends flawed "brain as
computer" models. The brain is not a digital electronic computer, though it has
information circuits. It is not a Turing machine of deterministic states. The brain is
not a mechanism in the seventeenth-century sense of a time-reversible system
obeying Newtonian laws of physics. Time is of the essence in the mind. The arrow of
time traces the irreversible development of unique biological individuals with
increasing information in minds, including consciousness of self and
others, and a nurturing environment.
Information philosophy sees the brain as a magnificent information processing and
decision system. It is orders of magnitude more capable than the whole of today's
internetworked system of computers and multimedia communications channels at
accumulating actionable knowledge, which is the brain's natural purpose, or telos.
Note there are three distinct kinds of information emergence:
1. the order out of chaos when the matter in the universe forms information
structures like galaxies, stars, and planets, or dissipative complex systems.
2. the order out of order when the material information structures form self-replicating biological information structures with teleonomic purpose. []
3. the pure information out of order when organisms with minds externalize
information, communicating it to other minds and storing it in the environment.
Information philosophy claims that everything created since the origin of the
universe over thirteen billion years ago has involved just two fundamental physical
processes that combine to form the core of all creative processes. The two physical
processes in the core creative process are quantum physics (the collapse of the
wave function probability whenever a new bit of information appears) and
thermodynamics (the inflow of negative entropy, energy, and matter, and the
outflow of positive entropy, energy, and matter).
This two-step core creative process underlies the formation of microscopic objects
like atoms and molecules, as well as macroscopic objects like galaxies, stars, and
planets.
We showed in Part IV that the formation of self-organizing physical systems in
conditions far from equilibrium, which are the subjects of chaos and complexity
theories, is this basic, non-teleonomic form of emergence.
With the emergence of teleonomic (purposive) information in self-replicating
systems, the same core process underlies all biological creation. But now some
random changes in information structures are rejected by natural selection, while
others reproduce successfully. The successful strategies are encoded in biological
macromolecules which transmit the information from generation to generation.
Finally, with the emergence of self-aware organisms and the creation of extra-biological information stored in the environment, the same information-generating
core process underlies communication, consciousness, free will, and creativity.
Part VI. Is our information theory of mind a kind of dualism?
Information is neither matter nor energy, but it needs matter for its embodiment
and energy for its communication.
A living being is a form through which passes a flow of matter and energy (with low
entropy). Genetic information is used to build the information-rich matter into an
overall information structure that contains a very large number of hierarchically
organized information structures. Emergent higher levels exert downward
causation on the contents of the lower levels.
The total information in multi-cellular living beings can develop to be many orders
of magnitude more than the information present in the original cell. The creation of
this new information would be impossible for a deterministic universe, in which
information is constant.
Immaterial information is perhaps as close as a physical or biological scientist can
get to the idea of a soul or spirit that departs the body at death. When a living being
dies, it is the maintenance of biological information that ceases. The matter remains.
Biological systems are different from purely physical systems primarily because
they create, store, and communicate information. Living things store information in
a memory of the past that they use to shape their future. Fundamental physical
objects like subatomic particles and atoms have no history, beyond the fact of their
emergence from radiation in the early universe.
And when human beings export some of their personal information to make it a part
of human culture, that information moves closer to becoming immortal.
Human beings differ from other animals in their extraordinary ability to
communicate information and store it in external artifacts. In the last decade the
amount of external information per person may have grown to exceed an
individual's purely biological information.
Since the 1950s, the science of human behavior has changed dramatically from a
"black box" model of a mind that starts out as a "blank slate" conditioned by
environmental stimuli. Today's mind model contains many "functions" implemented
with stored programs, all of them information structures in the brain. The new
"computational model" of cognitive science likens the brain to a computer, with
some programs and data inherited and others developed as appropriate reactions to
experience.
But the brain should be regarded less as an algorithmic computer with one or more
central processing units than as a multi-channel and multi-track experience
recorder and reproducer with an extremely high data rate. Information about an
experience - the sights, sounds, smells, touch, and taste - is recorded along with the
emotions - feelings of pleasure, pain, hopes, and fears - that accompany the
experience. When confronted with similar experiences later, the brain can
reproduce information about the original experience (an instant replay) that helps
to guide current actions.
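A minimal sketch, under our own simplifying assumptions (the class and method names below are hypothetical, not part of the paper), of how such a recorder and reproducer differs from an algorithmic computer: episodes are stored together with their emotional tags and are replayed by similarity to the current situation, rather than computed by a stored program.

class ExperienceRecorder:
    """Toy sketch of a multi-channel experience recorder and reproducer."""

    def __init__(self):
        self.episodes = []  # list of (features, emotion) pairs

    def record(self, features, emotion):
        # Record sights, sounds, smells, etc. along with the accompanying emotion.
        self.episodes.append((set(features), emotion))

    def reproduce(self, current_features):
        # "Instant replay": return the stored episode whose features overlap
        # most with the current experience, to help guide current actions.
        current = set(current_features)
        if not self.episodes:
            return None
        return max(self.episodes, key=lambda ep: len(ep[0] & current))

err = ExperienceRecorder()
err.record({"dark alley", "loud noise"}, emotion="fear")
err.record({"kitchen", "baking smell"}, emotion="pleasure")
print(err.reproduce({"loud noise", "night"}))  # replays the fearful episode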
Our informational theory of mind is a kind of dualism because of the nature of
information. Information in the mind stands to the material brain as software stands
to hardware; it appears to exist "over and above" its realization in the brain.
When sensory neurons fire, they can record information in memory. When
considering alternative possibilities for action later, reproduced experiences played
back from memory are again pure information, but if acted upon, they can fire
motor neurons that move the body.
Information is neither matter nor energy. It is sometimes embodied in matter and
sometimes is communicated as pure energy. It is the scientific basis for an
immaterial mind that can nevertheless affect the physical world. Information is the
modern spirit.
Where does our informational model of mental causation fit in the spectrum of
theories of mind, from the classical substance dualism of René Descartes to the
mind-body identity theory of Ullin T. Place, Herbert Feigl, and J.J.C. Smart?
It is likely closest to what philosophers of mind call property dualism, because the
world contains just physical substance, and information, while fundamentally
neither matter nor energy, is instantiated physically. It is a non-reductive
physicalism, similar to Davidson’s anomalous monism, resolving the anomaly, or
antinomy as Kant called it, between freedom and necessity.
The informational theory of mind is part of an emergent physicalism, because
information itself is emergent, first from the inchoate universe in the form of
radiation and matter, then in astronomical structures like galaxies, stars, and
planets, then in biological organisms, until we reach the information-creating minds
of intelligent beings. Human knowledge is growing exponentially today at a rate
faster than the expansion of the universe.
What the informational theory of mind tells us is that mind is not "outside space and
time" like the Kantian noumena; it is not an illusion, not an epiphenomenon, and not
a metaphysical panpsychism. It is non-reductively physical.
The dualism of the informational theory of mind maps well onto and sheds light on
the ancient metaphysical dualisms of idealism and materialism and perhaps even on
subject and object.
Conclusion
As we noted above, the locus classicus of twentieth-century discussions of mental
causation is Donald Davidson's 1970 essay "Mental Events," which was revisited in
his 1993 essay, "Thinking Causes," published together with 15 critical essays on
Davidson's work in the 1993 book Mental Causation, edited by John Heil and Alfred
Mele.
What can we then make of Jaegwon Kim’s assertion that "non-reductive
physicalism" is simply not possible, that the physical world is "causally closed?"
Recall Kim’s view:
“what options are there if we set aside the physicalist picture? Leaving physicalism
behind is to abandon ontological physicalism, the view that bits of matter and their
aggregates in space-time exhaust the contents of the world. This means that one would
be embracing an ontology that posits entities other than material substances — that is,
immaterial minds, or souls, outside physical space, with immaterial, nonphysical
properties.”
(Physicalism, or Something Near Enough, p. 71)
We find that the idea of causal closure is a mistake. Quantum physics and the second law
of thermodynamics show us that the physical world is not causally closed. The
expansion of the universe shows it to be physically and informationally open,
otherwise the total amount of information in the universe would be conserved, a
constant of nature, which some mathematically inclined philosophers and scientists
appear to believe (Roger Penrose and Michael Lockwood, for example).
Jaegwon Kim may be wrong about causal closure, but he is correct that the ontology
we embrace does picture the mind as immaterial. Although thoughts in the mind - Descartes’ “thinking substance” - are immaterial, that does not mean that they are
“outside physical space.” Rather they are physically present in our brains in the
same sense as mathematics is there, as all concepts and ideas are there. They are
information, the software in the hardware.
As we saw above, Davidson made three causal claims for mental causation. They can
now be adapted to the statistical causality allowed by quantum physics:
1. Mental events are statistically caused by physical events.
2. Causal relations are normally backed by statistical laws.
3. There are no strict deterministic laws for mental events acting on physical
events. But statistical causality and adequate determinism are good enough.
Davidson's main goal was to deny the reducibility of mental events to physical events
in the lower levels, to deny the claim that the motions of the atoms and molecules at
the base level are causally determinative of everything that happens at higher levels.
This we can grant, but nevertheless maintain a statistical causality that is very often
“adequately determined.”
We can also accept the goal of Nancey Murphy and her colleagues, that there is no
neurobiological reductionism. Moreover, despite Murphy’s acceptance of causal
determinism, there is no strict neurobiological determinism either.
Finally, we can validate the optimistic view of most emergentists, that there is
something “over and above” the physical. The mind is not “nothing but” the brain, as
eliminative materialists (for example, Daniel Dennett and the Churchlands) believe.
The mind is information.
Notes.
Bibliography.
Alexander, Samuel, Space, Time, and Deity
Baars, Bernard. In the Theater of Consciousness
Boltzmann, Ludwig
Campbell, Donald, Downward Causation
Chalmers, David 1996 The Conscious Mind, Oxford University Press, New York.
Davidson, Donald, "The 'Mental' and the 'Physical'", in Concepts, Theories, and the
Mind-Body Problem, Minnesota Studies in the Philosophy of Science, vol.2, p. 414
Davidson, Donald. 1970, “Mental Events,” in Experience and Theory, University of
Massachusetts Press (reprinted in Davidson (1980), pp. 207-227).
Davidson, Donald (1980). Essays on Actions and Events, Oxford: Clarendon Press.
Deacon, Terrence, Incomplete Nature.
Dennett, Daniel, Brainstorms.
Descartes, René (1642/1986). Meditations on First Philosophy, translated by John
Cottingham, Cambridge: Cambridge University Press.
Descartes, René Discourse on Method, 6:32
Doyle, Bob Free Will: The Scandal in Philosophy
Doyle, Robert O. Quantum Erasure of Classical Path Information by Photon
Interactions and Rearrangement Collisions
Einstein, Albert (1905). "A Heuristic Viewpoint on the Production and
Transformation of Light," Annalen der Physik, vol. 17, p. 133; English translation in American Journal of Physics, 33, 5, p. 368.
Feigl, Herbert (1958). "The 'Mental' and the 'Physical'" in Minnesota Studies in the
Philosophy of Science, vol. II, pp. 370-497.
Fodor, Jerry (1974). "Special Sciences, or the Disunity of Science as a Working
Hypothesis," Synthese 28; 97-115.
Fodor, Jerry (1980).
Heil, John; and Alfred Mele (eds.) (1993). Mental Causation. Oxford: Clarendon Press.
Heisenberg, Martin. "Is Free Will an Illusion?" Nature.
Hobbes, Thomas 1654 Of Liberty and Necessity
Jammer, Max Conceptual Development of Quantum Mechanics
Kim, Jaegwon (1989). "The Myth of Nonreductive Materialism," Proceedings of the
American Philosophical Association, Princeton University Press, p. 150.
Kim, Jaegwon (1998). Mind in a Physical World: An Essay on the Mind-Body Problem
and Mental Causation. Cambridge, Mass.: MIT Press.
Kim, Jaegwon (2005). Physicalism, or Something Near Enough, Princeton, Princeton
University Press.
Kuhn, Thomas (1978). Black-Body Theory and the Quantum Discontinuity, 1894-1912, Oxford University Press, New York.
Layzer, David Cosmogenesis.
Lewes, George Henry, Problems of Life and Mind
Lloyd Morgan, Conwy, Emergent Evolution
Mill, John Stuart, A System of Logic
Murphy, Nancey and Warren Brown, Did My Neurons Make Me Do It?
Murphy, Nancey, George F.R. Ellis, and Timothy O’Connor, Downward Causation and
the Neurobiology of Free Will
Nagel, Thomas. Mind and Cosmos
Pais, Abraham “Subtle Is The Lord…”
Peirce, Charles Sanders. 1892 "The Law of Mind," The Monist, vol. 2, pp. 533-559,
Collected Papers of Charles Sanders Peirce, vol VI, p.86
Planck, Max
Putnam, Hilary (1967). "The Nature of Mental States," reprinted in Mind, Language,
and Reality: Philosophical Papers, vol. 2, Cambridge: Cambridge University Press
(1975). (Functionalism)
Putnam, Hilary (1975). "The Meaning of 'Meaning'", in Putnam's Mind, Language
and Reality: Philosophical Papers 2, 1975, Cambridge: Cambridge University Press,
pp. 215-71.
Putnam, Hilary (1988). Representation and Reality, Cambridge, Mass.: MIT Press.
(Abandons functionalism)
Robb, David (2003). "Mental Causation," The Stanford Encyclopedia of Philosophy,
Edward Zalta (ed.). (link)
Searle, John.
Smart, J.J.C.
Sperry, Roger.
Yoo, Julie (2006). "Mental Causation," The Internet Encyclopedia of Philosophy,
James Fieser and Bradley Dowden (eds.). (link)
Outtakes:
Information is constant in a deterministic universe. There is "nothing new under the
sun." The creation of new information is not possible without the random chance
and uncertainty of quantum mechanics, plus the extraordinary temporal stability of
quantum mechanical structures, only possible, of course, if entropy is carried away
from the new information structures to satisfy the second law.
It is of the deepest philosophical significance that information is based on the
mathematics of probability. If all outcomes were certain, there would be no
"surprises" in the universe. Information would be conserved and a universal
constant, as some mathematicians mistakenly believe. Information philosophy
requires the ontological uncertainty and probabilistic outcomes of modern quantum
physics to produce new information.
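This can be made precise with the standard Shannon measure of information, a fact of information theory independent of this paper's argument. The "surprise" carried by an outcome of probability p, and the expected surprise over a distribution, are

\[
I(p) = -\log_2 p, \qquad H = -\sum_i p_i \log_2 p_i ,
\]

and both vanish when a single outcome is certain (p = 1), so a fully deterministic universe generates no new information.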
But at the same time, without the extraordinary stability of quantized information
structures over cosmological time scales, life and the universe we know would not
be possible. Quantum physics reveals the architecture of the universe to be discrete
rather than continuous, to be digital rather than analog.
Moreover, the "correspondence principle" of quantum mechanics and the "law of
large numbers" of statistics ensures that macroscopic objects can normally average
out microscopic uncertainties and probabilities to provide the "adequate
determinism" that shows up in all our "laws of nature."