Distinguishing quantum and Markov models of human decision making
Jerome R. Busemeyer1, Efrain Santuy1, Ariane Lambert Mogiliansky2
1Cognitive Science, Indiana University
1101 E. 10th Street, Bloomington Indiana, 47405
[email protected] [email protected]
2PSE
Paris-Jourdan Sciences Economiques
[email protected]
Abstract
A general property for empirically distinguishing Markov
and quantum models of dynamic decision making is
derived. A critical test is based on measuring the decision
process at two distinct time points, and recording the
disturbance effect of the first measurement on the second.
The test is presented within the context of a signal detection
paradigm, in which a human or robotic operator must decide
whether or not a target is present on the basis of a sequence
of noisy observations. Previously, this task has been
modeled as a random walk (Markov) evidence accumulation
process, but more recently we developed a quantum
dynamic model for this task. Parameter free predictions are
derived from each model for this test. Experimental
methods for conducting the proposed tests are described,
and previous empirical research is reviewed.
Interference is a signature of quantum processes. One way
to produce interference effects is to perturb a target
measurement by an earlier probe. In this article we develop
Markov and quantum models of human decision making
and prove that only the latter is capable of producing this
type of interference effect. The decision models are
developed within the context of a decision paradigm called
the signal detection paradigm.
Signal detection is a fundamentally important decision
problem in cognitive and engineering sciences. In this
dynamic decision situation, a human or robotic operator
monitors rapidly incoming information to decide whether
or not a potential target is present (e.g., missiles in the
military, or cancer in medicine, or malfunctions in
industry). The incoming information provides uncertain
and conflicting (noisy) evidence, but at some point in time,
a decision must be made, and the sooner the better.
Incorrectly deciding that the target is present (i.e. a false
alarm) could result in deadly consequences, but so could
the failure to detect a real target (i.e. a miss).
In statistics, Bayesian sequential sampling models provide
an optimal model for this task (DeGroot, 1970). During the
past 35 years, cognitive scientists (see Ratcliff & Smith,
2004, for a recent review) have developed random walk
and diffusion models to describe human
performance on this task. The random walk/diffusion
models are Markov models, and they are also closely
related to the optimal Bayesian sequential sampling model
(Bogacz et al., 2006).
Recently, we (Busemeyer, Townsend, & Wang, 2006)
developed a quantum dynamic model for the signal
detection task. Thus, an important question arises: what
fundamental properties distinguish these two very different
classes of models?
First, we describe the formal characteristics of a signal
detection task. Second, we describe Markov and quantum
models of signal detection. Third, we derive theoretical
properties from each model that provide a parameter free
test of the two models. Finally, we summarize
experimental results that provide initial evidence for
interference effects in human decision making.
The Signal Detection Task
Signal detection can be a very complex task. But our goal
is to describe a simple setting for testing models of this
task. Imagine, for example, a security agent who is
checking baggage for weapons. With this idea in mind, we
assume that a decision maker is faced with a series of
choice trials. On each trial, a noisy stimulus is presented
for some fixed period of time, tf, and at the end of the fixed
time period, the decision maker has to immediately decide
whether or not a target was present. The evidence
generated by the stimulus is assumed to be stationary
during the duration of the stimulus. The amount of time
given to observe the stimulus can be experimentally
manipulated.
The operator can respond by choosing one of 2n+1 ordered
levels of confidence (n is assumed to be an even number).
For example, n = 2 produces five levels that might be
labeled: −2 = no target with high confidence, −1 = no
target with low confidence, 0 = uncertain, +1 = yes target
with low confidence, +2 = yes target with high
confidence. The choice probabilities of the 2n+1
categories are estimated for each person by pooling
responses made on several thousand choice trials.
Models of Signal Detection
A random walk (Markov) model
To construct a random walk (Markov) model for this task,
we postulate a set of (2m+1) states of confidence about the
presence or absence of the target:
{ |−m⟩, |−m+1⟩, …, |−1⟩, |0⟩, |+1⟩, …, |+m−1⟩, |+m⟩ }.
The state |j⟩ can be interpreted as a (2m+1) column vector
with zeros everywhere, except that it has 1.0 located at the
row corresponding to index j. Positive indices, j > 0,
represent a state of evidence favoring target present;
negative indices, j < 0, represent a state of evidence for
target absent, and zero represents a neutral state of
evidence. The number of states could be as small as the
number of response categories (m = n), but the participant
may be able to employ a more refined scale of confidence,
and so the number could also be much larger (m = 10n).
At the start of a single trial, the process starts in some
particular state, for example the neutral state |0. Then as
time progresses within the trial, the process either steps up
one index, or steps down one index, or stays at the same
state, depending on the information or evidence that is
sampled at that moment. The process continues moving up
or down the evidence scale until the fixed time period tf
ends and a decision is requested. At that point, a choice
and a rating are made based on the state existing at the time
of decision.
The initial state of the decision maker on each choice trial
is not known, and this initial state may even change from
trial to trial. The probability of starting in state |j⟩ is
denoted Pj(0), and the (2m+1) column vector
P(0) = Σj Pj(0)|j⟩
represents the initial probability distribution over states, so
that Σj Pj(0) = 1.
During an observation period of time t of a single trial, the
probability distribution over states, P(t), evolves in a
direction guided by the incoming evidence from the
stimulus. This evolution is determined by a transition
matrix T(t) as follows:
P(t) = T(t)·P(0).
The entry in row i and column j of T(t), Tij(t), determines
the probability of transiting to state |i⟩ from state |j⟩ after a
period of time t. The transition matrix must satisfy
0 ≤ Tij(t) ≤ 1 and Σi Tij(t) = 1 to guarantee that P(t) remains a
probability distribution.
The transition matrix satisfies the group property
T(tf) = T(tf − ti)T(ti),
and from this it follows that it satisfies the Kolmogorov
forward equation:
dT(t)/d(t) = KT(t),
which has the solution
T(t) = exp(tK).
The matrix K is called the intensity matrix, with element Kij
determining the rate of change in probability to state |i⟩
from state |j⟩. The intensities must satisfy Kij ≥ 0 for i ≠ j
and Σi Kij = 0 to guarantee that T(t) remains a transition
matrix.
The random walk model is a special case of a Markov
process which assumes that intensities are positive only
between adjacent states: Kij = 0 for |i−j| > 1. For |j| < m, we
make the following assumptions: If the target is present
then we assume that Kj+1,j > Kj−1,j; if the target is absent then
we assume that Kj−1,j > Kj+1,j. At the boundaries, we set
K+m−1,+m > 0 and K−m+1,−m > 0 (reflecting boundaries).
Note that the discrete state random walk model is closely
related to a continuous state diffusion model. If we allow
the number of states to become arbitrarily large, and at the
same time let the increment between states become
arbitrarily small, then the distribution produced by the
random walk model converges to the distribution produced
by the diffusion model (see Bhattacharya & Waymire,
1990, pp. 386-388).
The response measured at time tf is determined by the
following choice rule. The entire set of states is partitioned
into 2n+1 subsets, with the first subset defined by
R−n = { |j⟩ | −m ≤ j ≤ c−n }, and subsequent subsets defined by
Rk = { |j⟩ | ck−1 < j ≤ ck } with cn = +m. If the state of confidence at
time tf equals |j⟩ ∈ Rk, then the response category k is
selected. The probability of this event is simply
Pr[R(tf) = k] = Σj∈Rk Pj(tf).
For later comparisons with the quantum model, it will be
helpful to redefine the computation of the response
probabilities in matrix notation. Define Mk as a projection
matrix with elements Mjj = 1 if |j⟩ ∈ Rk, and zero otherwise.
Note that Σk Mk = I, where I is the identity matrix. Then
MkP(tf) is the projection of the probability vector P(tf) onto
the subspace defined by the states that are assigned to
category k. Define 1 as a (2m+1) row vector with all entries
equal to one. Then the desired probability equals the sum
of the elements in the projection:
Pr[R(tf) = k] = 1·Mk·P(tf).
A quantum dynamic decision model
To construct a quantum model for this task, we again
postulate a set of (2m+1) states of confidence about the
presence or absence of the target:
{ |−m⟩, |−m+1⟩, …, |−1⟩, |0⟩, |+1⟩, …, |+m−1⟩, |+m⟩ }.
In this case, we assume that these states form an
orthonormal basis for a (2m+1) dimensional Hilbert space.
More specifically, ⟨i|j⟩ = 0 for every i ≠ j and ⟨i|i⟩ = 1 for all
i, where ⟨i|j⟩ denotes the inner product between two
vectors. As before, states with positive indices represent
evidence favoring target present, states with negative
indices represent evidence for target absent, and zero
represents a neutral state of evidence.
The initial state of the quantum system is represented by a
superposition of the confidence states
|ψ(0)⟩ = Σj |j⟩⟨j|ψ(0)⟩ = Σj ψj(0)|j⟩.
The coefficient ψj(0) = ⟨j|ψ(0)⟩ is the probability amplitude
of being in state |j⟩ at the start of the trial. The complex
(2m+1) column vector ψ(0) represents the initial
probability amplitude distribution over the states. The
coordinate in the j-th row of ψ(0) is ψj(0) = ⟨j|ψ(0)⟩. The
initial state vector has unit length so that ⟨ψ(0)|ψ(0)⟩ =
ψ(0)†ψ(0) = 1.0. The initial probability amplitude
distribution over states represents the decision maker's
initial beliefs at the beginning of the trial.
During an observation period of time t of a single trial, the
probability amplitude distribution over states, ψ(t), evolves
in a direction guided by the incoming evidence from the
stimulus. This evolution is determined by a unitary matrix
U(t) as follows:
ψ(t) = U(t)·ψ(0).
The entry in row i and column j of U(t), Uij(t), determines
the probability amplitude of transiting to state |i⟩ from state
|j⟩ after a period of time t. The unitary matrix must satisfy
U(t)†U(t) = I (where I is the identity matrix) to guarantee
that ψ(t) remains unit length.
The unitary matrix satisfies the group property
U(tf) = U(tf − ti)U(ti),
and from this it follows that it satisfies the Schrödinger
equation:
dU(t)/dt = −i·H·U(t),
which has the solution
U(t) = exp(−itH).
The complex number i = √−1 is needed to guarantee that
U(t) is unitary. The matrix H is called the Hamiltonian
matrix with element Hij determining the rate of change in
probability amplitude to state |i from state |j. The
Hamiltonian matrix must be Hermitian (H = H†) to
guarantee that U(t) remains unitary.
To construct a random walk analogue, we assume that
Hamiltonian elements are nonzero only between adjacent
states: Hi,j = 0 for |i−j| > 1. For rows |j| < m, we make the
following assumptions: Hj−1,j = Hj+1,j = σ and Hjj = μj. If
the target is present, then we assume that μj+1 > μj; if the
target is absent then we assume that μj+1 < μj. At the
boundaries we set H+m−1,+m = σ = H−m+1,−m and H−m,−m = μ−m,
Hmm = μm. This choice of Hamiltonian corresponds to a
crystal model discussed in Feynman et al. (1966, Ch. 16).
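The sketch below builds a tridiagonal Hamiltonian of this form and checks that U(t) = exp(−itH) is unitary; the coupling and potential values are illustrative, and the symbols σ and μj follow the reading used above.

```python
import numpy as np
from scipy.linalg import expm

def hamiltonian(m, sigma, potential):
    """Tridiagonal (crystal-model) Hamiltonian over 2m+1 confidence states.

    sigma     = coupling between adjacent states (off-diagonal entries)
    potential = function giving the diagonal entry for confidence index j
    The matrix is real and symmetric, hence Hermitian, so exp(-i t H) is unitary.
    """
    n_states = 2 * m + 1
    H = np.zeros((n_states, n_states))
    for row in range(n_states):
        j = row - m                        # confidence index in -m..+m
        H[row, row] = potential(j)
        if row + 1 < n_states:
            H[row + 1, row] = sigma
            H[row, row + 1] = sigma
    return H

m = 10
# Increasing potential along j, as assumed when the target is present (illustrative).
H = hamiltonian(m, sigma=0.5, potential=lambda j: j / (2 * m + 1))
U = expm(-1j * 3.0 * H)                                             # U(t) = exp(-i t H) at t = 3
print(np.allclose(U.conj().T @ U, np.eye(2 * m + 1)))               # unitarity check -> True
```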
Note that the discrete state quantum walk model is closely
related to a continuous state quantum model. Once again, if
we allow the number of states to become arbitrarily large,
and at the same time let the increment between states
become arbitrarily small, then the distribution produced by
the discrete quantum model converges to the distribution
produced by the continuous quantum model (see Feynman
et al., 1966, Ch. 16). Also note that the Hamiltonian
defined above is closely related to the Hamiltonian that is
commonly used to model the one dimensional movement
of a particle in physics:
H = P²/2m + V(x).
The off-diagonal elements of the Hamiltonian correspond
to the P² operator, and the diagonal elements correspond to
the potential function V(x) (see Feynman et al., 1966, Ch.
16).
The quantum response probabilities measured at time tf are
determined as follows. Once again the entire set of states is
partitioned into 2n+1 subsets, with the first subset defined by
R−n = { |j⟩ | −m ≤ j ≤ c−n }, and subsequent subsets defined
by Rk = { |j⟩ | ck−1 < j ≤ ck } with cn = +m. The probability
of choosing category k at time tf is simply
Pr[R(tf) = k] = Σj∈Rk |ψj(tf)|².
As before, it will be helpful to describe these computations
in matrix formalism. Define Mk as a projection matrix with
elements Mjj = 1 if |j⟩ ∈ Rk, and zero otherwise. Note again
that Σk Mk = I, where I is the identity matrix. Then
Mkψ(tf) is the projection of the probability amplitude
vector ψ(tf) onto the subspace defined by the states that are
assigned to category k. Then the probability is given by
the squared length of the projection:
Pr[R(tf) = k] = |Mkψ(tf)|²
= ψ(tf)†Mk†Mkψ(tf) = ψ(tf)†Mkψ(tf).
The disturbance effect of measurement.
We modify the signal detection task by asking the decision
maker to report a confidence rating at two time points, ti
initially and later tf. The initial confidence measurement
will cause a ‘state collapse’ with both the Markov and the
quantum models. But how does this change the final
distribution of probabilities? This simple manipulation has
profoundly different effects on the two models. The
quantum model exhibits an interference effect that does not
occur with the Markov model.
The general idea is to compare the results from two
experimental conditions at time tf. Under the single
measurement condition (C1), we can estimate the
probability distribution over the category responses at time
tf , that is Pr[ R(tf) = k | C1]. Under the double measurement
condition (C2), we can obtain another estimate of the
probability distribution over the category responses at time
tf using the law of total probability
Pr[ R(tf) = k | C2]
= Σk′ Pr[ R(tf) = k ∧ R(ti) = k′ ]
= Σk′ Pr[ R(tf) = k | R(ti) = k′ ] Pr[ R(ti) = k′ ].
Effects of measurement on the Markov model.
According to the Markov model, the distribution over the
confidence states immediately before measurement at time
ti will be
P(ti) = T(ti)·P(0).
The probability of choosing category k’ is
Pr[ R(ti) = k′ ] = 1·Mk′·P(ti).
If category k’ is selected at this point, then the probability
distribution over states collapses to a new distribution after
measurement
P(ti|k′) = Mk′P(ti) / [1·Mk′·P(ti)].
The joint probability of selecting category k’ and then
reaching one of the (2m+1) final states at time tf equals
[T(tf − ti)P(ti|k′)]·Pr[R(ti) = k′] = T(tf − ti)·Mk′·P(ti).
The joint probability of selecting category k’ and then
selecting category k at time tf equals
Pr[ R(tf) = k ∧ R(ti) = k′ ] = 1·Mk·T(tf − ti)·Mk′·P(ti).
The marginal probability of selecting category k at time tf
equals
Pr[ R(tf) = k | C2]
= Σk′ Pr[ R(tf) = k ∧ R(ti) = k′ ]
= Σk′ 1·Mk·T(tf − ti)·Mk′·P(ti)
= 1·Mk·T(tf − ti)·Σk′ Mk′·P(ti)
= 1·Mk·T(tf − ti)·(Σk′ Mk′)·P(ti)
= 1·Mk·T(tf − ti)·I·P(ti)
= 1·Mk·T(tf − ti)·P(ti)
= 1·Mk·T(tf − ti)·T(ti)·P(0)
= 1·Mk·T(tf)·P(0)
= Pr[ R(tf) = k | C1].
Thus the first measurement has no effect on the marginal
distribution of the second measurement. The law of total
probability is satisfied. However, the first measurement
does influence the distribution of the second measurement,
conditioned on the observed value of the first
measurement. So it is important for us to compare models
using the marginal distribution.
It is important to note that the above result holds quite
generally. It holds for any number of confidence states (as
long as this number is equal to or greater than the number of
response categories). It holds for any transition matrix T(t)
and so it is not restricted to any particular form of intensity
matrix. Finally, it holds for any initial probability
distribution across the confidence states. This property
tests a basic assumption of linearity that is implicit in the
Markov model:
1·Mk·T(t)·[p·P1(0) + q·P2(0)]
= p·[1·Mk·T(t)·P1(0)] + q·[1·Mk·T(t)·P2(0)].
Effects of measurement on the quantum model.
According to the quantum model, the probability amplitude
distribution over the confidence states immediately before
measurement at time ti equals
ψ(ti) = U(ti)·ψ(0).
The probability of choosing category k′ is
Pr[ R(ti) = k′ ] = |Mk′ψ(ti)|².
If category k′ is selected at this point, then the amplitude
distribution over states collapses to a new distribution after
measurement
ψ(ti|k′) = Mk′ψ(ti) / |Mk′ψ(ti)|.
The probability amplitude distribution over the (2m+1)
final states at time tf, given k′ observed at time ti, equals
U(tf − ti)·ψ(ti|k′)
= U(tf − ti)·[Mk′ψ(ti)/|Mk′ψ(ti)|].
The projection of this final distribution on the basis states
of response category k at time tf equals
Mk·U(tf − ti)·ψ(ti|k′)
= Mk·U(tf − ti)·[Mk′ψ(ti)/|Mk′ψ(ti)|].
The joint probability of selecting category k’ and then
selecting category k at time tf equals
Pr[ R(tf) = k ∧ R(ti) = k′ ]
= Pr[ R(ti) = k′ ]·|Mk·U(tf − ti)·[Mk′ψ(ti)/|Mk′ψ(ti)|]|²
= |Mk′ψ(ti)|²·(|Mk·U(tf − ti)·Mk′ψ(ti)|²)/|Mk′ψ(ti)|²
= |Mk·U(tf − ti)·Mk′ψ(ti)|²
= |Mk·U(tf − ti)·Mk′·U(ti)·ψ(0)|².
The marginal probability of selecting category k at time tf
equals
Pr[ R(tf) = k | C2]
= Σk′ Pr[ R(tf) = k ∧ R(ti) = k′ ]
= Σk′ |Mk·U(tf − ti)·Mk′·U(ti)·ψ(0)|²
≠ |Σk′ Mk·U(tf − ti)·Mk′·U(ti)·ψ(0)|²
= |Mk·U(tf − ti)·Σk′ Mk′·U(ti)·ψ(0)|²
= |Mk·U(tf − ti)·(Σk′ Mk′)·U(ti)·ψ(0)|²
= |Mk·U(tf − ti)·I·U(ti)·ψ(0)|²
= |Mk·U(tf − ti)·U(ti)·ψ(0)|²
= |Mk·U(tf)·ψ(0)|²
= Pr[ R(tf) = k | C1].
The two results are not equal, and the first measurement
does affect the marginal distribution of the second
measurement. The law of total probability is not satisfied.
As noted above, it is important for us to compare models
using the marginal distribution from the two conditions.
The difference between the two marginal distributions,
d(tf − ti) = Pr[ R(tf) = k | C2] − Pr[ R(tf) = k | C1],
is called the interference effect, and it depends strongly on
the time lag (tf ti).
It is important to note that the above result holds quite
generally. Once again, it holds for any number of
confidence states (as long as this number is equal to or greater
than the number of response categories). It holds for any
unitary matrix U(t) and so it is not restricted to any
particular form of Hamiltonian matrix. Finally, it holds for
any initial probability amplitude distribution across the
confidence states. This property tests a basic assumption of
nonlinearity that is implicit in the quantum model:
|Mk·U(t)·[p·ψ1(0) + q·ψ2(0)]|²
≠ p·|Mk·U(t)·ψ1(0)|² + q·|Mk·U(t)·ψ2(0)|².
Example of interference effects.
To illustrate the predicted interference effect of the
quantum model, the predictions for the single and double
conditions were computed using the following parameters:
(2m+1) = 51, 2n+1 = 3, c−1 = −9, c+1 = +8, ψ−1(0) = ψ0(0)
= ψ+1(0) = 1/√3, σ = .5, μj = j/51, tf = ti + 50. The differences
between the double versus the single conditions are shown
in the table below for various ti. As can be seen in this
table, the interference effects can be quite large.
Interference Effect

Initial time     k = −1     k = 0      k = +1
ti = 0            0          0          0
ti = 100         .0385      .0251     −.0636
ti = 200         .0177      .0865     −.1043
ti = 300         .0592      .1509     −.2101
Note: These predictions were computed from the quantum
equations using Matlab’s matrix exponential function.
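A sketch of how predictions of this kind can be computed with a matrix exponential is given below. It uses the parameter values listed above as I read them (the cutoffs and the σ and μj assignments should be treated as approximate), so it illustrates the computation rather than guaranteeing the exact table entries.

```python
import numpy as np
from scipy.linalg import expm

# Example parameters as read from the text; treat the exact numbers as illustrative.
n_states = 51                        # 2m + 1 = 51
m = (n_states - 1) // 2
sigma = 0.5                          # off-diagonal Hamiltonian entries
indices = np.arange(-m, m + 1)
mu = indices / n_states              # diagonal potential mu_j = j / 51

H = np.diag(mu).astype(complex)
H += sigma * (np.eye(n_states, k=1) + np.eye(n_states, k=-1))

# Three response regions k = -1, 0, +1 with cutoffs at -9 and +8 (as read above).
regions = [indices <= -9, (indices > -9) & (indices <= 8), indices > 8]
Ms = [np.diag(r.astype(float)) for r in regions]

# Initial superposition over the three middle states, each with amplitude 1/sqrt(3).
psi0 = np.zeros(n_states, dtype=complex)
psi0[m - 1:m + 2] = 1 / np.sqrt(3)

def single(t_i, lag=50):
    """Single-measurement condition: measure only at t_f = t_i + lag."""
    psi = expm(-1j * (t_i + lag) * H) @ psi0
    return np.array([np.linalg.norm(M @ psi) ** 2 for M in Ms])

def double(t_i, lag=50):
    """Double-measurement condition: collapse at t_i, measure again at t_f, marginalize."""
    U_i, U_fi = expm(-1j * t_i * H), expm(-1j * lag * H)
    p = np.zeros(3)
    for Mp in Ms:
        collapsed = Mp @ U_i @ psi0
        p += np.array([np.linalg.norm(M @ U_fi @ collapsed) ** 2 for M in Ms])
    return p

for t_i in (0, 100, 200, 300):
    print(t_i, double(t_i) - single(t_i))   # interference effect for k = -1, 0, +1
```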
Proposed method for investigating the interference effect.
An experiment is underway to investigate the disturbance
effect of measurement. A signal detection task was designed
that requires an average decision time of at least 1 second.
Within this amount of time, a probe measurement can be
made, say within 500 msec, and a final measurement can be
obtained after 1000 msec. The participants are asked to view
a complex visual scene and decide whether or not a target
object (e.g., a bomb) is present, similar to the task faced by
security agents at airports or government buildings. The time
interval between the probe and final measurement will be
manipulated, as well as the discriminability of the stimulus
and the prior probability of the target. According to our
quantum model, the time interval would affect the delay
parameter (tf − ti), the discriminability would affect the
potential of the Hamiltonian, μj, and the prior probability
would affect the initial state, ψ(0).
Related empirical tests of interference effects
in human decision making.
Although we are not aware of any experimental tests of
interference effects that were conducted using the single
and double measurement conditions described above,
several related lines of evidence have been reported by past
researchers.
Townsend, Silva, and Spencer-Smith (2000) conducted a
closely related test of interference. Decision makers were
presented with faces belonging to one of two categories (e.g.,
good guys, bad guys) and were asked to choose between two
actions (e.g., attack or withdraw). Two conditions were
examined: In the category-decision task, they were asked to
first categorize the face and then decide how to act; in the
decision-only task, they were only asked to decide how to act
and no categorization was requested.
This paradigm was used to investigate the interference
effect of the category task on the decision task. Townsend
et al. (2000) reported that 72 out of 276 participants
produced statistically significant deviations from the
predictions of a Markov model. Busemeyer & Wang
(2006) explained these deviations using a quantum
dynamic decision model.
Shafir and Tversky (1992) tested a property called the sure-thing
principle, which could be re-interpreted as a test of
interference effects (Busemeyer, Matthew, & Wang, 2006).
Decision makers were asked to play a prisoner's dilemma
game under three conditions: Knowing the opponent has
defected, knowing the opponent has cooperated, and not
knowing the opponent’s action. This paradigm can be used
to determine the interference effect produced by
knowledge of the opponent on the choice of the decision
maker. Decision makers defected 97% of the time when
the other was known to defect; 84% of the time when the
other was known to cooperate; and 63% of the time when
the other’s action was unknown. Busemeyer et al. (2006)
showed that these results also violate the predictions of a
Markov model, and they explained these findings using a
quantum dynamic decision model.
Conte et al. (2006) conducted an experiment to test
interference effects in perception. The task required
individuals to judge whether or not two lines were
identical. The lines were in fact identical, but they were
presented within a context that is known to produce a
perceptual illusion of a difference. Two conditions were
examined: In one condition, line judgment task B was
presented alone; in another condition, line judgment task A
preceded line judgment task B. The results showed
significant differences in the response proportions to task
B. These results were interpreted within a general quantum
measurement framework developed by Khrennikov (2007).
Atmanspacher, Filk, & Romer (2004) examined a quantum
Zeno effect in a bistable perception task. The task involves
the presentation of a Necker cube, which is the projection
of a cube onto a plane. The projection can be perceptually
interpreted as being viewed from a top or bottom
viewpoint, and the viewer experiences a spontaneous
switch from one view to another. The time period between
switches in experienced viewpoints is the primary
measurement of interest. According to quantum theory,
this time period can be extended by repeated measurements
of the perception. Atmanspacher et al. (2004) report some
experimental evidence related to this effect, and they
explain these results using a quantum dynamic model.
Comparing Markov and quantum models
There is a surprising amount of similarity between the
Markov and quantum models. The basic equations appear
almost the same if one simply replaces probabilities with
complex amplitudes. The initial states of both models can
be represented as a linear combination of basis states
(mixed versus superposition). The transition operators that
evolve the states both obey simple group properties which
lead to deterministic linear differential equations
(Kolmogorov versus Schrödinger). The solutions to the
differential equations are matrix exponentials for both
models. Measurement produces a collapse of the states for
both models.
Based on these similarities, one might suspect that the two
models are indistinguishable. This is not the case because
there are several key differences between the models. First,
the Markov model operates on real valued probabilities
(bounded between zero and one), whereas the quantum
model operates on complex probability amplitudes.
Second, the intensity matrix for the Markov model obeys
different constraints than the Hamiltonian for the quantum
model. Third, the Schrödinger equation introduces a
complex multiplier to maintain a unitary operation. Finally,
the Markov model uses a linear projection to determine the
final probabilities, but the quantum model uses a nonlinear
operation (the squared projection) to determine the final
probabilities. The latter is crucial for the interference effect
examined in this paper. The interference effect produced
by the quantum model cannot be reproduced by the
Markov model.
One might speculate that the quantum model is more
general than the Markov model. However, this is not the
case either. In particular, the intensity matrix is less
constrained than the Hamiltonian matrix. The Hermitian
constraint on the Hamiltonian for the quantum model
restricts its ability to perform like the Markov model. The
dynamics produced by the two models are quite different.
The Markov model is analogous to blowing sand in a
direction, and with time a sand pile builds up to a stable
equilibrium distribution. The quantum model is analogous
to blowing water in a direction, causing a wave to splash
and oscillate back and forth. In sum, despite their
similarity, one model is not a special case of the other.
Summary and Concluding Comments
An incipient but growing number of researchers have
begun to apply quantum principles to human decision
making. Bordley (1998) proposed quantum probability
rules to explain paradoxes in human probability judgments.
Mogiliansky, Zamir, and Zwirn (2004) used non-commutative
measurement operators to explain cognitive
dissonance effects in human decisions. Gabora and Aerts
(2002) developed a quantum theory of conjunctive
judgments. Aerts (2007) extended these ideas to
disjunctive judgments. Ricciardi (2007) formulated a
quantum model to explain the conjunctive fallacy in
probability judgments. La Mura (2006) has proposed a
theory of expected utility based on quantum principles.
Applications to game theory have been proposed by Eisert,
Wilkens, and Lewenstein (1999) and Piotrowski &
Sladkowski (2003), and experimental tests of these ideas
have been carried out by Chen et al. (2006). More
broadly, quantum principles have also been applied to
price theory in economic problems by Haven (2005).
All of the above applications follow a quantum decision
program of research that uses only the mathematical
principles of quantum theory to explain human decision
making behavior (Aerts et al., 2003; Atmanspacher et al.,
2002; Khrennikov, 2007). No attempt or assumption is
made at this point about the possible neural basis for these
computations. This program differs radically from a more
reductionist program that attempts to explain neural
computations using quantum physical models (Penrose,
1989; Pribram, 1993; Woolf & Hameroff, 2001).
Acknowledgments
This research was supported by NIMH R01 MH068346 to
the first author.
References
Aerts, D. (2007) Quantum interference and superposition
in cognition: Development of a theory for the
disjunction of concepts. arXiv:0705.0975v1 [quant-ph],
7 May 2007.
Aerts, D., Broekaert, J., & Gabora, L. (2003) A case for
applying an abstracted quantum formalism to cognition.
In R. Campbell (Ed.) Mind in Interaction. Amsterdam:
John Benjamins.
Atmanspacher, H., Romer, H., & Walach, H. (2002) Weak
quantum theory: Complementarity and entanglement in
physics and beyond. Foundations of Physics, 32, 379-406.
Atmanspacher, H., Filk, T., & Romer, H. (2004) Quantum
Zeno features of bistable perception. Biological
Cybernetics, 90, 33-40.
Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J.
D. (2006) The physics of optimal decision making: A
formal analysis of models of performance in two-alternative
forced-choice tasks. Psychological Review, 113, 700-765.
Bordley, R. F. (1998) Quantum mechanical and human
violations of compound probability principles: Toward a
generalized Heisenberg uncertainty principle.
Operations Research, 46, 923-926.
Busemeyer, J. R., Wang, Z. & Townsend, J. T. (2006)
Quantum dynamics of human decision making. Journal
of Mathematical Psychology, 50 (3), 220-241
Busemeyer, J. R., Matthew, M., & Wang, Z. (2006) An
information processing explanation of disjunction
effects. In R. Sun and N. Miyake (Eds.) The 28th Annual
Conference of the Cognitive Science Society and the 5th
International Conference of Cognitive Science (pp. 131-135).
Mahwah, NJ: Erlbaum.
Chen, K., Hogg, T., & Huberman, B. A. (2006) Behavior
of multi-agent protocols using quantum entanglement.
Quantum Interaction: Papers from the AAAI Spring
Symposium. Technical Report SS-07-08. AAAI Press.
Conte, E., Todarello, O., Federici, F., Vitiello, M.,
Lopane, A., Khrennikov, A., & Zbilut, J. P. (2006)
Chaos, Solitons and Fractals, 31, 1076.
DeGroot, M. H. (1970) Optimal statistical decisions. New
York: McGraw-Hill.
Eisert, J., Wilkens, M., & Lewenstein, M. (1999) Quantum
games and quantum strategies. Physical Review Letters,
83, 3077-3080.
Gabora, L. & Aerts, D. (2002) Contextualizing concepts
using a mathematical generalization of the Quantum
formalism. Journal of Experimental and Theoretical
Artificial Intelligence, 14, 327-358.
Haven, E. (2005) Pilot-wave theory and financial option
Pricing. International Journal of Theoretical Physics,
44 (11), 1957-1962.
Khrennikov, A. (2007) Can quantum information be
processed by macroscopic systems? Quantum
Information Theory, in press.
La Mura, P. (2006) Projective expected utility. Paper
presented at the FUR 2006 Meeting, Rome, Italy.
Mogiliansky, A. L., Zamir, S., & Zwirn, H. (2004) Type
indeterminacy: A model of the KT (Kahneman
Tversky) - man. Paper presented at Foundations Utility
Risk and Decision Theory XI, Paris, France.
Penrose, R. (1989) The emperor’s new mind. Oxford
University Press.
Piotrowski, E. W., & Sladkowski, J. (2003) An invitation to
quantum game theory. International Journal of
Theoretical Physics, 42, 1089.
Pribram, K. H. (1993) Rethinking neural networks:
Quantum fields and biological data. Hillsdale, NJ:
Erlbaum.
Ratcliff, R., & Smith, P. L. (2004) A comparison of
sequential sampling models for two-choice reaction
time. Psychological Review, 111, 333-367.
Shafir, E. & Tversky, A. (1992) Thinking through
uncertainty: nonconsequential reasoning and choice.
Cognitive Psychology, 24, 449-474.
Townsend, J. T., Silva, K. M., Spencer-Smith, J., &
Wenger, M. (2000) Exploring the relations between
categorization and decision making with regard to
realistic face stimuli. Pragmatics and Cognition, 8 (1),
83-105.
Woolf, N. J., & Hameroff, S. R. (2001) A quantum approach to
visual consciousness. Trends in Cognitive Sciences, 5 (11),
472-478.