MATH 339 Class Notes
Evolutionary Game Theory
Shahrokh Naqvi
Date: At some point in history
1 Evolutionary games
Some definitions:
Evolution is the change in the gene frequencies over time.
Genotype is the set of genes carried by an individual.
Phenotype is the physical characteristic of an individual. We assume that phenotype is
determined by the genotype (in some way).
Individuals interact with others; these interactions change birth and death rates. Hence, selection acts
at the phenotypic level.
Fitness is the expected number of offspring, usually denoted w.
Example (Maynard Smith and Price): Suppose two individuals are contesting a resource of value V .
This could be an animal carcass, or a pile of rocks, etc. Obtaining this resource results in an
increase in fitness. Those who do not get this resource do not necessarily have zero fitness.
Each has two options:
Hawk: Escalate the encounter.
Dove: Display or intimidate, but retreat if the partner attacks.
This is, of course, the Hawk-Dove game.
                 H                      D
H    ((V − C)/2, (V − C)/2)          (V, 0)
D             (0, V)              (V/2, V/2)
Notice that this game is symmetric. We will assume symmetry from here on in. Here C is
the cost of fighting.
Suppose we have an effectively infinite population (large enough to ignore stochastic/random
effects) of individuals adopting strategies Hawk and Dove, pairing up randomly.
Let:
• p be the frequency of Hawks.
• w(H) be the fitness of an individual playing Hawk.
• w(D) be the fitness of an individual playing Dove.
• π(H, D) be the change in an H player's fitness after an encounter with a D player.
If everyone in the population engages in one encounter,
w(H) = w0 + pπ(H, H) + (1 − p)π(H, D)
w(D) = w0 + pπ(D, H) + (1 − p)π(D, D)
Individuals go on to reproduce, and the frequency of H in the next generation is:

p′ = p w(H) / [ p w(H) + (1 − p) w(D) ]
Substituting values for V, C, w0 , and p determines how the population behaves over time,
which by itself is not all that interesting.
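The recursion above can be iterated numerically. A minimal sketch; the values V = 2, C = 4, w0 = 5 and the starting frequency 0.9 are illustrative assumptions, not values from the notes:

```python
# Illustrative sketch: iterate p' = p w(H) / (p w(H) + (1-p) w(D)).
# V=2, C=4, w0=5 and the starting frequency are assumed for demonstration.

def next_p(p, V=2.0, C=4.0, w0=5.0):
    """One generation of the Hawk frequency under the recursion above."""
    wH = w0 + p * (V - C) / 2 + (1 - p) * V      # w(H)
    wD = w0 + p * 0.0 + (1 - p) * V / 2          # w(D)
    return p * wH / (p * wH + (1 - p) * wD)

p = 0.9
for gen in range(200):
    p = next_p(p)
print(round(p, 4))  # settles near V/C = 0.5
```

With these values the Hawk frequency drifts toward V/C, foreshadowing the mixed-strategy analysis below.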
We want to develop a notion of one strategy being “better” than another. Here’s such
a condition:
If a strategy A is better than strategy B then the frequency of those that adopt A
should increase over time, while the frequency of B players should decrease. We call such
a “better strategy” stable.
We analyze the relative strength of strategies via mutant invasion analysis.
1. Suppose the population consists of entirely A strategists.
2. Introduce a small proportion p of B strategists.
If A is stable, w(A) > w(B). That is, B strategists cannot invade a population of A
strategists. We have:
w(A) = w0 + pπ(A, B) + (1 − p)π(A, A)
> w0 + pπ(B, B) + (1 − p)π(B, A) = w(B)
For small p (i.e., p ≪ 1), this inequality is equivalent to either:
π(A, A) > π(B, A)
or
π(A, A) = π(B, A) and π(A, B) > π(B, B)
If A satisfies these conditions it is called an evolutionarily stable strategy (ESS). This definition
only holds under our assumptions:
• asexual inheritance
• pairwise contests in a large, well-mixed population
Back to Hawks and Doves. Clearly D is not an ESS. If V ≥ C, then H is an ESS. If V < C
then neither is an ESS. How does this compare to the Nash equilibrium? Is there an analogue
of mixed-strategy Nash equilibria (MSNE) for ESSs?
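The two-part ESS condition is easy to check mechanically. A sketch applied to the pure Hawk-Dove strategies; the V and C values are illustrative:

```python
# Sketch of the two-part ESS test: A is an ESS against mutant B if
# pi(A,A) > pi(B,A), or pi(A,A) == pi(B,A) and pi(A,B) > pi(B,B).

def hawk_dove(V, C):
    """Hawk-Dove payoffs: pi[(X, Y)] is the payoff to X against Y."""
    return {("H", "H"): (V - C) / 2, ("H", "D"): V,
            ("D", "H"): 0.0, ("D", "D"): V / 2}

def is_ess(A, B, pi):
    if pi[(A, A)] > pi[(B, A)]:
        return True
    return pi[(A, A)] == pi[(B, A)] and pi[(A, B)] > pi[(B, B)]

print(is_ess("H", "D", hawk_dove(4, 2)))  # True: V >= C, H is an ESS
print(is_ess("H", "D", hawk_dove(2, 4)))  # False
print(is_ess("D", "H", hawk_dove(2, 4)))  # False: neither pure strategy is an ESS
```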
2 Mixed strategies
Last day we considered the Hawk-Dove game. We found that if V < C then neither pure strategy
can be an ESS. But what if we permit mixed strategies?
Suppose that our genes are no longer “Play H (or D)” but instead “Play H with probability q and D with probability 1 − q”. Call this gene I. Is there a value q∗ that makes I
an ESS?
Theorem: If I is a mixed ESS that includes the pure strategies A, B, etc., each with
non-zero probability, then:
π(A, I) = π(B, I) = . . .
(No surprises!)
Hence if there is such a q, we can find it by solving:
π(A, I) = π(B, I)
Back to our Hawk Dove example, set I = qH + (1 − q)D. Then,
π(H, I) = π(H, qH + (1 − q)D) = qπ(H, H) + (1 − q)π(H, D)
= qπ(D, H) + (1 − q)π(D, D) = π(D, I)
From the last day, this equals:

(1/2)(V − C) q + V (1 − q) = (1/2) V (1 − q)
We can solve this for q:

q∗ = V / C

But what values of V and C will make this an ESS? We need to check the definition of ESS.
Since π(H, I) = π(D, I) = π(I, I), we require π(I, D) > π(D, D) and π(I, H) > π(H, H).
Let’s see . . . .
You can check that π(I, H) > π(H, H) exactly when V < C, while π(I, D) > π(D, D) holds
automatically (the difference is qV /2 > 0). So when the cost of fighting exceeds the value
of the resource, we expect to see mixed strategies.
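A numerical sanity check of these conditions, under the assumed values V = 2 and C = 4 (so q∗ = V/C = 1/2):

```python
# Sanity check of the mixed ESS I = qH + (1-q)D with assumed V=2, C=4.
V, C = 2.0, 4.0
q = V / C
pi = {("H", "H"): (V - C) / 2, ("H", "D"): V,
      ("D", "H"): 0.0, ("D", "D"): V / 2}

def vs_I(X):
    """pi(X, I): payoff to pure strategy X against the mix I."""
    return q * pi[(X, "H")] + (1 - q) * pi[(X, "D")]

def I_vs(Y):
    """pi(I, Y): payoff to the mix I against pure strategy Y."""
    return q * pi[("H", Y)] + (1 - q) * pi[("D", Y)]

print(vs_I("H") == vs_I("D"))        # True: the indifference condition
print(I_vs("H") > pi[("H", "H")])    # True when V < C
print(I_vs("D") > pi[("D", "D")])    # True for any 0 < q <= 1
```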
Note: everyone in this mixed-strategy population is the same: they all have genotype I.
Is it possible to have a population consisting of both strictly H and D? Such a population is called a polymorphism. We would like to find a stable polymorphism.
Definition: A system is at a stable equilibrium if after slight perturbations away from
this system, the system goes back to this equilibrium.
Suppose that the proportion of H in the population is p. If p is a stable polymorphism,
then w(H) = w(D).
We can solve the above and find p = V /C. For 2 × 2 games the mixed-ESS and polymorphism
interpretations are equivalent in that their equilibria are equal and both stable.
This is not necessarily the case for larger games. . .
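Both the equal-fitness condition and the stability claim can be checked numerically. The values V = 2, C = 4, w0 = 5 and the perturbation size are assumptions for illustration:

```python
# Check, with assumed V=2, C=4, w0=5, that p = V/C gives w(H) = w(D)
# and that a perturbed population returns to it under the recursion.
V, C, w0 = 2.0, 4.0, 5.0

def fitnesses(p):
    wH = w0 + p * (V - C) / 2 + (1 - p) * V
    wD = w0 + (1 - p) * V / 2
    return wH, wD

wH, wD = fitnesses(V / C)
print(wH == wD)                 # True: equal fitness at the polymorphism

p = V / C + 0.1                 # small perturbation upward
for _ in range(500):
    wH, wD = fitnesses(p)
    p = p * wH / (p * wH + (1 - p) * wD)
print(abs(p - V / C) < 1e-6)    # True: the perturbation dies out
```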
3 The Hawk-Dove-Retaliator game
For a symmetric 2 × 2 game with payoff matrix

       A    B
A      a    b
B      c    d

with a < c and d < b, a mixed ESS is: adopt A with probability q = (b − d)/(b + c − a − d).
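This formula can be written as a one-line function, with the Hawk-Dove game as a sanity check (entries a = π(A,A), b = π(A,B), c = π(B,A), d = π(B,B); the V = 2, C = 4 values are illustrative):

```python
# The general 2x2 mixed-ESS formula, with a = pi(A,A), b = pi(A,B),
# c = pi(B,A), d = pi(B,B), assuming a < c and d < b.

def mixed_ess_q(a, b, c, d):
    """Probability of playing A at the mixed ESS."""
    return (b - d) / (b + c - a - d)

# Hawk-Dove sanity check (V=2, C=4): a=-1, b=2, c=0, d=1 gives q = V/C.
q = mixed_ess_q(-1.0, 2.0, 0.0, 1.0)
print(q)                                                    # 0.5
# Indifference: A and B earn the same payoff against the mix.
print(q * -1.0 + (1 - q) * 2.0 == q * 0.0 + (1 - q) * 1.0)  # True
```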
Take the classic Hawk-Dove game and add a third strategy Retaliator (R). R acts like a dove
around other doves, but will act like a Hawk if provoked. A payoff matrix:
            H             D             R
H      (V − C)/2          V        (V − C)/2
D          0             V/2        V/2 − a
R      (V − C)/2       V/2 + a        V/2
Now let’s pick some values: V = 2, C = 4, a = 0.25
       H      D      R
H     −1      2     −1
D      0      1     3/4
R     −1     5/4     1
Although we never defined ESS for 3 strategies, clearly R is an ESS. If R is excluded you
can check that I = (1/2)H + (1/2)D is an ESS. How would a population of Hawks, Doves and
Retaliators change over time? Use the replicator equations!
X′H = XH (wH − w̄)
    = XH ((XH (−1) + XD (2) + XR (−1)) − (XH (−XH + 2XD − XR )
      + XD (XD + 0.75XR ) + XR (−XH + 1.25XD + XR )))
X′D = XD (wD − w̄) = . . .
X′R = XR (wR − w̄) = . . .
We can find interior equilibria by setting the right-hand sides of the equations to 0 and
solving the resulting system:

(XH , XD , XR ) = (1/5, 32/55, 12/55)
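The interior equilibrium can be checked by plugging it into the replicator right-hand sides; the payoff entries below are the numeric H/D/R matrix from the notes (V = 2, C = 4, a = 0.25):

```python
# Plug the claimed equilibrium into the replicator right-hand sides.
A = [[-1.0, 2.0, -1.0],     # payoffs to H against (H, D, R)
     [ 0.0, 1.0, 0.75],     # payoffs to D
     [-1.0, 1.25, 1.0]]     # payoffs to R

def replicator_rhs(x):
    """X_i' = X_i (w_i - wbar) for the three strategies."""
    w = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    wbar = sum(x[i] * w[i] for i in range(3))
    return [x[i] * (w[i] - wbar) for i in range(3)]

x = [1 / 5, 32 / 55, 12 / 55]   # (X_H, X_D, X_R)
print(all(abs(v) < 1e-9 for v in replicator_rhs(x)))  # True: all rates vanish
```

At this point all three strategies earn the same fitness, so no frequency changes.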