Experimental Economics Assignment 1 Answer Key
Question 1: Remember the following 4 precepts:
Non-satiation (monotonicity): Given a costless choice between two alternatives which are
identical except that the first yields more of a reward medium (for example, TL’s) than the
second, the first will always be chosen (i.e. preferred) over the second, by an autonomous
individual.
Saliency: Individuals are guaranteed the right to claim a reward which is increasing
(decreasing) in the goods (bads) outcomes of an experiment; individual property rights in
messages and how messages are to be translated into outcomes are defined by the institution of
the experiment.
Dominance: The reward structure dominates any subjective costs (or values) associated with
participation in the activities of an experiment.
Privacy: A qualification to the non-satiation precept that carries a potential for losing
control over preferences is that individuals may not be autonomous own-reward maximizers
(they may care about others' payoffs as well as their own). If we want to minimize such
effects, it is good practice to give each subject in the experiment information only on his/her
own payoff alternatives.
Now, let’s look at the question: This is a question in which one makes a $1 bet on one of the
teams. The way you answered this question varied widely with respect to what you wanted to
“test”. This is fine. I give points to any logical and consistent argument!
Student 1: This suggestion was meant for you to think about flat-payoff/dominance issues. If
you calculate the expected returns for each set of decimal odds you get:
Expected Return_A (3, 6.1) = (2/3)(3) = 2
Expected Return_B (3, 6.1) = (1/3)(6.1) ≈ 2.03
Expected Return_A (4, 5) = (2/3)(4) ≈ 2.67
Expected Return_B (4, 5) = (1/3)(5) ≈ 1.67
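The arithmetic above can be reproduced with a short script (a minimal sketch; the win probabilities 2/3 and 1/3 and the two odds pairs are the ones given in the question):

```python
# Expected return of a $1 bet under decimal odds: odds * stake if the bet wins.
# P(team A wins) = 2/3, P(team B wins) = 1/3, as in the answer key.
p_a, p_b = 2 / 3, 1 / 3

for odds_a, odds_b in [(3, 6.1), (4, 5)]:
    er_a = p_a * odds_a  # expected return of betting on A
    er_b = p_b * odds_b  # expected return of betting on B
    # The (3, 6.1) pair gives nearly flat payoffs; (4, 5) creates a wedge.
    print(f"odds ({odds_a}, {odds_b}): ER_A = {er_a:.2f}, ER_B = {er_b:.2f}")
```

Note how close the two returns are under (3, 6.1), which is exactly the flat-payoff concern discussed below.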
Suppose that the subject is risk-neutral. It might be better to use the (4,5) because this way we
can minimize the effect of errors, and the greater wedge between the two decisions can make
monetary payoffs “dominant”. If you are interested, however, in the strength of emotions in
affecting bets, then you may want to make this a “treatment” variable: that is, vary the difference
in expected returns, and see how betting changes, and whether this correlates with being a fan
etc. Another potential issue is risk-aversion, since that will also affect which bet is optimal for
each individual and is unobserved (in the context of this experiment). If we think that subjects
are risk-averse, it is again going to be better to use (4,5) because a rational risk-averse subject
would pick the same thing as a risk-neutral in that case. But it would be better, of course, to get
more information on risk-aversion and rationality by varying the payoffs and bets.
Student 5: This suggestion is a good one, because otherwise the payoffs are completely
hypothetical (there is only a flat show-up fee), and hypothetical payoffs are not salient (see
above). Economists think that if you do not associate outcomes with rewards, you lose control
and cannot properly induce preferences. Rewarding one person randomly is in this sense better
than rewarding none. Of course, it would be even better if we could reward all participants (this
would improve “dominance”).
What Students 2, 3 and 4 suggested was related to a discussion of maintaining control (internal
validity) versus the external validity or realism of the experiment. You should realize that
there is probably not a single “correct” answer in these cases. What I look for in this type of
question is a discussion of the relevant methodological issues.
The problem with being a fan of the team is that fans can get extra psychological utility from
betting for their team, which we don’t observe. So, fandom is basically the unobserved factor
that possibly affects decisions. Now, what we do can depend on what we are interested in. Do we
want money to be “all they care about”? If I am only interested in observing whether people can
make accurate expected payoff comparisons and I don’t want psychological stuff tainting my
experiment, I could:
-Use neutral language: team A and team B rather than FB and GS. This can be a better
suggestion than excluding FB, GS fans or doing a survey that collects team information, because
there are some problems that doing a survey (either at the start or at the end) could bring:
If you do it at the start, you might create an “experimenter demand effect” (e.g. obviously they
are testing how FB fans bet, and I should pick my team regardless), and/or people may want to
behave consistently with what they said in the survey (e.g. if I said I am a big FB fan, it will look
really “cheap” if I bet on GS). If you do it at the end, it at least doesn’t affect behavior, but then
you are not sure that what they say in the survey is not related to what they bet on.
Another issue, however, is “external validity”. One might argue that making the teams A and B
will not give much information about how individuals bet in reality, because in reality, soccer
bets will depend on which team you support (for example, I could even “hedge” my bets: if I am
a big FB fan, I might choose to bet on GS just because if FB wins, I am happy anyway, and if GS
wins, I at least earned some money).
Question 2:
This is again related to achieving control in the experiment. It makes sense to think that if the
decision task in the experiment is difficult (i.e. requires significant amounts of thinking or
cognitive resources), monetary incentives should be strong enough (dominant) to induce people
to think about the decision and not make random choices. In this sense, we would expect d and k
to vary positively, that is, if you design a more difficult experiment, you should scale up the
payoffs. Notice that this is also related to the idea of dominance: monetary payoffs should be
“dominant” if we want subjects’ decisions to be driven by the payoff structure we give them! A
very simple model for this could be the following:
Suppose that the subject is already in the experiment, has choice A and choice B, and one of
them is the correct choice. Assume that the incorrect choice pays zero, and the correct choice
pays xk, where k is a scale factor of monetary payoffs. Suppose the subject makes a random
decision if the expected utility from choosing randomly is higher than the utility from thinking
and picking the correct choice. Thinking entails a cost of cognitive effort, C(d), which is
increasing in the difficulty d. Suppose the subject is risk-neutral.
The EU from the random decision is: 0.5(xk)
The utility from the correct decision is: xk-C(d)
The subject will exert effort if:
xk − C(d) ≥ 0.5xk, that is, 0.5xk ≥ C(d), or k ≥ 2C(d)/x. The threshold scale factor
k* = 2C(d)/x will vary positively with d.
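The threshold condition can be illustrated numerically. This is a sketch only; the cost function C(d) = d^2 and the payoff x = 10 are hypothetical choices, not part of the assignment:

```python
# Effort condition from the model above: 0.5*x*k >= C(d), i.e. k* = 2*C(d)/x.
def k_star(d, x=10.0):
    """Smallest payoff scale k at which thinking beats guessing."""
    cost = d ** 2  # hypothetical increasing cost of difficulty
    return 2 * cost / x

# The threshold scale factor rises with difficulty d:
thresholds = [k_star(d) for d in (1, 2, 3)]
print(thresholds)
```

The monotone increase in `thresholds` is the "scale up payoffs with difficulty" point made above.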
Question 3: a) You need to construct a table that gives the expected utilities from the safe and
risky lotteries, as the probability of winning the high prize changes. Let p=Prob(high prize).
For a risk-neutral agent, utility over money is given by U(w)=w. Therefore, expected utilities are
given by:
EU(safe)=20p+16(1-p)
and
EU(risky)=38p+(1-p)
It is possible to show that this person would switch to the risky lottery at decision 5: for
p=0.4, EU(safe)=17.6 > EU(risky)=15.8, whereas for p=0.5, EU(risky)=19.5 > EU(safe)=18.
b) An agent who has U(w)=Sqrt(w) is risk-averse, since this utility function is concave. The
second derivative of U with respect to w is (−1/4)w^(−3/2) < 0. This means that the MU from
money is diminishing. Recall also that for a risk-averse agent, U(EV of gamble)>EU(gamble).
For this guy, EU(safe)=p(Sqrt(20))+(1-p)Sqrt(16) and EU(risky)=p(Sqrt(38))+(1-p)Sqrt(1)
Calculating the safe and risky EU’s for each decision, it is possible to see that the guy will switch
to the risky decision at decision 7.
The “certainty equivalent” of a gamble is the minimum amount of “sure” money you would
accept in the place of the gamble. So, the CE should give as much utility as the gamble. The EU
from the risky gamble with p=0.5 is 3.582. Therefore,
U(w_CE)=Sqrt(w_CE)=3.582 => w_CE=(3.582)^2=12.83.
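The switching points and the certainty equivalent above can be verified with a short script (a sketch assuming the standard Holt–Laury grid p = 0.1, 0.2, ..., 1.0):

```python
import math

# Safe lottery: $20 with prob p, $16 otherwise; risky: $38 with prob p, $1 otherwise.
def switch_decision(u):
    """First decision (p = 0.1, ..., 1.0) where the risky lottery's EU is higher."""
    for i in range(1, 11):
        p = i / 10
        eu_safe = p * u(20) + (1 - p) * u(16)
        eu_risky = p * u(38) + (1 - p) * u(1)
        if eu_risky > eu_safe:
            return i
    return None

print(switch_decision(lambda w: w))  # risk-neutral agent: decision 5
print(switch_decision(math.sqrt))    # sqrt-utility agent: decision 7

# Certainty equivalent of the risky gamble at p = 0.5 for u = sqrt:
eu = 0.5 * math.sqrt(38) + 0.5 * math.sqrt(1)
ce = eu ** 2
print(round(ce, 2))  # about 12.83
```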
c) One reason why people may behave less risk-averse with hypothetical payoffs at large stakes
could be that there is some intrinsic utility associated with being “gutsy”, or unafraid. If I
actually have nothing to lose, I can just make my decision based on this, whereas if we had high
monetary stakes, the monetary part of my utility function would overcome this kind of
motivation and I would behave in a risk-averse way. Another possibility for the hypothetical-real
difference is that perhaps subjects just are unable to imagine what they would do if payoffs were
real, so they naively believe that they would do the same with real money too (which they
wouldn’t).
d) Many contexts. Especially in contexts where we would like to test the predictive power of an
economic theory, having risk preference information is useful. For example, if I want to test
whether subjects bid in the first-price auction according to economic theory, risk-aversion is an
important issue because risk-aversion will change what the theory predicts as well. For this, a
possibility is to use the Holt-Laury task at the end of an experiment, but an important issue is that
risk-aversion can be wealth-dependent and stake-dependent. In general, it is important to adjust
the payoffs of the H-L task so that it will reflect (in terms of size) the payoffs involved in the
actual decisions in an experiment. Another issue is how much a subject earned during the
experiment, which can affect how risk-loving she is in the H-L task. One way to get around that
is to withhold feedback about earnings until the experiment is over, but in many multi-round
experiments we want subjects to “learn” through payoffs, so this may be undesirable.
Question 4:
a) T>X>Y>S makes this a PD. X>T & Y>S would make this a coordination game.
b) Let p=Probability that row player plays W. Let q=Probability that column player plays
W.
Then, EU1(W) =2q, EU1(NW)=1-q, EU2(W)=2p, EU2(NW)=1-p.
For the MSNE, each player should be indifferent between their two actions.
Therefore, 2q=1-q =>q=1/3
2p=1-p=>p=1/3
(p,q)=(1/3, 1/3) is the MSNE.
EU1 = 2pq + p(1−q)·0 + (1−p)q·0 + (1−p)(1−q)·1 = 2/9 + 4/9 = 2/3
Likewise, EU2=2/3 through a similar calculation.
c) If the MSNE were being played, we’d expect the following frequencies for the 4
outcomes:
(W,W): pq=1/9
(W, NW): p(1-q)=2/9
(NW, W)=(1-p)q=2/9
(NW, NW)=(1-p)(1-q)=4/9
Since the experimental results are not consistent with this, we cannot say that we find
evidence for the MSNE.
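The indifference condition and the predicted frequencies can be checked exactly (a sketch using the payoffs implied by the expected-utility expressions in part b: W pays 2 only under (W, W), and NW pays 1 only under (NW, NW)):

```python
from fractions import Fraction

p = q = Fraction(1, 3)  # MSNE probabilities of playing W

# Indifference for the row player: EU(W) = 2q must equal EU(NW) = 1 - q.
assert 2 * q == 1 - q

# Predicted outcome frequencies under the MSNE:
freqs = {
    ("W", "W"): p * q,
    ("W", "NW"): p * (1 - q),
    ("NW", "W"): (1 - p) * q,
    ("NW", "NW"): (1 - p) * (1 - q),
}
print(freqs)  # 1/9, 2/9, 2/9, 4/9

# Equilibrium expected utility for each player:
print(2 * p * q + (1 - p) * (1 - q))  # 2/3
```

These are the frequencies the experimental data would have to match for the MSNE to find support.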
d) NE: {(W,W), (NW, NW)}
We’d expect less coordination on working in this game because NW is riskless now,
since it gives a payoff of 1 regardless of what the other guy does. So, a risk-averse player
might prefer to play NW, and the NW, NW equilibrium may be “risk-dominant”.
Question 5:
a) I asked for present value but should have said present discounted utility. With that, we
have: PDU_0=ln(100), PDU_1=βδ·ln(100), PDU_2=βδ^2·ln(100)
b) The consumer will consume at the point where the (discounted) MU of consumption
between the periods will be equal. That is,
MU0/MU1=(1/c0)/( δ/c1)=1. This gives c1=δ c0.
MU1/MU2=1 gives c2= δ c1
The other equation we can use is the budget constraint: c0+c1+c2=244.
We get: c0*=100, c1*=80, c2*=64.
c) Just compare the discounted utilities from the two consumption streams. You’ll find (of
course) that the optimal plan gives more U than the suboptimal.
d) Since the guy has already consumed 100 (it’s now period 1), he has $144 remaining. He
now has to decide on c1 and c2. Again setting MU’s equal, we have:
c2= δ c1.
Plugging into the budget constraint (c1+c2=144), we get c1*=80 and c2*=64.
f) Here, from the optimality conditions we are going to get c1=βδ·c0 and c2=βδ^2·c0=δ·c1
Plugging into the budget constraint of c0+c1+c2=244, we get c0*=130.9, c1*=62.8 and
c2*=50.27.
g) After the initial period is over, the guy has $113.1 remaining, so, using the condition
c2=βδc1, we have: c1*=76.42 and c2*=36.68.
h) From the perspective of time zero, the consumption plan of c0*=130.9, c1*=62.8 and
c2*=50.27 gives higher utility than (c0, c1, c2)=(130.9, 76.42, 36.68). Signing the
contract allows the consumer to have the better consumption stream.
i) A naïf person is someone who does not understand that their “future self” will want to
break the consumption plan made at period zero. A sophisticate understands that the
period 1 self will deviate from the plan, so, that type will use the commitment device to
bind the actions of the future self.
U(soph)=8.366 > U(naïve)=8.339
j) Many examples—see Dellavigna paper posted on class website.
k) You gave pretty interesting answers all around! One thing I tried while I was writing my
thesis was to block any fun websites using the “parental control” feature of my internet
service. Notice that my sophisticated time-0 self is the “parent” here! Of course, this
lasted until I realized that I was willing to spend 5 minutes unblocking them and looking
anyway, incurring a cost for breaking the commitment.
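The consumption plans and utilities in parts b), f), g) and i) can be reproduced numerically. This sketch assumes δ = 0.8 and β = 0.6, the parameter values consistent with the numbers in the answers above:

```python
import math

beta, delta, wealth = 0.6, 0.8, 244  # assumed parameters matching the answers above

# b) Exponential discounting: c1 = delta*c0, c2 = delta*c1.
c0 = wealth / (1 + delta + delta ** 2)
exp_plan = (c0, delta * c0, delta ** 2 * c0)  # (100, 80, 64)

# f) Quasi-hyperbolic time-0 plan: c1 = beta*delta*c0, c2 = beta*delta^2*c0.
c0 = wealth / (1 + beta * delta + beta * delta ** 2)
qh_plan = (c0, beta * delta * c0, beta * delta ** 2 * c0)  # about (130.9, 62.8, 50.27)

# g) The period-1 self re-optimizes the remaining money with c2 = beta*delta*c1.
rest = wealth - qh_plan[0]  # about 113.1
c1 = rest / (1 + beta * delta)
naive_plan = (qh_plan[0], c1, beta * delta * c1)  # about (130.9, 76.42, 36.68)

def u0(plan):
    """Discounted utility from the period-0 perspective."""
    c0, c1, c2 = plan
    return math.log(c0) + beta * delta * math.log(c1) + beta * delta ** 2 * math.log(c2)

# i) The committed (sophisticated) plan beats the naive one from time 0:
print(round(u0(qh_plan), 3), round(u0(naive_plan), 3))  # 8.366 8.339
```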
Question 6:
a) If alpha=0 (standard case), the price of a mug will be equal to the MRS between mugs
and money, which is the ratio of the MU of mug and MU of money. Since MRS=4, the
guy would be willing to pay a max. of $4 to get the mug and would want at least $4 to
give up the mug. Notice that with standard preferences, max. WTP= min. WTA.
b) Suppose alpha=1. Now the reference point is (rc, rw)=(1,0).
If he doesn’t sell the mug: U(1,0)=4+0+V(1-1)+V(0-0)=4
If he sells the mug at a price of ps: U(0,ps)=ps+V(0-1)+V(ps-0) =ps-3+ps=2ps-3
For the guy to sell, 2ps-3≥4 or ps ≥7/2
The guy’s min. WTA is therefore $3.5
c) When the guy doesn’t own the mug, we have (rc, rw)=(0,0).
If he doesn’t buy the mug, U(0,0)=0
If he buys the mug at a price of pb, U(1, -pb)=4-pb+V(1-0)+V(-pb-0)=5-4pb
Therefore, the guy will buy the mug if 5-4pb≥0, or pb≤5/4.
Max. WTP=5/4.
There is an endowment effect because WTA>WTP. That is, owning the mug changes
your valuation for it.
d) The chooser decides between “gaining” the mug vs. “gaining” pc dollars. His reference
point is (rc,rw)=(0,0).
If he chooses the mug: U(1,0)=4+V(1-0)=5
If he chooses the money: U(0, pc)=pc+V(pc-0)=2pc
Will choose the money if: 2pc≥5, or pc≥2.5 (this is the minimum choosing price)
e) From the viewpoint of standard theory, choosers and sellers are identical. That is, if you
ignore reference-dependence, the equations in b) and d) will be identical, and we’ll have
pc=ps.
f) With our model, we have ps>pc>pb. With the standard model, we’d have ps=pc=pb.
The reference-dependent model captures the qualitative findings of the experiment well.
That is, the rankings of the 3 prices are correct. The actual prices in the experiment differ
from the predictions of the model, but we could alter the parameters of the model to fit
the data better.
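The three thresholds in parts b)-d) can be recovered by brute force from the utility comparisons (a sketch; the gain-loss weights, gains counted once and losses three times, are the ones implied by the equations above):

```python
mug_value = 4  # intrinsic utility of the mug, as in the answer key

def v(x):
    """Gain-loss term implied by the equations above: gains x1, losses x3."""
    return x if x >= 0 else 3 * x

def seller_sells(ps):
    """Owner, reference (1, 0): selling beats keeping the mug."""
    keep = mug_value + v(0) + v(0)    # = 4
    sell = ps + v(0 - 1) + v(ps - 0)  # = 2*ps - 3
    return sell >= keep

def buyer_buys(pb):
    """Non-owner, reference (0, 0): buying beats not buying."""
    return mug_value - pb + v(1) + v(-pb) >= 0  # 5 - 4*pb >= 0

def chooser_takes_money(pc):
    """Chooser, reference (0, 0): money vs. mug, both framed as gains."""
    return pc + v(pc) >= mug_value + v(1)  # 2*pc >= 5

# Threshold prices found by a fine grid search over $0.00-$10.00:
grid = [round(0.01 * i, 2) for i in range(0, 1001)]
ps = min(p for p in grid if seller_sells(p))          # min WTA
pb = max(p for p in grid if buyer_buys(p))            # max WTP
pc = min(p for p in grid if chooser_takes_money(p))   # min choosing price
print(ps, pc, pb)  # 3.5 2.5 1.25 -> ps > pc > pb, the endowment-effect ranking
```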
g) The Becker–DeGroot–Marschak (BDM) procedure is what we can use to truthfully
elicit the minimum WTA. Sample instructions can be as follows:
As part of this experiment, you have each been given a mug, which is on your desks. The mug is
yours, but you can exchange it with money (essentially sell it back) if you would like. What you
will now be asked to do is to report the minimum price you would be willing to sell the mug at.
That is, you will report your threshold price, below which you are not willing to sell the mug,
and above which you are willing to sell the mug. The actual price will then be determined as
follows: The computer will randomly pick a number between 0 and 15. Call this p. If your
minimum price happens to be greater than p, you will keep the mug. If your minimum price
happens to be lower than p, you will sell the mug back at a price of p, that is, you will give the
mug back and get $p in return. Notice that the minimum price you state does not determine the
price at which you actually sell. The actual price will be determined randomly by the computer. The
minimum price you state will only determine whether a sale will happen at the random price p or
not.
To students: This mechanism is “incentive compatible” because subjects have no incentive to
misreport their valuation. For example, there is no incentive to state a higher WTA: if I was
actually willing to sell at $10 but reported that I would not sell below $12, and the random
price turned out to be $11, I would miss a trade that would have made me better off.
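The incentive-compatibility argument can be illustrated with a small simulation. This is a sketch: the seller's true value of $10 and the $12 overstatement follow the example above, the uniform price draw on [0, 15] follows the sample instructions, and the $8 understatement is an added illustration:

```python
import random

def expected_payoff(report, true_value=10.0, trials=100_000, seed=0):
    """Average payoff of a seller who reports `report` as the minimum price."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p = rng.uniform(0, 15)  # the computer's random price
        # Sell at the random price p if the reported minimum is below it;
        # otherwise keep the mug, worth true_value to the seller.
        total += p if report < p else true_value
    return total / trials

truthful = expected_payoff(10.0)
overstated = expected_payoff(12.0)   # misses profitable sales, e.g. at p = $11
understated = expected_payoff(8.0)   # sells at prices below the mug's value
print(truthful, overstated, understated)
```

With a common random seed the comparison is paired draw by draw, so truthful reporting comes out (weakly) ahead of both misreports.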