Prioritarianism and Climate Change
Matthew Adler, Duke University
LSE, MSU Workshop
June, 2015
Overview of Talk
• The social welfare function (SWF) framework
• The prioritarian SWF (presented, and contrasted with the utilitarian SWF)
• Prioritarianism under risk: "ex ante" versus "ex post" prioritarianism
• Possible implications for climate change: research questions?
• This talk is based upon joint work with Nicolas Treich, Adler/Treich (2015), although he should not be held responsible for everything I say here!
The SWF Framework
• An interpersonally comparable well-being function w(.) and a rule E whereby outcomes are ranked:
outcome x is at least as good as y (x ≽ y)
iff
(w1(x), w2(x), …, wN(x)) ≽E (w1(y), w2(y), …, wN(y))
• Actions (policy choices) are in turn ranked in light of the outcome ranking.
• SWFs originate in theoretical welfare economics; they are now used in optimal tax theory, public finance, environmental economics (including climate change), etc.
• Adler, Well-Being and Fair Distribution (2012), and Adler and Fleurbaey, Oxford Handbook of Well-Being and Public Policy (forthcoming), provide overviews.
The SWF as a tool for ethical decisionmaking
• I view the SWF as a tool for ethical (moral, “social”) decisionmaking.
Specifying an SWF means making ethical choices. Descriptive facts don’t
settle normative questions. (Hume et al: no “ought” from “is.”) Moreover,
since individuals in any modern, pluralistic society have diverse ethical
views, it's problematic (I think) to see society as having a single SWF, or to try
to “infer” what that SWF is. The electoral process determines whose
ethical views get to influence policy (for now, until the next election).
• Bergson: "A notable feature in welfare economics is the attempt to formulate a criterion of social welfare without recourse to controversial ethical premises… [T]his goal for the criterion is an illusion." Harsanyi: "This function Wi that individual i will use in evaluating social situations from a moral point of view will be called his social welfare function."
• That said, ethical thinking is not wholly unstructured. The axiomatic method of welfare economics/social choice theory is hugely valuable in sorting through the "space" of SWFs and well-being measures; contemporary moral philosophy is likewise valuable in bringing to light sophisticated arguments for different possible approaches.
Normative Choices in Elaborating the SWF Framework
• What is the well-being measure w(.)? Preferences, happiness, objective goods.
• What is the ethical rule E for ranking well-being vectors? Utilitarian, prioritarian, other.
• Application under risk (well-defined probabilities): Utilitarianism, ex ante prioritarianism, ex post prioritarianism, "transformed" versions.
• Deep uncertainty
• Variable populations. E.g., Parfit's "repugnant conclusion."
• Ethics versus self-interest. Sidgwick. To what extent is it reasonable for the current generation to give priority to its own interests, even though an ethical perspective requires impartiality vis-à-vis the future?
The Well-Being Measure
• Let c denote an individual's consumption of marketed goods; a her non-market attributes (e.g., health, environmental quality); and R her preferences over (c, a) bundles.
• Different forms for w(.):
– Objective good/"capability": w(c, a, R) = o(c, a)
– Happiness: w(c, a, R) = h(c, a, R)
– Preference-based
• Simple case: Individuals have the same preferences. Very often assumed by economists who use SWFs. w(c, a, R) = uR(c, a), with uR(.) a vNM utility function representing the common preferences R; or, in a more general formulation, F(uR(c, a)), with F(.) increasing.
• Heterogeneous preferences. Harsanyi's concept of "extended preferences": w(c, a, R) = s(R)uR(c, a) + t(R), with s(.) and t(.) scaling factors for the various vNM functions; or F(s(R)uR(c, a) + t(R)).
• Complex mixture of normative and descriptive. Although the choice between these views is normative, the application of a specific view will in turn depend on empirical facts. On the preference view, well-being depends upon individuals' preferences; and it's in turn an empirical question what those preferences are.
The Rule E
These formulas assume a fixed population: N(x) = N(y) = N for all outcomes.
• Utilitarian: x at least as good as y iff ∑wi(x) ≥ ∑wi(y)
• Prioritarian: x at least as good as y iff ∑g(wi(x)) ≥ ∑g(wi(y)), with g(.) strictly increasing and concave.
• Prioritarians give greater weight to well-being changes affecting worse-off individuals. Vibrant philosophical literature starting with Parfit (1991). The key axiomatic difference from utilitarianism is the Pigou-Dalton (PD) principle, which says, e.g., that (1, 3, 8, 10) is ethically preferred to (1, 2, 9, 10).
• Atkinson SWF: (1−γ)^−1 ∑wi^(1−γ), with γ > 0 an inequality-aversion parameter (becomes utilitarian at 0, and leximin at ∞). A numerical sketch of the two rules follows below.
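To make the contrast concrete, here is a minimal Python sketch (not from the talk; the function names and γ = 2 are illustrative assumptions) of the utilitarian and Atkinson prioritarian rules, applied to the Pigou-Dalton pair above:

```python
import numpy as np

def utilitarian(w):
    """Utilitarian SWF: the simple sum of well-being levels."""
    return np.sum(w)

def atkinson(w, gamma):
    """Atkinson (prioritarian) SWF: sum of concavely transformed well-being.
    Assumes all well-being levels are strictly positive."""
    w = np.asarray(w, dtype=float)
    if gamma == 1.0:                      # limiting case: g(w) = log w
        return np.sum(np.log(w))
    return np.sum(w ** (1 - gamma)) / (1 - gamma)

x, y = [1, 3, 8, 10], [1, 2, 9, 10]         # the Pigou-Dalton pair above
print(utilitarian(x) == utilitarian(y))     # True: utilitarian is indifferent
print(atkinson(x, 2.0) > atkinson(y, 2.0))  # True: prioritarian prefers x
```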
The Prioritarian SWF: Axioms for E
• Pareto superiority: (3, 4, 10, 13) ≻ (3, 4, 10, 12)
• Anonymity/Impartiality: (7, 12, 4, 60) ∼ (12, 60, 4, 7)
• Pigou-Dalton: (1, 3, 8, 10) ≻ (1, 2, 9, 10)
• Separability: (7, 100, 100, 7) ≽ (4, 100, 100, 12) iff (7, 7, 7, 7) ≽ (4, 7, 7, 12)
• Continuity: If (1, 3, 50000, 50000) ≻ (1, 3, 6, 8), then (1, 3±ε,
50000, 50000) ≻ (1, 3, 6, 8) for ε sufficiently small
• Ratio rescaling invariance: (10, 12, 17, 20) ≽ (10, 10, 20, 20) iff (50, 60, 85, 100) ≽ (50, 50, 100, 100)
Three Justificatory Schemes
• Different ways to choose among SWFs, consistent with the
“separateness of persons”
• The veil of ignorance. Harsanyi versus Rawls. The Harsanyi VOI (with a risk-neutrality assumption) yields utilitarianism. x ≽M y iff
the lottery (1/N, (x, 1); …; 1/N, (x, N)) is at least as good as (1/N, (y, 1); …; 1/N, (y, N)) iff
(1/N)w1(x) + … + (1/N)wN(x) ≥ (1/N)w1(y) + … + (1/N)wN(y)
• Temkin's complaints ("claims within outcomes"). In each outcome, worse-off individuals have complaints against better-off individuals (modulo resp.). The ranking of outcomes (in light of persons' interests) balances complaint minimization versus overall well-being.
• Adler's "claims across outcomes" (building on Nagel). As between two outcomes, each individual has a claim in favor of the outcome in which she is better off. The ranking of outcomes (in light of persons' interests) balances these claims.
Claims across Outcomes versus Complaints (Claims within Outcomes)
          x     y                      z     w
Amy       10    10         Amy         5     5
Betty     10    10         Betty       60    70
Mark Z    90    100        Mark Z      90    80
• Why believe that the balancing of overall well-being and complaint minimization necessarily yields the Pareto-superior outcome? By contrast, the claims-across-outcomes view directly justifies both the Pareto and Pigou-Dalton principles. Or, it helps us see the deep connection between these principles: why a Paretian welfarist should also care about equity in the sense of Pigou-Dalton.
Equality and Priority
• Do prioritarians care about equality? Extensionally, yes. A decomposition theorem: any SWF that respects Pareto, anonymity, Pigou-Dalton and continuity (with or without separability) can be represented as overall well-being discounted by the degree of inequality (see the sketch below).
x ≽M y iff ∑g(wi(x)) ≥ ∑g(wi(y)) iff
[1 − Ig(w1(x), …, wN(x))] ∑wi(x) ≥ [1 − Ig(w1(y), …, wN(y))] ∑wi(y)
• However, the justification offered here for the prioritarian SWF is based on individuals' claims across outcomes, not a concern for comparative well-being within outcomes. (Contrast the case in which Temkin-style claims, balanced against overall well-being, somehow yield a Paretian, Pigou-Dalton-respecting, and separable SWF!)
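A minimal sketch of the decomposition for the Atkinson case (Python; the inequality index Ig is computed via the equally-distributed equivalent, and γ = 2 is an assumed illustrative value):

```python
import numpy as np

def atkinson_swf(w, gamma=2.0):
    """Sum of transformed well-being, g(w) = w**(1-gamma)/(1-gamma)."""
    w = np.asarray(w, dtype=float)
    return np.sum(w ** (1 - gamma)) / (1 - gamma)

def inequality(w, gamma=2.0):
    """Atkinson inequality index I_g = 1 - EDE/mean, where the equally
    distributed equivalent (EDE) is g^{-1}(mean of g(w))."""
    w = np.asarray(w, dtype=float)
    ede = np.mean(w ** (1 - gamma)) ** (1 / (1 - gamma))
    return 1 - ede / np.mean(w)

def discounted_total(w, gamma=2.0):
    """Overall well-being discounted by inequality: [1 - I_g(w)] * sum(w)."""
    return (1 - inequality(w, gamma)) * np.sum(w)

# For fixed N, the two representations rank well-being vectors identically:
x, y = [1, 3, 8, 10], [1, 2, 9, 10]
print(atkinson_swf(x) > atkinson_swf(y))          # True
print(discounted_total(x) > discounted_total(y))  # True: same ranking
```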
Choosing a g(.) function
• The Atkinson family is attractive: g(w) = (1−γ)^−1 w^(1−γ)
• γ is an ethical parameter, capturing the "social planner's" own moral/ethical views. It is problematic to try to estimate it empirically. It is not to be conflated with the parameter λR of individual risk aversion with respect to consumption for CRRA utility:
uR(c) = (1 − λR)^−1 c^(1−λR)
• Leaky bucket question. Poor is at utility level W, while Rich is at level KW. If we reduce Rich's well-being by ∆w and increase Poor's by f∆w, 0 < f < 1, what is the smallest value of f such that this remains a moral improvement? For given γ, f = (1/K)^γ (see the sketch below).
• Equalization question. Is it an improvement to move from
(W, W*) to (W+, W+), where W + W* > 2W+?
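A few lines of Python make the leaky-bucket formula concrete (the K and γ values here are illustrative assumptions, not numbers from the talk):

```python
# For the Atkinson g, a transfer from Rich (well-being K*W) to Poor
# (well-being W) remains a moral improvement so long as the fraction f
# that reaches Poor satisfies f >= (1/K)**gamma.
for gamma in (0.5, 1.0, 2.0):
    for K in (2, 5, 10):
        f_min = (1 / K) ** gamma
        print(f"gamma={gamma}, Rich/Poor well-being ratio K={K}: "
              f"transfer is an improvement iff f >= {f_min:.3f}")
```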
A contrast with climate change scholarship
• Assuming a preference-based well-being measure, and the Atkinson g(.), the prioritarian formula for ranking outcomes (modulo zeroing-out) is:
E(x) = (1−γ)^−1 ∑i [s(Ri)uRi(ci(x), ai(x)) + t(Ri)]^(1−γ)
• By contrast, the utilitarian formula often used in climate change scholarship ignores non-market attributes and preference heterogeneity, has a pure time preference (well-being discount factor), and no inequality-aversion parameter:
E(x) = ∑i di u(ci), with di > dj if individual j exists later in time than i
• Even ignoring preference heterogeneity and non-market attributes, and with CRRA consumption utility, the two approaches are different:
E(x) = (1−γ)^−1 ∑i [u(ci)]^(1−γ) vs. E(x) = ∑i di u(ci), with u(ci) = (1−λ)^−1 ci^(1−λ)
• Time preference: it violates anonymity; it is an ad hoc way to handle extinction risk; and the counterintuitive features of zero discounting can be mitigated, by prioritarians, by increasing the inequality-aversion parameter.
Utilitarianism and Prioritarianism under Risk
• Utilitarianism: E(a) = ∑x πa(x) (∑i wi(x)) = ∑i ∑x πa(x) wi(x)
• "Ex post" prioritarianism (EPP): E(a) = ∑x πa(x) (∑i g(wi(x))) = ∑i ∑x πa(x) g(wi(x))
• "Ex ante" prioritarianism (EAP): E(a) = ∑i g(∑x πa(x) wi(x))
• Utilitarianism and ex post prioritarianism apply expected-utility theory at the level of ethical choice: maximizing expected ethical value (i.e., sum of utilities or transformed utilities). Ex ante prioritarianism does not.
• "Transformed" approaches: ∑x πa(x) H(∑i wi(x)) or ∑x πa(x) H(∑i g(wi(x))). Problematic with respect to separability.
Util. vs. EPP vs. EAP

         Policy a               Policy b
         x     y    Exp. wb     z     zz    Exp. wb
Jim      70    30   50          50    60    55
June     30    70   50          50    40    45

Note: The outcomes are equiprobable, i.e., πa(x) = πa(y) = 1/2, and πb(z) = πb(zz) = 1/2.
Utilitarianism is indifferent between the policies. Ex post prioritarianism prefers policy b. Ex ante prioritarianism prefers policy a.
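The three maximands can be checked directly on this example; a minimal Python sketch, with γ = 2 as an assumed illustrative curvature:

```python
import numpy as np

def g(w, gamma=2.0):
    """Atkinson transformation; gamma=2 is an assumed illustrative value."""
    return w ** (1 - gamma) / (1 - gamma)

# Each policy: a list of (probability, well-being vector) pairs (Jim, June).
policy_a = [(0.5, np.array([70.0, 30.0])), (0.5, np.array([30.0, 70.0]))]
policy_b = [(0.5, np.array([50.0, 50.0])), (0.5, np.array([60.0, 40.0]))]

def util(policy):
    """Expected sum of well-being."""
    return sum(p * w.sum() for p, w in policy)

def epp(policy):
    """Ex post prioritarianism: expected sum of transformed well-being."""
    return sum(p * g(w).sum() for p, w in policy)

def eap(policy):
    """Ex ante prioritarianism: transform each person's EXPECTED well-being."""
    exp_wb = sum(p * w for p, w in policy)
    return g(exp_wb).sum()

print(util(policy_a) == util(policy_b))  # True: utilitarianism indifferent
print(epp(policy_b) > epp(policy_a))     # True: EPP prefers policy b
print(eap(policy_a) > eap(policy_b))     # True: EAP prefers policy a
```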
Which is most attractive?
       Stochastic   Time          Sure     Pigou-   Ex Ante
       Dominance    Consistency   Thing    Dalton   Pareto
Util.  Yes          Yes           Yes      No       Yes
EPP    Yes          Yes           Yes      Yes      No
EAP    No           No            No       Yes      Yes
For those who find the axioms of stochastic dominance, sure thing, and time consistency to be normatively compelling (anyone attracted to expected utility theory!), EAP will seem highly problematic. The choice is, for them, between utilitarianism and EPP.
Illustrating the axioms
         Policy a               Policy b
         x     y    Exp. wb     z      zz     Exp. wb
Jim      10    90   50          50−ε   50−ε   50−ε
June     90    10   50          50−ε   50−ε   50−ε

Illustrates stochastic dominance and ex ante Pareto for the three approaches.
         Policy a               Policy b
         x     y    Exp. wb     z     y    Exp. wb
Jim      80    50   65          90    50   70
June     20    50   35          10    50   30

         Policy a*              Policy b*
         x     y*   Exp. wb     z     y*   Exp. wb
Jim      80    10   45          90    10   50
June     20    200  110         10    200  105

Illustrates the sure thing principle for the three approaches: a and b share the common outcome y, while a* and b* differ from them only by replacing y with y*; sure thing requires that a vs. b be ranked the same way as a* vs. b*.
Time Inconsistency
• A simple example. Assume 2 generations. The news about the effect of
climate change on the second generation can either be “good” or “bad.”
At an initial point, the decisionmaker assigns each equal probability. She
then learns the news and, if it is "bad," can mitigate the bad effects (at some cost
to the current generation). If "good," the well-being of the two
generations is (200, 300). With “bad” and no mitigation it’s (200, 100);
with “bad” and mitigation it’s (140, 140).
• Assume a prioritarian sufficiently inequality averse to prefer the outcome
(140, 140) to (200, 100). At the initial point, the expected well-being of
the generations, with a plan not to mitigate upon learning the news, is
(200, 200); with a plan to mitigate, it’s (170, 220). Thus EAP initially plans
not to mitigate – but then, if bad news is learned, prefers to mitigate.
• By contrast, Util consistently plans not to mitigate if bad news, and EPP consistently plans to mitigate if bad news.
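The reversal is easy to verify numerically; a minimal sketch (Python, with γ = 2 an assumed value that makes the planner sufficiently inequality averse in the sense above):

```python
def g(w, gamma=2.0):
    """Atkinson transformation; gamma=2 suffices to prefer (140, 140)
    to (200, 100), as the example assumes."""
    return w ** (1 - gamma) / (1 - gamma)

good, bad_no_mit, bad_mit = (200.0, 300.0), (200.0, 100.0), (140.0, 140.0)

def eap_plan_value(mitigate):
    """Ex ante value of a plan: transform each generation's EXPECTED well-being."""
    second = bad_mit if mitigate else bad_no_mit
    exp_gen1 = 0.5 * good[0] + 0.5 * second[0]   # (200, 200) vs. (170, 220)
    exp_gen2 = 0.5 * good[1] + 0.5 * second[1]
    return g(exp_gen1) + g(exp_gen2)

# Ex ante, EAP prefers the plan NOT to mitigate...
print(eap_plan_value(False) > eap_plan_value(True))   # True
# ...but after bad news arrives, the very same g prefers mitigation:
print(g(140.0) + g(140.0) > g(200.0) + g(100.0))      # True: time inconsistency
```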
EPP vs. Utilitarianism: Climate Policy
• The extent to which EPP (ex post prioritarianism) and
utilitarianism diverge substantially with respect to
climate policy, given the empirics of our climate
system, global economy and population, etc., is an
open question, on which research is now needed.
What follows are possible differences …
Climate Change: SCCU vs. SCCEPP
SCCU = [ ∑P πP ∑t=2 to Tmax NtP u′(ctP) ∂ctP/∂es ] / u′(c1)

SCCEPP = [ ∑P πP ∑t=2 to Tmax NtP g′(u(ctP)) u′(ctP) ∂ctP/∂es ] / [ g′(u(c1)) u′(c1) ]

Here c is per capita (normalized) consumption, with equality within each generation; P is an economic/climate path (with probability πP); es is emissions at time s; and current consumption c1 is the numeraire.
Note that SCCEPP can be greater or less than SCCU, depending on the relative likelihood of paths where future generations are worse off vs. better off compared to the present.
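A toy numerical sketch of these two SCC formulas (Python; the two paths, their probabilities, λ, γ, and the marginal consumption damage ∂c/∂e are all assumed illustrative numbers, not values from the talk):

```python
# Two equiprobable one-generation "paths" for the future, one poorer and one
# richer than the present. lam < 1 keeps CRRA utility positive, as the
# Atkinson g requires.
lam, gamma = 0.5, 2.0
c1 = 1.0                              # current consumption (numeraire)
paths = [(0.5, 0.5), (0.5, 3.0)]      # (probability, future consumption)
dc_de = -1.0                          # assumed marginal damage of emissions

def u(c):  return c ** (1 - lam) / (1 - lam)   # CRRA utility
def up(c): return c ** (-lam)                  # u'(c)
def gp(v): return v ** (-gamma)                # g'(v) for the Atkinson g

scc_u = -sum(p * up(c2) * dc_de for p, c2 in paths) / up(c1)
scc_epp = (-sum(p * gp(u(c2)) * up(c2) * dc_de for p, c2 in paths)
           / (gp(u(c1)) * up(c1)))

print(scc_u, scc_epp)  # here SCC_EPP > SCC_U: the poor path gets extra weight
```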
Optimal Abatement
• A toy model. 2 periods; resources in the first period are either consumed or invested at rate r; CRRA utility. With intragenerational equality, the utilitarian optimum is c2 = c1(1+r)^(1/λ). The prioritarian optimum is c2 = c1(1+r)^(1/[λ+γ(1−λ)]). Assuming λ < 1 (to avoid negative utilities), less is invested for the future and there is less inequality between the generations. As γ approaches infinity, the optimal allocation approaches equality between the 2 generations (see the sketch below).
• With π = the fraction held by the worse off in the second generation and (to simplify) linear utility, the utilitarian invests everything. The prioritarian optimum is c2 = (c1/2)(1+r)^(1/γ)[π^(1−γ) + (1−π)^(1−γ)]^(1/γ). Subtle interactions between π and γ, i.e., between future inequality and inequality aversion: the ratio c2/c1 is increasing in π for γ < 1, independent of π for γ = 1, and decreasing in π for γ > 1.
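The first toy model's closed forms can be checked by direct numerical optimization; a sketch (Python/SciPy; the endowment m, rate r, λ and γ are assumed illustrative numbers):

```python
from scipy.optimize import minimize_scalar

# Endowment m is split between c1 and investment, so c2 = (m - c1) * (1 + r).
m, r, lam, gamma = 10.0, 0.5, 0.5, 2.0           # assumed illustrative values

def u(c): return c ** (1 - lam) / (1 - lam)      # CRRA utility (lam < 1)
def g(v): return v ** (1 - gamma) / (1 - gamma)  # Atkinson transformation

def optimum(objective):
    """Maximize objective(c1, c2) over the feasible consumption split."""
    res = minimize_scalar(lambda c1: -objective(c1, (m - c1) * (1 + r)),
                          bounds=(1e-6, m - 1e-6), method="bounded")
    return res.x, (m - res.x) * (1 + r)

c1_u, c2_u = optimum(lambda c1, c2: u(c1) + u(c2))        # utilitarian
c1_p, c2_p = optimum(lambda c1, c2: g(u(c1)) + g(u(c2)))  # prioritarian

print(c2_u / c1_u, (1 + r) ** (1 / lam))                        # ~ equal
print(c2_p / c1_p, (1 + r) ** (1 / (lam + gamma * (1 - lam))))  # ~ equal, smaller
```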
Other Climate Change Implications
• "Catastrophe": Let WL be a specified "low" level of well-being. A policy will reduce the risk of generations falling below WL, at some cost to above-WL generations. EPP is willing to incur greater such costs than Utilitarianism.
• Decreasing Risk: A policy reduces the downside risk
for a given generation of being badly off, but also
their upside prospects, so that expected well-being
remains the same. EPP prefers the policy, while
Utilitarianism is indifferent.
Deep Uncertainty
• How to model this is a burgeoning area of research in economics and decision theory ("ambiguity"). Many approaches now use a set of probabilities, e.g., Gilboa/Schmeidler maximin EU, or Klibanoff et al. second-order risk aversion. It would seem that such approaches are equally applicable to Util. and EPP, simply changing the social maximand.
Klibanoff et al. evaluation, with E(x) either the sum of utilities (Util.) or the sum of transformed utilities (EPP):
∑P μ(P) H( ∑x πaP(x) E(x) )
with μ a second-order probability over the set of probability measures P, and H an increasing transformation capturing second-order risk aversion.
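A minimal sketch of this smooth-ambiguity maximand (Python; the rival probability measures, the second-order beliefs μ, and the form of H are assumed for illustration):

```python
import numpy as np

def g(w, gamma=2.0): return w ** (1 - gamma) / (1 - gamma)
def H(v, k=0.01): return -np.exp(-k * v)   # increasing and concave in v

outcomes = [np.array([70.0, 30.0]), np.array([50.0, 50.0])]
measures = [np.array([0.8, 0.2]), np.array([0.2, 0.8])]  # rival probabilities
mu = [0.5, 0.5]                                          # second-order beliefs

def smooth_ambiguity(E):
    """sum_P mu(P) * H( sum_x pi_P(x) * E(x) )."""
    return sum(m * H(sum(p * E(x) for p, x in zip(pi, outcomes)))
               for m, pi in zip(mu, measures))

E_util = lambda x: x.sum()     # utilitarian outcome value
E_epp = lambda x: g(x).sum()   # EPP outcome value: same maximand, new E(x)
print(smooth_ambiguity(E_util), smooth_ambiguity(E_epp))
```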
Variable population
• Average utilitarianism/prioritarianism: non-separable; violates the "negative expansion" principle.
• Total utilitarianism/prioritarianism: "repugnant conclusion."
• Critical-level utilitarianism/prioritarianism: with the critical level above the level of a life just worth living (i.e., equally good as nonexistence), this approach is separable, satisfies the "negative expansion" principle, and avoids the repugnant conclusion (see the sketch below).
E(x) = ∑i=1 to N(x) (wi(x) − w*) or E(x) = ∑i=1 to N(x) (g(wi(x)) − g(w*))
with w* the critical level of well-being (above a life just worth living)
• Generalizes straightforwardly, using the EPP or utilitarian formula, to choice under risk:
E(a) = ∑x πa(x) ∑i=1 to N(x) (wi(x) − w*) or
E(a) = ∑x πa(x) ∑i=1 to N(x) (g(wi(x)) − g(w*))
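A small numerical illustration of why the critical level blocks the repugnant conclusion (Python; the population sizes, well-being levels, and critical level w* are assumed numbers):

```python
# A small, very well-off population A vs. a huge population Z whose members'
# lives are barely worth living.
A = [100.0] * 10    # 10 people at well-being 100
Z = [1.0] * 2000    # 2000 people just above the zero of well-being
w_star = 5.0        # assumed critical level (above a life just worth living)

total = lambda pop: sum(pop)
critical_level = lambda pop: sum(w - w_star for w in pop)

print(total(Z) > total(A))                    # True: the repugnant conclusion
print(critical_level(Z) > critical_level(A))  # False: blocked by the critical level
```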
Scaling the Well-Being Function
• A well-being function w(c, a, R) unique up to a positive affine transformation (which provides intra- and interpersonal comparisons of levels and differences) is sufficient for utilitarianism, but not for prioritarianism. In particular, the Atkinson SWF requires a w(.) unique up to a ratio transformation (see the table and sketch below).
       Original          Renumbering 1       Renumbering 2       Renumbering 3
       x    y    Diff.   x     y     Diff.   x     y     Diff.   x     y     Diff.
Jim    10   11   1       200   220   20      2     12    10      30    33    3
Sue    30   25   −5      60    50    −10     202   152   −50     90    75    −15
Sum    40   36           260   270           204   164           120   108
Renumbering 1 is an individual-specific ratio transformation (preserves only intrapersonal info). Renumbering 2 is a common affine transformation (preserves intra- and interpersonal levels and differences). Renumbering 3 is a common ratio transformation. The utilitarian SWF is invariant to Renumberings 2 and 3; the Atkinson SWF only to 3.
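These invariance claims can be checked mechanically; a Python sketch using the table's own numbers (the Atkinson curvature γ = 0.5 is an assumed illustrative value):

```python
import numpy as np

def util(w): return np.sum(w)
def atkinson(w, gamma=0.5):
    return np.sum(np.asarray(w, float) ** (1 - gamma)) / (1 - gamma)

x = np.array([10.0, 30.0])   # (Jim, Sue) in outcome x
y = np.array([11.0, 25.0])   # (Jim, Sue) in outcome y

def ranking_preserved(f, tx, ty):
    """Does f rank the transformed vectors as it ranks the originals?"""
    return (f(x) > f(y)) == (f(tx) > f(ty))

r1 = lambda w: np.array([20.0, 2.0]) * w   # individual-specific ratio
r2 = lambda w: 10 * w - 98                 # common affine
r3 = lambda w: 3 * w                       # common ratio

print(ranking_preserved(util, r1(x), r1(y)))      # False: 260 < 270 flips 40 > 36
print(ranking_preserved(util, r2(x), r2(y)))      # True
print(ranking_preserved(util, r3(x), r3(y)))      # True
print(ranking_preserved(atkinson, r2(x), r2(y)))  # False: affine flips Atkinson
print(ranking_preserved(atkinson, r3(x), r3(y)))  # True
```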
Scaling the Well-Being Function
• We construct w(.) by identifying a "zero" history (czero, azero, Rzero):
w(c, a, R) = [s(R)uR(c, a) + t(R)] − [s(Rzero)uRzero(czero, azero) + t(Rzero)]
• By construction, w(czero, azero, Rzero) = 0.
• With the Atkinson SWF, the zero history (czero, azero, Rzero) is both the "horizon of ethical assessment" and the point of absolute ethical priority. The Atkinson SWF is "badly behaved" if anyone has a negative w value in any outcome. Moreover, for any "positive" history such that w(c, a, R) > 0, the ratio between the ethical impact of adding an increment of well-being ∆w to the zero history, and the ethical impact of adding ∆w to the positive history, becomes infinite.
• (czero, azero, Rzero) should not be conflated with the level of a life worth living (cworth, aworth, Rworth), nor with the critical-level history (ccrit, acrit, Rcrit).
• Relation to Weitzman’s “dismal theorem”
Significance of choice of zero history
Zero bundle at (csub, hdeath): w(c, h) = log(c)m(h) − log(csub)m(hdeath)
• Marginal moral impact of $1 for a person with $100,000 and health h:
w(100,000, h)^−γ m(h)(1/100,000) = (11.51m(h) − log(csub)m(hdeath))^−γ × m(h)(1/100,000)
• Marginal moral impact of $1 for a person with $20,000 and health h:
w(20,000, h)^−γ m(h)(1/20,000) = (4.30m(h) − log(csub)m(hdeath))^−γ × m(h)(1/20,000)
• Comparative marginal moral impact of $1 (person at $100,000 relative to person at $20,000):
(1/5) × [ (4.30m(h) − log(csub)m(hdeath)) / (11.51m(h) − log(csub)m(hdeath)) ]^γ

Zero bundle at (csub, hworst): w(c, h) = log(c)m(h) − log(csub)m(hworst)
• Marginal moral impact of $1 for a person with $100,000 and health h:
w(100,000, h)^−γ m(h)(1/100,000) = (11.51m(h) − log(csub)m(hworst))^−γ × m(h)(1/100,000)
• Marginal moral impact of $1 for a person with $20,000 and health h:
w(20,000, h)^−γ m(h)(1/20,000) = (4.30m(h) − log(csub)m(hworst))^−γ × m(h)(1/20,000)
• Comparative marginal moral impact of $1 (person at $100,000 relative to person at $20,000):
(1/5) × [ (4.30m(h) − log(csub)m(hworst)) / (11.51m(h) − log(csub)m(hworst)) ]^γ
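A sketch of how the comparative ratio depends on the zero-history choice (Python; csub, the health multipliers m(.), and γ are assumed numbers, and the logarithms are computed directly rather than taken from the slide):

```python
import math

gamma = 1.5                              # assumed inequality-aversion parameter
c_sub = 1000.0                           # assumed subsistence consumption level
m_h, m_death, m_worst = 1.0, 0.0, 0.3    # assumed health multipliers

def w(c, m_zero):
    """w(c, h) = log(c) m(h) - log(c_sub) m(h_zero), for a fixed health h."""
    return math.log(c) * m_h - math.log(c_sub) * m_zero

def comparative(m_zero):
    """(1/5) * [w(20,000, h) / w(100,000, h)]**gamma."""
    return 0.2 * (w(20_000, m_zero) / w(100_000, m_zero)) ** gamma

print(comparative(m_death))   # zero bundle at (c_sub, h_death)
print(comparative(m_worst))   # zero bundle at (c_sub, h_worst): a different ratio
```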