SOCIAL OUTCOME-BASED CONTRACTS AND PERFORMANCE MEASUREMENT
Thomaz Teodorovicz1
RESEARCH PROJECT
The usefulness and limitations of incentive contracts, which tie financial remuneration to the achievement of pre-established targets within and between organizations, are a long-studied topic in economics and strategic management (Gibbons, 1998; Gibbons & Roberts, 2012; Kerr, 1975; Prendergast, 1999, 2011). Alongside this literature, recent interest in the possibility of measuring and rewarding agents for the achievement of, and improvement in, social indicators has provided an opportunity to bridge the gap between incentive contracts and public-oriented activities (Dixit, 2002). The rising demand to assess and attach payments to social indicators and performance has roots in trends arising in both the public and private sectors. On the one hand, as governments have
paid closer attention to transparency and efficiency in public expenditure, especially amidst times of
extensive budget constraint, their search for improving social wellbeing implied a greater concern
over efficient resource allocation (Grittner, 2013; Hood, 1991). Thus, the assessment of social
indicators allowed policymakers to evaluate the relative success (or ex ante merit) of alternative
projects. On the other hand, a distinct new class of socially responsible investors and operators (Barnett & Salomon, 2006), who pick investment projects based both on financial returns and expected social impact (Bugg-Levine, Kogut, & Kulatilaka, 2012; Lazzarini et al., 2014), has further supported the movement towards rewarding (verifiable) social impact. These private agents increasingly adopt practices that simultaneously address social and profitability purposes, such as the pursuit of shared value, corporate social responsibility, and action targeting the base of the pyramid (Margolis & Walsh, 2003), and their emergence has boosted interest in financing projects that combine economic and social returns. Moreover, the rising perception of the interdependence of the private and public sectors (Klein, Mahoney, McGahan, & Pitelis, 2010) has implied a call for novel organizational arrangements to create public value (Baum & McGahan, 2009; Cabral, Lazzarini, & Azevedo, 2013; Lazzarini, Cabral, Ferreira, Pongeluppe, & Rotondaro, 2014), potentially at the intersection between incentive contracts and social performance.
Defining precisely what 'social impact' means, both technically and, even more so, contractually, poses a host of intricacies for the design of payment systems based on social indicators. Measuring 'quality-oriented' constructs (e.g. educational attainment, health improvement, and expected life quality) attached to the idea of 'social value' is often rife with biases, risk, and misspecification, not only introducing uncertainty for both parties in a contractual relation, but also rendering contracts incomplete. Even if one defines quality constructs by means of particular social indicators, public value remains non-contractible in arrangements addressing public-oriented services.
As a result, the established theory on the boundaries of government has highlighted a plethora of non-negligible risks in a world in which contracts are inherently incomplete (Hart, Shleifer, & Vishny, 1997; Dixit, 2002; Levin & Tadelis, 2010). Namely, a profit-maximizing contractor responsible for providing complex public services has an incentive to shirk on effort associated with quality provision whilst overinvesting in cost-reduction initiatives, thus compromising overall quality and implying a cost-quality trade-off (Hart et al., 1997), or what Williamson (1999) called 'probity risks'. Indeed, these models have even suggested that the participation of private agents in such public-oriented services could harm social welfare and would thus be, ultimately, undesirable.
Nonetheless, if theoretical results have rested on the assumption of contract incompleteness due to the impossibility of contracting upon a public service's quality, a novel and mostly unexplored organizational arrangement, the social outcome-based contract, tries to overturn this assumption. 'Social outcome-based contract' is a broad label for contractual arrangements between a public-oriented entity and a private partner in which the latter becomes responsible for implementing and/or financing a project in the public interest and receives a repayment/return conditioned on an ex post (quantitative) verification of 'successful' social impact. Over the 2000s,
1 PhD Student in Business Economics at the Insper Institute of Education and Research. Advisors: Profs. Sérgio Lazzarini and Sandro Cabral.
these contracts have assumed several forms. Development and Social Impact Bonds (DIBs and SIBs), created in the United Kingdom in 2010, are contracts aiming at attracting private resources to social projects by offering investors remuneration for doing so, but conditioning payment upon the existence of proven social impact (Gustafsson-Wright, Gardiner, & Putcha, 2015). Environmental Impact Bonds (EIBs) follow a similar vein, but focus on environmental targets. Pay-for-Success
Contracts (also called results-based financing) are programs where the principal (investors,
government, or others) sets financial or other incentives for an agent (social service provider) to
deliver predefined outputs or outcomes and rewards the achievement of these results upon verification
(Grittner 2013).
The first pilot of a SIB, implemented in the UK, supported an intervention to reduce the
recidivism rate of a prison with approximately 3,000 inmates. Since then, developed and developing
countries (e.g. the UK, the US, Mexico, India, Pakistan) have employed similar social outcome-based
contracts to address a range of sensitive social issues as educational achievements, homelessness,
workforce development, and others (Gustafsson-Wright, Gardiner, & Putcha, 2015; Lazzarini et al.,
2017). This arrangement's main cohesive characteristic is the inclusion of contractual clauses mapping the achievement of pre-established social impact to payments, thus attempting to partially overcome the contractual incompleteness assumed in theoretical models. As a result, social outcome-based contracts rest on well-defined, objective performance measures guiding final payment to either investors or service providers and defining the so-called 'impact' that rewards investors and operators.
Attaching a social indicator to payment, however, raises a series of questions: how to select such a metric, whether indicators provide investors and operators with the right incentives, and even whether the contract indeed rewards 'social impact'. The selection of social indicators can prove controversial. This is well exemplified by the Social Impact Bond enacted in 2013 by the state of Utah (USA) with the intent of helping 109 kindergartners avoid special education. Education specialists criticized the agreed performance metric, the number of assisted children who did not need special education, claiming it rested on the wrong assumption that most assisted kids would have needed special education without the project, even though there was little evidence or previous research indicating this was the case.2 Therefore, a first question is: how should performance metrics be
designed in social outcome-based contracts? The main objective of this essay is to tackle this
question and define optimal conditions to select amongst several competing social performance
metrics. However, to achieve this goal, we first have to define what ‘social impact’ means.
According to Brest and Born (2013), an impact investment would focus on ‘additional’ social
impact, the so-called additionality principle. Its core concept is the production of beneficial social
outcomes that would not occur but for the investment in a social program/enterprise. Indeed, this
concept rests on the statistical definition of a ‘counterfactual’, i.e. what would have happened to the
targeted population of a social outcome-based contract in the absence of the provided social
project/service. The ‘additional’ social impact would be the difference between an observable social
indicator and a ‘counterfactual’ estimate of the indicator. Indeed, the definition of a precise
‘counterfactual’ has been a topic extensively explored by statisticians and economists with the intent
to uncover causal relations from a treatment/project into a target population (Athey & Imbens, 2016;
Heckman, 2008; Holland, 1986; Imbens & Angrist, 1994; Imbens & Wooldridge, 2009; Rubin, 1974,
1977). For that purpose, several estimators have been proposed to filter out random shocks, effects of unobservable variables, and potential bias in estimates of treatment effects. Randomly selecting which individuals from a target population receive the social program, a technique referred to as a randomized controlled trial (RCT), is taken as the 'gold standard' for asserting causal estimates (Duflo, Glennerster, & Kremer, 2007; Glennerster & Takavarasha, 2013; What Works Clearinghouse, 2014). Advances in this area provided a 'technological shock' in how to measure social impact, something potentially unforeseen by early models on the decision to contract out or internally provide public-oriented services (Hart et al., 1997).
2 https://www.nytimes.com/2015/11/04/business/dealbook/did-goldman-make-the-grade.html
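To make the additionality principle concrete, a minimal sketch follows; the function name and all figures are hypothetical illustrations of our own, not taken from any actual contract.

```python
# Minimal illustration of the additionality principle: the 'additional'
# social impact is the difference between an observed social indicator and
# a counterfactual estimate of that indicator absent the program.
# All numbers below are hypothetical, for illustration only.

def additional_impact(observed: float, counterfactual: float) -> float:
    """Additional impact = observed indicator - counterfactual estimate."""
    return observed - counterfactual

# Hypothetical recidivism rates for a treated prison population:
observed_rate = 0.32        # measured after the intervention
counterfactual_rate = 0.40  # estimated rate had the program not existed

impact = additional_impact(observed_rate, counterfactual_rate)
# A negative value means recidivism fell relative to the counterfactual.
```

The entire measurement problem discussed in this proposal lies in how `counterfactual_rate` is obtained: the observed indicator is a matter of data collection, but the counterfactual must be estimated.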
Nonetheless, one cannot assume that the existence of statistical methods to establish
counterfactuals and derive causal ‘social impact’ from a project implies these are the best social
performance measures for social outcome-based contracts. Indeed, Lazzarini et al. (2017) collected information on the performance metrics used in 71 social outcome-based contracts, classifying them into one of four measurement tiers according to their capacity to measure unbiased 'social impact'. Given a targeted, usually vulnerable, population, the measurement tiers are as follows (in increasing order of statistical robustness):
0. Comparison of ex post social indicator concerning the targeted population to historical
information of the same population;
1. Comparison of ex post social indicator to the same indicator measured at an aggregate
level (national or regional, for instance);
2. Use of matching tools to compare the evolution of a social indicator on the targeted
population and another similar, but untreated, population; and
3. Use of randomized controlled trials (RCTs) to select treatment and control groups, finally comparing the evolution of the pre-defined social indicator.
Note that all tiers associate impact with an additionality measure, i.e. a comparison of an ex post indicator collected at the target-population level with another indicator representing a common benchmark. What changes amongst these tiers is what the benchmark is. Tier-0 measures use the target population in the past as a benchmark. Tier-1 measures use aggregate measures at the regional or national level as a benchmark, thus filtering out any regional-level common trends affecting all individuals. Tier-2 measures employ statistical methods to select a benchmark (control group) with observable characteristics similar to those of the treatment group, assuming any difference between treatment and control
groups after the program’s implementation is due to the program. Finally, Tier-3 measures define the
benchmark through randomization. This procedure presents the most reliable statistical method to
assure that treated individuals have the same features (in expectation) as the benchmark (untreated
individuals randomized out of the project).
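The four benchmark choices can be sketched in a few lines of code; the data below are toy numbers of our own, used purely to show how each tier computes its 'additional' impact against a different benchmark.

```python
# Toy sketch of the four measurement tiers: each computes 'impact' as the
# difference between the treated population's ex post indicator and a
# tier-specific benchmark. All figures are hypothetical.
from statistics import mean

treated_post = [0.55, 0.60, 0.58, 0.62]   # ex post indicator, treated group

# Tier 0: benchmark = the same population's historical indicator.
treated_pre = [0.45, 0.50, 0.48, 0.49]
tier0 = mean(treated_post) - mean(treated_pre)

# Tier 1: benchmark = an aggregate (e.g. regional) indicator.
regional_level = 0.52
tier1 = mean(treated_post) - regional_level

# Tier 2: benchmark = a matched, untreated group with similar observables.
matched_control = [0.50, 0.53, 0.51, 0.52]
tier2 = mean(treated_post) - mean(matched_control)

# Tier 3: benchmark = a control group randomized out of the program (RCT).
rct_control = [0.49, 0.52, 0.50, 0.51]
tier3 = mean(treated_post) - mean(rct_control)
```

The arithmetic is identical across tiers; what distinguishes them is how credibly each benchmark approximates the counterfactual, which is why the tiers are ordered by statistical robustness rather than by computational complexity.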
Figure 1 presents a conundrum: although RCTs and matched samples are the two most robust techniques to identify performance gains, these two methods correspond to only 23% of all indicators basing payments to investors in social outcome-based contracts. All remaining cases rely on simpler performance measures that are potentially biased and subject to external shocks. This pattern of performance indicators seems counterintuitive, especially under the assumption that social investors care not only about financial returns, but also about achieving social impact. Indeed, such findings motivate the questions: Should one rely on refined econometric measures to reward socio-environmental impact? Under which conditions should one prefer simpler, but potentially inaccurate, performance metrics when designing social outcome-based contracts? Have contracting parties adopted such 'new technologies' when enacting these contracts? If not, why?
The problem of selecting socio-environmental performance metrics to support social outcome-based contracts motivates a closer look at the relative benefits of alternative metrics and at the possibility of bridging the literatures on the econometrics of program evaluation and on incentive contracts, the main empirical and theoretical challenge this essay addresses. More specifically, this project has a dual objective: to explore how social outcome-based contracts have been designed with respect to their performance metrics and to propose a theoretical model of the optimal performance measure for social outcome-based contracts. With these goals in mind, we propose to use a unique and comprehensive dataset constructed by Lazzarini et al. (2017) containing information on over 130 social outcome-based contracts enacted worldwide since 2010. Although confidentiality clauses prevent us from collecting all the information required for an even deeper inquiry into the reasons leading contracting parties to select a given measure, we wish to verify which types of social metrics and methods parties are adopting when issuing incentive contracts tied to social indicators. After assessing potential empirical regularities, we use them to motivate a stylized moral hazard model in which a principal writes an incentive contract with a service provider but can only base the contract upon a biased metric. Using this model, we intend to draw some insights on how bias and risk affect the selection
of performance metrics. The underlying general question we answer is: how should one best design social performance metrics in social outcome-based contracts, i.e., how should such metrics be designed to impose the correct incentives upon managers and contractors? Specifically, we propose to design a short-term moral hazard contractual model that not only accounts for measurement error in performance standards, but also inserts statistical concepts, such as the power and significance level of a statistical test, as choice variables. In doing so, we propose to extend contracting models by inserting aspects from the literature on the econometrics of program evaluation, in the hope of better understanding the relative trade-offs of designing a social outcome-based contract based on different types of performance indicators.
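To fix ideas, a minimal version of the kind of model we have in mind can be sketched in the standard linear-contract framework (Holmström, 1979; Baker, 1992); the functional forms below are our own simplifying assumptions, not a settled specification.

```latex
% Stylized moral hazard sketch with a biased, noisy performance metric.
% True social impact equals effort, y = e, but the contractible metric is
\[
  m \;=\; \mu e + b + \eta , \qquad \eta \sim N(0,\sigma_{\eta}^{2}),
\]
% where b is the bias of the chosen measurement tier, \mu its sensitivity
% to effort, and \sigma_{\eta}^{2} its noise. Under a linear contract
% w = \alpha + \beta m, quadratic effort cost e^{2}/2, and CARA risk
% aversion r, the agent maximizes the certainty equivalent
\[
  \alpha + \beta(\mu e + b) - \frac{e^{2}}{2}
    - \frac{r}{2}\,\beta^{2}\sigma_{\eta}^{2}
  \quad\Longrightarrow\quad e^{*} = \beta\mu ,
\]
% and the surplus-maximizing piece rate takes the familiar form
\[
  \beta^{*} \;=\; \frac{\mu}{\mu^{2} + r\,\sigma_{\eta}^{2}} .
\]
```

In this sketch, measurement tiers map into the triple $(\mu, b, \sigma_{\eta}^{2})$: cruder tiers carry larger bias and noise, dampening the optimal incentive intensity. In the linear specification the bias $b$ only shifts the level of pay; richer specifications in which the bias correlates with external shocks would let the choice of tier distort effort as well, which is precisely the margin our model aims to explore.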
Figure 1 – Social outcome-based contracts worldwide and stringency of performance measure. Panel (A) reports totals and Panel (B) shares of contracts by measurement tier: Adm. data/hist. comparison, 54 contracts (76.1%); Comp. to aggregate data, 11 (15.5%); Quasi-experimental designs, 5 (7.0%); RCT, 1 (1.4%). Source: own elaboration based on Lazzarini et al. (2017).
By further exploring the process of selecting performance measures in incentive contracting,
this work could enhance our understanding of the optimal choice of an indicator’s stringency in
performance contracts. Especially considering socio-environmental indicators, our results could
guide practitioners to consider alternative measures when designing social outcome-based contracts.
In addition, these contracts are organizational arrangements in their infancy, making this a topic with little accumulated academic and practical experience. Indeed, the emerging market for impact investing, valued at $46 billion in 2014 (World Economic Forum, 2014), only reinforces how prominent social outcome-based contracts could become. Perhaps the main contribution we wish to pursue with this proposal is to refine the literature on contract theory by incorporating the literature on impact assessment, in order to compare the existing trade-offs in selecting performance metrics for incentive contracts. This is a challenge apparently never tackled, to the best of our knowledge, and of paramount applied concern, especially because these socially-oriented incentive contracts already exist and governments and public-oriented agencies are trying to expand their application. Proposing a common framework that inserts econometric concepts such as statistical power into incentive contracting models is a challenging objective, but one that we think holds great potential to refine a new mechanism for creating public value.
REFERENCES
Alchian, A. A., & Demsetz, H. 1972. Production, Information Costs, and Economic Organization.
American Economic Review, 62(5): 777–795.
Athey, S., & Imbens, G. 2016. The State of Applied Econometrics - Causality and Policy
Evaluation. http://arxiv.org/abs/1607.00699.
Baker, G. 1992. Incentive Contracts and Performance Measurement. The Journal of Political Economy, 100(3): 598–614.
Baker, G. 2000. The use of performance measures in incentive contracting. American Economic
Review, 90(2): 415–420.
Baker, G. 2002. Distortion and Risk in Optimal Incentive Contracts. The Journal of Human
Resources, 37(4): 728–751.
Baker, G., Gibbons, R., & Murphy, K. J. 1994. Subjective Performance Measures in Optimal
Incentive Contracts. The Quarterly Journal of Economics, 109(4): 1125–1156.
Barajas, A., Barajas, L., Burt, K., Harper Jr., T., Johnson, P., et al. 2014. Social Impact Bonds: a
new tool for social financing.
Barnett, M. L., & Salomon, R. M. 2006. Beyond dichotomy: the curvilinear relationship between
social responsibility and financial performance. Strategic Management Journal, 27(11):
1101–1122.
Baum, J. A. C., & McGahan, A. 2009. Outsourcing War: The Evolution of the Private Military
Industry after the Cold War. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1496498.
Bennett, J., & Iossa, E. 2009. Contracting out public service provision to not-for-profit firms.
Oxford Economic Papers, 62(4): 784–802.
Bolton, P., & Dewatripont, M. 2005. Contract Theory. Cambridge, MA: The MIT Press.
https://doi.org/10.1093/acprof:oso/9780198765615.001.0001.
Brest, P., & Born, K. 2013. When Can Impact Investing Create Real Impact? With Responses From
Audrey Choi, Sterling K. Speirn, Alvaro Rodriguez Arregui & Michael Chu, Nancy E. Pfund,
and Nick O’Donohoe. Stanford Social Innovation Review, 11(4): 22–31.
Bugg-Levine, A., Kogut, B., & Kulatilaka, N. 2012. A new approach to funding social enterprises.
Harvard Business Review, 118–123.
Cabral, S., Lazzarini, S. G., & Azevedo, P. F. de. 2013. Private Entrepreneurs in Public services: a
longitudinal examination of outsourcing and statization of prisons. Strategic Entrepreneurship
Journal, 7: 6–25.
Dixit, A. K. 2002. Incentives and Organizations in the Public Sector: an interpretative review. The
Journal of Human Resources, 37(4): 696–727.
Duflo, E., Glennerster, R., & Kremer, M. 2007. Using Randomization in Development Economics
Research: A Toolkit. In P. T. Schultz & J. A. Strauss (Eds.), Handbook of Development
Economics, vol. 4: 3895–3962. North-Holland.
Glennerster, R., & Takavarasha, K. 2013. Running Randomized Evaluations: A Practical Guide. Princeton, NJ: Princeton University Press.
Feeney, L., Bauman, J., Chabrier, J., Mehra, G., & Woodford, M. 2015. Using Administrative Data
for Randomized Evaluations, (December).
https://www.povertyactionlab.org/sites/default/files/documents/AdminDataGuide.pdf.
Gertler, P. J., Martinez, S., Premand, P., Rawlings, L. B., & Vermeersch, C. M. J. 2011. Impact
Evaluation in Practice. The World Bank Publications. Washington D.C.: World Bank.
https://doi.org/10.1596/978-0-8213-8541-8.
Gibbons, R. 1998. Incentives in Organizations. Journal of Economic Perspectives, 12(4): 115–132.
Gibbons, R., & Murphy, K. J. 1990. Relative performance evaluation for chief executive officers.
Industrial and Labor Relations Review, 43: 30–51.
Gibbons, R., & Roberts, J. 2012. Economic theories of incentives in organizations. The Handbook
of Organizational Economics: 56–99.
Grittner, A. M. 2013. Results-based Financing: Evidence from performance-based financing in
the health sector. no. 6/2013, Bonn. http://www.oecd.org/dac/peer-reviews/Results-basedfinancing.pdf.
Guajardo, J. A., Cohen, M. A., Kim, S.-H., & Netessine, S. 2012. Impact of Performance-Based Contracting on Product Reliability: An Empirical Analysis. Management Science, 58(5): 961–979.
Gustafsson-Wright, E., Gardiner, S., & Putcha, V. 2015. The Potential and Limitations of Impact
Bonds: Lessons From The First Five Years of Experience Worldwide.
Hart, O., Shleifer, A., & Vishny, R. W. 1997. The Proper Scope of Government: Theory and an
Application to Prisons. The Quarterly Journal of Economics, 112(4): 1127–1161.
Heckman, J. 2008. Econometric Causality. International Statistical Review, 76(1): 1–27.
Holland, P. W. 1986. Statistics and causal inference: Rejoinder. Journal of the American
Statistical Association, 81(396): 968–970.
Holmström, B. 1979. Moral hazard and observability. The Bell Journal of Economics, 10(1): 74–91.
Holmström, B. 1982. Moral hazard in teams. The Bell Journal of Economics, 13(2): 324–340.
Hood, C. 1991. A Public Management for All Seasons? Public Administration, 69(1): 3–19.
Imbens, G. W., & Angrist, J. D. 1994. Identification and Estimation of Local Average Treatment
Effects. Econometrica, 62(2): 467–475.
Imbens, G. W., & Wooldridge, J. M. 2009. Recent Developments in the Econometrics of Program
Evaluation. Journal of Economic Literature, 47(1): 5–86.
Imberman, S. A., & Lovenheim, M. F. 2015. Incentive Strength and Teacher Productivity: evidence
from a group-based teacher incentive pay system. Review of Economics and Statistics, 97(2):
364–386.
Iossa, E., & Martimort, D. 2012. Risk allocation and the costs and benefits of public – private
partnerships. The RAND Journal of Economics, 43(3): 442–474.
Kerr, S. 1975. On the Folly of Rewarding A, While Hoping for B. The Academy of Management Journal, 18(4): 769–783.
Klein, P. G., Mahoney, J. T., McGahan, A., & Pitelis, C. 2010. Resources, Capabilities, and
Routines in Public Organizations. no. 1550028. https://doi.org/10.2139/ssrn.1550028.
Lazzarini, S. G., Cabral, S., Ferreira, L. C. de M., Pongeluppe, L. S., & Rotondaro, A. 2014. The
Best of Both Worlds ? Impact Investors and Their Role in the Financial versus Social
Performance Debate. no. 2015–6.
Lazzarini, S. G., Rotondaro, A., Cabral, S., Pongeluppe, L., Schmithausen, E., et al. 2017.
Contracting for Socio-Environmental Outcomes Throughout the World: a Database. Sao
Paulo.
Margolis, J. D., & Walsh, J. P. 2003. Misery Loves Companies: Rethinking Social Initiatives by
Business. Administrative Science Quarterly, 48(2): 268–305.
Prendergast, C. 1999. The provision of incentives in firms. Journal of Economic Literature, 37(1): 7–63.
Prendergast, C. 2000. What Trade-off of Risk and Incentives? American Economic Review Papers and Proceedings, 90(2): 421–425.
Prendergast, C. 2002. The Tenuous Trade-off between Risk and Incentives. Journal of Political
Economy, 110(5): 1071–1102.
Prendergast, C. 2011. What have we learnt about pay for performance? Economic and Social
Review, 42(2): 113–134.
Roberts, J. 2010. Designing incentives in organizations. Journal of Institutional Economics, 6(1):
125–131.
Rubin, D. B. 1974. Estimating causal effects of treatments in randomized and nonrandomized
studies. Journal of Educational Psychology, 66(5): 688–701.
Rubin, D. B. 1977. Assignment to Treatment Group on the Basis of a Covariate. Journal of
Educational Statistics, 2(1): 1–26.
Shiller, R. J. 2013. Capitalism and financial innovation. Financial Analysts Journal, 69(1): 21–25.
Sloof, R., & Van Praag, M. 2015. Testing for Distortions in Performance Measures: An Application
to Residual Income-Based Measures like Economic Value Added. Journal of Economics &
Management Strategy, 24(1): 74–91.
Social Market Foundation. 2013. Risky Business: Social Impact Bonds and public services, 50.
What Works Clearinghouse. 2014. What Works Clearinghouse Procedures and Standards
Handbook. https://doi.org/10.1037/e578392011-004.
World Economic Forum. 2014. From ideas to practice, pilots to strategy II, practical solutions
and actionable insights on how to do impact investing, (September): 1–43.