February 2016

The Taylor Rule in the 1920s

by Alexander J. Field*
Department of Economics
Santa Clara University
Santa Clara, CA 95053
[email protected]

ABSTRACT

In an influential article published in 1993, John Taylor described a rule for setting interest rates that he viewed as important both descriptively and prescriptively for the study of central bank behavior and the setting of monetary policy. Rates too high risked recession, while rates too low risked inflation. More recently Taylor has suggested that low rates might also precipitate recession by generating asset price bubbles, and has criticized Federal Reserve policy between 2001 and 2004 and after 2008 on these grounds. But the posited link between low rates and bubbles remains controversial, and the proposition that interest rates are the best mechanism for deflating them is far from settled. This paper examines US macroeconomic history and policy in the 1920s, asking what a Taylor rule would have prescribed then and comparing that with what actually transpired. The results of the study are not entirely unambiguous, but they do show that for most of the period a Taylor rule would have mandated a policy rate lower than what was actually in effect, indeed often below zero. Yet rates generally above those prescribed did not, in the event, prevent a raging stock market bubble from developing in the second half of the decade. This finding adds to evidence from Britain and New Zealand in the 2000s that asset bubbles can develop when policy rates are above those prescribed by a Taylor rule, as well as when they are below.

*This paper has benefited from participant comments at a seminar at UC Berkeley on February 22, 2016, and in sessions at the ASSA meetings, San Francisco, California, January 3, 2016, the SSHA meetings in Baltimore, Maryland, November 13, 2015, and Santa Clara, October 28, 2015.
Special thanks to Barry Eichengreen, Gabriel Mathy, Kris Mitchener, Helen Popper, Christy Romer, and Tobias Rotheli for comments on earlier versions.

The Taylor Rule in the 1920s

In an influential article published in 1993, John Taylor described a rule for setting interest rates that he viewed as important both descriptively and prescriptively for the study of central bank behavior and the setting of monetary policy. Rates too high risked recession, while rates too low risked inflation. More recently Taylor has suggested that low rates might also precipitate recession by generating asset price bubbles, and has criticized Federal Reserve policy between 2001 and 2004 and after 2008 on these grounds. This paper examines US macroeconomic history and policy in the 1920s, asking what a Taylor rule would have prescribed then and comparing it with what actually transpired.

This paper represents in a sense an extension of a project adumbrated in Orphanides (2003). That paper provided detailed empirical analysis of the period following the 1951 Treasury Accord, arguing that policy closely followed the principles of the Taylor rule decades before its formulation.1 Orphanides also discussed the 1920s, but his treatment was largely descriptive and qualitative, aimed basically at showing that policy even prior to the depression was guided by the twin mandates of stable inflation and maximum employment reflected both in the Employment Act of 1946 and in Taylor's 1993 formulation of his rule. Although Orphanides' final figure included basic data for the 1919-29 period, there was no serious attempt to calculate an output gap, and no attempt to compare what a Taylor rule would have mandated with the actual course of the policy rate. This paper aims to do both.

[Footnote 1] Taylor (1999a) benchmarked interest rate outcomes against his baseline rule for the years of the classical gold standard (1879-1914) and the 1955-1997 period, but not for the interwar years.
The possible role of monetary policy in the 1920s in precipitating financial crisis and subsequent output loss during the 1930s remains an area of active inquiry. During and immediately after US involvement in the First World War (1917-18), the newly established Fed was largely subservient to the Treasury, concerned, as it would be in World War II, with minimizing the interest costs of a rising Federal debt. During the 1920s the Fed operated within the constraints imposed by the gold exchange standard. But as the central bank for an economy with a large and growing share of world output, it had considerable autonomy in influencing interest rates and exercised it, above and beyond its mandate to smooth seasonal fluctuations.2

Taylor's latest arguments, however, lead us to ask, given the series of speculative bubbles in the 1920s, whether a problem earlier in the decade might actually have been "too low" rates, as he has claimed for 2001-04 and 2008 onward. Taylor has propounded an additional channel through which excessively low policy rates might damage an economy: by fueling asset price bubbles that eventually burst, exposing an economy to the risk of years of recession, slow recovery, and possibly a reduced growth trajectory for potential output. Given the widespread interest in comparing and contrasting 1929 and 2008 (and the years before and after those dates), it is therefore worth asking whether "too low" rates (according to the rule) might, on Taylor's principles, be held responsible for the 1920s real estate and/or equity bubbles.3

[Footnote 2] There is general agreement, for example, that high rates in 1928 and 1929 helped trigger the recession that began in the latter year. These rate increases reflected the Fed's freedom of maneuver. They were chosen not with an eye to defending the gold dollar exchange rate, but with the explicit intent of deflating a domestic stock market bubble.

[Footnote 3] White (2014, p. 125) raises this possibility with respect to the mid-decade residential real estate bubble.

The Taylor Rule

Taylor's 1993 article did not mention bubbles. The Fed was assumed to have two targets – low inflation and high employment – each of which might be affected by the level of short term interest rates. His analysis represented an attempt to preserve the principle of rules vs. discretion in monetary policy following the disappointing attempts in Britain and elsewhere to implement a constant growth rate of the money supply rule, which had long been advocated by Milton Friedman, David Laidler, and many others. Given its enormous influence and the fervor it had inspired, the death of monetarism – killed by actual attempts to implement its central policy prescription in the face of unstable velocity – was remarkably swift.4 Intended or otherwise, Taylor wrote its epitaph, and his work would change the way macroeconomics was (and is) taught and influence the way it is practiced. No central bank today targets a monetary aggregate, and no macroeconomist advocates that they should, a huge change from the late 1960s, 1970s, and 1980s. Taylor quotes Laidler in 1991 throwing in the towel on his long advocacy of a monetary growth rule (1993, p. 197). But Taylor did not want to give up on rules: he simply wanted one that would better describe what central banks actually did most of the time and provide a standard against which their policies might be judged. The stated rationale for the functional form of Taylor's rule was therefore not the then recent (and disappointing) policy experience with constant growth rate mandates, but rather the superior performance, in terms of variability of prices and output, of a rule that focused on these variables directly, as opposed to a monetary aggregate or exchange rate target.
Without referring to the monetarist experiments specifically, Taylor simply said that a monetary growth rule was "not practical" (1993, pp. 200, 208).

[Footnote 4] The belief in stable velocity was so strong that conventional wisdom and textbook exposition in the early 1980s treated the growth rate of the money supply and the growth rate of nominal income as, for practical purposes, almost identical. See, for example, Gordon, 1982. Subsequent experience has shown that income velocity is subject to substantial and unpredictable perturbations beyond those associated with variation in nominal interest rates.

Taylor began by positing a long run real interest rate of two percent. This was the Wicksellian natural rate at which, Taylor assumed, actual output would equal potential. He assumed an inflation target of two percent. Friedman had argued for slight deflation as optimal, since it minimized cash management costs. But two percent was low and provided some cushion against the possible danger of deflation and the asymmetry implied by the zero lower bound on nominal interest rates.5 A two percent inflation target has subsequently been adopted officially by a number of central banks including the Fed. These assumptions are at the core of Taylor's baseline rule.

Taylor defined the inflation gap as the deviation of current inflation (pt) from the inflation target (two). The output gap he understood as the percentage deviation of actual output (yt) from a "target" (y*), which he defined rather vaguely as what would be predicted by a trend (1993, p. 202). The lack of specificity about how a trend is calculated is quite important, because, as will be argued below, the methods used can matter. An output gap is the difference between actual and potential output, commonly defined as the highest level of output that can be sustained without so stimulating an economy that it experiences an acceleration of inflation.6

[Footnote 5] Friedman was more sanguine about deflation than current policy makers. His advocacy for zero nominal interest rates was paired with the view that with some deflation, real rates of return on money and bonds could be equalized, which he viewed as desirable from an efficiency standpoint.

[Footnote 6] The term natural output, popularized by Friedman, is a synonym. Okun's law suggested that there was a systematic relationship between changes in the unemployment rate and changes in the output level. If the output gap is understood as the difference between actual and potential GDP, there will also be a systematic relationship between the unemployment rate and the output gap. Thus the unemployment rate is often used, along with Okun's law, in calculating an output gap.

Assuming, for the moment, that an output gap can be easily measured in ways that command broad agreement,7 here are the mechanics of the original (1993) version of the Taylor rule. If both the output gap and the inflation gap are zero (which means that the inflation rate is exactly two percent per year and the economy is at potential), the rule mandated a nominal interest rate of four, yielding the desired real interest rate of two. If there is an inflation gap, then the rule said to set nominal r equal to two plus the actual inflation rate plus half the amount by which actual inflation exceeded two. If there is also an output gap (if output is below potential, the output gap is a negative number), then reduce the mandated r by half the gap in percentage points. Stated more colloquially, if inflation is above target, hit it hard with higher nominal interest rates, but if there is a negative output gap, try to close it with lower nominal interest rates. The Taylor rule described and prescribed how these sometimes conflicting imperatives were or should be reconciled:

rt = 2 + pt + .5(pt – 2) + .5(yt – y*)

There is widespread agreement that setting interest rates too high can produce a recession.
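The mechanics just described can be expressed as a one-line function. The sketch below is illustrative only (it is not from the paper); the Okun's-law helper and its coefficient of 2 are assumptions for the example, not estimates.

```python
def taylor_rate(inflation, output_gap, natural_rate=2.0, target=2.0):
    """Nominal policy rate prescribed by Taylor's (1993) baseline rule:
    r = natural_rate + inflation + 0.5*(inflation - target) + 0.5*output_gap."""
    return natural_rate + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

def okun_gap(unemployment, natural_unemployment, coefficient=2.0):
    """Rough output gap (percent) from unemployment via Okun's law.
    The coefficient of 2 is purely illustrative."""
    return -coefficient * (unemployment - natural_unemployment)

# Both gaps zero: the rule returns the benchmark four percent nominal rate.
print(taylor_rate(inflation=2.0, output_gap=0.0))   # 4.0
# Inflation at target but a deep slump: the prescription turns negative.
print(taylor_rate(inflation=0.0, output_gap=-6.0))  # -2.0
```

A prescribed rate below zero, as in the second call, is precisely the situation this paper reports for stretches of the 1920s.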
Doing so can generate a real interest rate above that which will elicit sufficient autonomous planned expenditure to ensure that output is at potential. Thus, an excessively tight or contractionary policy can cause an economic downturn by generating, through its effect on investment and consumer durable spending (as well, in a flexible exchange rate environment, on exports), a deficiency of aggregate demand. What were the dangers of setting interest rates too low? In the context of Taylor's original article, the risks were limited to inflation and possibly depreciation of the exchange rate which, by raising import prices in domestic currency, would add to inflationary pressures.

[Footnote 7] RBC proponents reject the concept of an output gap as useful, or define it in new ways (see below). Over the long run the growth trajectory of potential is likely to approximate the growth rate of actual output. A key question is whether time series data on actual output can be used to approximate the level of potential output, as well as its growth rate. An output gap is a statement about levels, not rates of growth, and over the medium to long term, the average level of actual output will likely deviate from the average level of potential – indeed, I will argue that it is likely to be lower.

More recently, Taylor has criticized the Fed for having run interest rates that were too low between late 2001 and the end of 2004. But in terms of the original formulation of the Taylor rule, there is little evidence that these rates damaged the economy. The negative consequences of too low interest rates suggested in Taylor's original article – high and/or rising inflation and/or a depreciating exchange rate – simply did not develop during or after this period. The inflation rate remained moderate, and the dollar remained relatively strong.
Employment growth was, moreover, sluggish, casting doubt on the likelihood that too low rates were risking inflation by causing the economy to press too tightly against potential from below. So what is Taylor's case that low rates between 2001 and 2004 damaged the economy? The argument is that low rates were responsible for the spectacular appreciation in housing prices that began in 2001 and peaked in 2006 (Taylor, 2012, p. 35).

This adds a new complexity to the policy rule. Asset bubbles and the financial crises to which they might lead were simply not on the radar screens of most macroeconomic and monetary economists in 1993, at least insofar as the US economy was concerned. Since the housing boom and the ways in which it was financed are now widely understood to have been among the most important factors precipitating the financial crisis as well as the recession and slow recovery from which we are still suffering, Taylor has arrived at the position that recessionary dangers can inhere both in interest rates that are too high, and in those that are too low, to the degree that they fuel an asset bubble that subsequently collapses, leading to a long recession, slow recovery, and possible negative effects on the growth of potential. Rates too low risk inflation, and if they do not in fact produce inflation, they risk subsequent recession.

This type of thinking has become increasingly influential. The concerns of the Bank for International Settlements are illustrative. For a number of years it has warned about the risks of financial bubbles and their consequences.
Its 2015 Annual Report argues that "Domestic policy regimes have been too narrowly concerned with stabilizing short-term output and inflation and have lost sight of slower-moving but more costly financial booms and busts." The original objectives of the Taylor rule – preventing an acceleration of inflation and closing output gaps – are now dismissed as "illusory short term macroeconomic fine-tuning" (2015, p. 3).

Yet it remains a matter of considerable controversy whether low nominal interest rates are necessary to produce an asset price bubble or whether higher rates (above those mandated by the rule) are sufficient to prevent one.8 Britain, for example, experienced a more extreme housing price boom in the 2000s than that in the US, but one not associated with policy rates that were particularly low.9 In the US, the federal funds rate dropped below two percent on December 11, 2001, and did not rise above that level again until November 10, 2004. This is the period to which Taylor calls our attention in seeking the origin of the housing bubble. During this same period, in contrast, the British policy rate (bank rate) ranged between 3.5 and 4.75, roughly double the range in the US over the same period (http://www.newyorkfed.org/markets/statistics/dlyrates/fedrate.html; http://www.bankofengland.co.uk/monetarypolicy/Pages/decisions.aspx, accessed June 14, 2015). As is now recognized, a severe housing price bubble developed in Britain as well as in the United States, leading one to ask whether too low rates should necessarily be identified as the cause of the housing price inflation in the United States.

New Zealand's experience illustrates the same point. The official cash rate ranged from 4.75 to 6.5 percent between 2001 and 2004, but house price appreciation was higher than in Britain. Even taking into account slightly higher inflation in New Zealand, these were high real interest rates. See http://www.rbnz.govt.nz/monetary_policy/ocr/, accessed October 30, 2015.

[Footnote 8] The deflation of an asset price bubble does not necessarily entail severe cumulative output loss. Compare the effects of the collapse of the dot com bubble in 2001 with what happened from 2007 onwards. The degree to which asset holders (and lenders to them) are highly leveraged can make a huge difference.

[Footnote 9] In his address to the annual AEA meeting in 2010, Bernanke (2010) compared house price appreciation between 2001 and 2006 and the degree of monetary ease. He noted that "11 of the 20 countries in the sample had both tighter monetary policies, relative to the standard Taylor-rule prescriptions, and greater house price appreciation than the United States."

Taylor now argues that rates below those prescribed by his rule can fuel asset bubbles. The British and New Zealand cases present a problem for his analysis, since we have the asset boom in the absence of exceptionally low bank rates. If we find for the 1920s that interest rate policy was generally above that which would have been prescribed by a Taylor rule, we will have another instance of an asset bubble developing in the absence of rates lower than mandated by the rule. Such instances should cause us to question the tightness of the linkage between "too low" rates and asset bubbles.

Taylor has criticized the Fed not only for 2001-04, but also for the near zero policy interest rates from 2008 onward. His position is that the low rates in 2001-04 gave us the housing bubble, financial crisis, and slow recovery from which we are still suffering, and that the low rates after 2008 are fueling new asset price bubbles, the next financial crisis, and the next recession. These are worrisome possibilities.
Taylor is not now arguing that the low rates have produced inflation and currency depreciation (the evidence to date does not show this), although in 2010, in an open letter to Fed chairman Ben Bernanke also signed by other economists, he did warn against the likelihood of those outcomes. Eight years of near zero rates and a quintupling of the size of the Fed's balance sheet (this is written in 2016) did not produce inflation or currency depreciation. Nevertheless, Taylor's concerns about fueling bubbles and future crises deserve serious consideration. In July of 2011, Sheila Bair, then head of the FDIC, criticized Fed policy on the grounds that it was producing a "bond bubble" (Bernanke, 2015, p. 515).

As a matter of financial economics, a prolonged period of low short rates associated with stable inflation (so that the move to lower nominal rates is also a move to lower real rates) is likely to strengthen the prices of financial assets such as bonds, by reducing the rate at which future revenue streams are discounted and, by stimulating the real economy, reducing default probabilities. Thus we should generally view the strengthening effect on bond prices of a prolonged period of low policy rates as intended, and not as an undesired side effect of policy, particularly in the face of the large scale asset purchases known as quantitative easing. The effect of low rates on stock prices is less predictable (see Shiller, 2007), although a number of knowledgeable observers expressed concern about an excessively ebullient stock market under the low policy rate regime. These included Fed chair Janet Yellen, who, on May 6, 2015, warned that stock prices might again be bubbly (http://www.bloomberg.com/news/videos/2015-05-07/yellen-warns-stock-valuations-quite-high).
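The discounting mechanism described in the preceding paragraph can be made concrete with a toy bond-pricing calculation. The ten-year, four percent coupon bond below is purely illustrative, not drawn from the paper's data.

```python
def bond_price(face, coupon_rate, years, discount_rate):
    """Present value of a bond paying annual coupons plus face value at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + discount_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + discount_rate) ** years
    return pv_coupons + pv_face

# When the discount rate equals the coupon rate, the bond trades at par.
print(round(bond_price(100, 0.04, 10, 0.04), 2))  # 100.0
# Halving the discount rate raises the price of the identical bond.
print(round(bond_price(100, 0.04, 10, 0.02), 2))
```

The second call shows the channel at work: a lower rate at which the fixed revenue stream is discounted mechanically strengthens the bond's price, which is why a "bond bubble" under low policy rates is hard to distinguish from the intended effect of policy.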
Nevertheless, before increasing interest rates with the intent of deflating a bubble (a policy pursued for precisely these reasons in 1928 and 1929), we should consider historical and contemporary evidence relevant to these two questions: Are low rates necessarily the fuel for bubbles (what about Britain or New Zealand in the 2000s)? And are rates above those mandated by a Taylor rule sufficient to prevent bubbles and/or the best means of combating them?

Although Taylor has not indicated how a price bubble is to be recognized and measured, modified the formal specification of his rule to incorporate a response to asset bubbles, or specified what the interest rate response coefficient should be if the values of an asset class exceed "fundamental value", he is now implicitly positing three rather than two policy objectives (targets) for monetary policy makers armed with but one instrument (interest rate policy). Even without a full specification of a modified rule, however, we can be almost certain that introducing the possibility that either asset price inflation or goods and services price inflation should trigger tightening will result in an average level of nominal and real rates over time higher than would have eventuated under a rule that looked only at the output gap and goods and services inflation. The reason is simply that we now have not one but two possible triggers for higher rates. Partly this is due to an asymmetry: there is more emphasis on the prospect that high or rising asset prices should trigger higher policy rates than on the prospect that low or falling asset prices (independently of the signals offered by the inflation rate and output gap) should trigger lower rates. Thus over the short term, there would almost certainly be more periods in which the ex post real interest rate exceeded the two percent Taylor has assumed the Wicksellian "natural" rate to be, or whatever other value the natural rate may be.
At least over the short to medium run, this will result in lower trajectories for output and employment, and higher trajectories for unemployment, than otherwise would have resulted. It is possible that a policy focusing on asset price inflation as well as goods and services price inflation would nevertheless reduce the likelihood and severity of a 2008-style financial crisis, and of the recession, slow recovery, and slowed growth of potential output that followed. From a longer run perspective, output losses in the short run might be more than compensated for by the avoidance of severe cumulative output losses following financial crisis. But to pursue a version of a modified Taylor rule that also responds to divergence of asset values from "fundamental value" would be to trade the near certainty of output loss now for the uncertain possibility of avoiding possibly larger losses in the future. It is a wager that might be worth making, but one that deserves careful scrutiny.10 In the worst case, it might be no more reasonable than making sacrifices to the gods in hopes of avoiding volcanic eruptions or earthquakes.

If we agree that identifying and deflating asset bubbles are appropriate concerns for policy makers, we need to inquire whether there may be other policies that are more effective in controlling them and involve fewer costs in the short run. These include especially the direct control of leverage through the regulation of margin limits on stock purchases and minimum down payments on housing purchases, and the imposition of capital requirements in the financial sector that are both counter-cyclical and on average higher.

The study of the macroeconomic history and policy of the 1920s is relevant to all of these questions. Since macroeconomics is not an experimental science, and certainly not one involving controlled experiments, there is a particular premium on trying to learn from the experiments provided by history.
Interest Rates

To assess the applicability of a Taylor rule to the 1920s, both descriptively and prescriptively, we will need data on interest rates, inflation rates, and deviations of actual from potential output (output gaps). In each case, and particularly in the case of estimates of the output gap, the selection of data poses challenges. I begin with interest rates.

[Footnote 10] The BIS report makes the case for the wager in these terms: "If the ultimate criterion for a successful monetary policy is to promote sustainable economic growth and, in the process, help avoid major macroeconomic damage, then a rebalancing of policy priorities towards greater attention to financial stability would seem justified" (2015, p. 82).

In the 1920s, the principal instrument of Federal Reserve monetary control was the discount rate, the rate that member banks paid to obtain additional reserves (high powered money) through the sale (discounting) of eligible paper to the central bank. As figure 1 shows, the New York Fed discount rate appears to have exercised a strong influence on short term rates in the New York money market. An individual, nonfinancial corporation, or financial institution with money to lend short term in open markets had five basic options in the 1920s: Treasury bills (the safest), commercial paper, bankers' acceptances, demand or call loans on collateral (generally stocks), or time loans on collateral (Hubbard, 1920, p. 23). As figure 1 shows, these rates covaried closely with each other and with the discount rate.

Figure 1 displays monthly values of the New York Fed discount rate and short term private sector rates: the rate on bankers' acceptances, the call money rate, and the rate on 4 to 6 month commercial paper. The rate on bankers' acceptances (which can be thought of as postdated checks whose payment has been certified or guaranteed by the account holder's financial institution) tended to run a little bit below the discount rate.
The 4 to 6 month commercial paper rate tended to be a little bit above the policy rate, reflecting a premium associated with the default risk of the corporations issuing it. The call money rate was the most volatile, generally above but sometimes below the discount rate, reflecting the varying perceptions of risk associated with collateralized lending to those acquiring stocks on margin. Because of their close covariation with the discount rate, and because the latter was the policy rate, I will use the New York Fed discount rate series in my comparisons with policy rates that might have been prescribed by a Taylor rule.

[Figure 1: Short Term Interest Rates, United States, 1919-1929. Monthly series: bankers acceptances, call money rate, New York Fed discount rate, and commercial paper. Source: FRED database. Discount rate series is for the New York Fed, not seasonally adjusted: series M13009USM156NNBR. Accessed May 17, 2015.]

Output Gaps

The most challenging and possibly most controversial aspect of this investigation is the construction of a series on output gaps, which requires estimates of both actual and potential output. In policy circles and most macroeconomics textbooks, potential output is defined as the highest level of real output an economy can sustain without receiving so much demand stimulus (nominal income growth) that it experiences an acceleration of inflation.
The labor force and its characteristics, the amount and qualities of physical capital, the availability of natural resources (including the weather), the level and availability of technological knowledge, and the nature of the legal system and political regime can all affect it.

Not all economists are interested in measuring output gaps. Real business cycle proponents do not distinguish between actual and potential, so the concept of an output gap is of little interest, and there is no imperative to estimate potential as something distinct from actual. Cycles are to be explained as the result of the same kinds of supply shocks ("technology shocks") that help determine long run growth. In new Keynesian DSGE models, potential output is understood as what output would be if all nominal wage and price rigidities were eliminated (see Mishkin, 2007). Mishkin suggests that this definition has "much in common" with the standard definition, although it seems to me they point in quite different directions in terms of policies that could help close gaps: the former implying that broad structural reform (to remove rigidities) might be front and center, while the standard definition focuses attention on the possible ameliorative roles of fiscal and monetary policies.11

Assuming we are interested in measuring output gaps, and we will need to be if we wish to study what a Taylor rule might have prescribed in the 1920s, we will need estimates of potential as well as actual. In 1979 Schwert and Plosser noted that

…most efforts to estimate potential output … are essentially equivalent to extrapolation of output. The details of different approaches really amount to questions of whether the trend line should pass through the peaks of past output (as Okun's approach implies…) or through the middle of past output….
The former approach produces an "output gap" which is always positive, implying a need for stimulative government policies, while the latter approach implies that the "output gap" can be both positive and negative (1979, p. 185).

[Footnote 11] Mishkin also claims that "the DSGE definition accords with the idea that potential output is the level of output at which inflation tends neither to rise nor fall." But this I think is wrong. If there are no nominal wage and price rigidities, the rate of nominal income growth would be completely neutral with respect to output. The price level and/or inflation could be rising or falling at any rate.

The final sentence is not quite right, since the Okun vision allows output to be above potential, provided there is evidence of accelerating inflation. As Okun put it, the objective is "maximum production without inflationary pressure" (1962, p. 99). The Schwert and Plosser quote does however succeed in identifying two distinct approaches to estimating the nature of the trajectory of potential and deviations of actual from it. The first, which we can call trends through peaks, can seem primitive to modern economists. Although rooted in NBER traditions, it requires careful attention to data and the exercise of judgment. And it imposes a constant growth rate of potential between business cycle peaks.

For those dissatisfied with trends through peaks, it has become common to use a filter, in particular Hodrick-Prescott, to estimate trend based on data on actual GNP. The technique allows the trend growth rate to vary continuously, thus addressing one of the perceived deficiencies of the older methodology, and also allows the researcher to specify a smoothing parameter (lambda), influencing how much variation is allowed in the growth rate. The HP filter minimizes the sum of squared deviations of trend from actual plus a term equal to lambda times the sum of the squared second differences in the trend.
Second differences are changes in the rate of change, so the latter term imposes a penalty on variation in the rate of change. If there is no constraint on the variability of trend (lambda = zero), the "fitted" trend will just pass through the data points of actual output. With a very high lambda one will approach a straight line with a slope equal to what one would obtain with ordinary least squares. The smoothing parameter commonly used in the literature for annual data is 100. The HP method pushes trend (interpreted as potential) through what Schwert and Plosser call the "middle of past output" in the sense that the deviations of actual from potential will be symmetrically distributed above and below potential. If the filter is run on logged values of actual output, the sum of the deviations will, moreover, be exactly zero. In many cases – not necessarily by RBC proponents, but by those who use its favored statistical methods, including New Keynesians – the filter-produced trend series is interpreted as representing the evolution of potential output (Gibbs, 1995, p. 89). Indeed, although Taylor (1993) is not explicit about how the trend of potential is to be retrospectively constructed, Taylor (1999a, pp. 327-8) reflects adoption of the use of the HP filter – illustrative of the second approach referenced in the Schwert and Plosser quote. On the other hand, there is no evidence that Taylor has abandoned the standard definition of potential output: the highest level of output the economy is capable of sustaining without receiving so much stimulus that it experiences an acceleration of inflation. The problem is that these positions are contradictory. Using the HP filter to identify potential will identify many years as "above potential" which were in fact experiencing stable or decelerating inflation. There are several problems with the HP approach.
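Before cataloguing those problems, the mechanics of the filter are worth pinning down. The following is a minimal numpy sketch of the filter's closed-form solution (my own illustration, not code from any source used in this paper); it reproduces two properties noted above: with lambda = 0 the "trend" simply passes through the data, and the residuals from the fitted trend sum to zero.

```python
import numpy as np

def hp_filter(y, lamb=100.0):
    """Hodrick-Prescott trend: choose tau to minimize
    sum((y - tau)^2) + lamb * sum((second differences of tau)^2).
    The first-order conditions give tau = (I + lamb * D'D)^{-1} y,
    where D is the (n-2) x n second-difference operator."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return np.linalg.solve(np.eye(n) + lamb * D.T @ D, y)
```

Run on logged annual output with lambda = 100 (the conventional annual setting), the deviations y - hp_filter(y) sum to zero by construction: this is precisely the symmetry property at issue in what follows.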
One is that it frequently leads to suggestions, usually after the fact, that an economy was above potential prior to a downturn, because it was above trend. This is of course conceivable. It is possible within a wide variety of theoretical specifications for output to be above potential (in which case it will be associated with accelerating inflation) as well as below potential (in which case there will be a negative output gap associated with involuntary unemployment, and, eventually, deceleration in inflation). But the conclusion that an economy was above potential prior to a downturn is partly an artefact of the algorithm underlying the HP filter. If, following a downturn, the most recent observations are included in the data inputted into the filter, it is almost inevitable, using recommended smoothing parameters, that the HP trend will be below actual prior to the downturn. Even more consequentially, if one interprets the HP trend of actual as reflecting the evolution of potential output, the filtering algorithm carries with it and indeed imposes the view that deviations of actual from potential are symmetrically distributed above and below potential. This does substantially more violence to reality than the imposition of a constant growth rate of potential between business cycle peaks, the often cited flaw in the trends through peaks approach. I come down on the side of Milton Friedman, as well as Arthur Okun, in arguing that, with the possible exception of periods bracketing war,12 it is unlikely that these deviations will be symmetrically distributed. Friedman's "plucking" model of the business cycle, first introduced in 1964, viewed deviations from potential as asymmetrically distributed below (Friedman, 1969, p. 274).
His evidence was that there was "no systematic connection" between the size of an expansion and the size of a subsequent contraction, whereas the reverse was true for the relationship between the size of a contraction and the size of a subsequent expansion. He saw no reason to modify his views when he revisited the issue three decades later (1993, pp. 171-72). In 1988 De Long and Summers cautiously endorsed similar views. An HP filter produces smoothed estimates of the growth trajectory of actual GNP. Over the long run the growth rate of actual is likely to approximate the growth rate of potential. But what we need in order to calculate output gaps are estimates of levels, particularly levels of potential output. The fact that output was growing at above filter-produced trend in a given year doesn't necessarily mean it was above potential. The latter is a statement about levels, whereas the former is an observation about rates of growth. Although RBC pioneers such as Prescott might have objected, others using RBC empirical methods, including Taylor, have adopted the practice of using the HP filter to calculate trend from data on actual output and then using that calculated trend, interpreted as the evolution of potential, to infer in a particular year whether we were above or below potential. My methods will be closer to Okun or Friedman than Lucas or Prescott: a blend of data and narrative history, characterizable as the trends through peaks approach popular in the 1960s and 1970s (see Gibbs, 1995).

12 The economy was clearly above potential during the Second World War. Massive fiscal and monetary stimulus drove unemployment to extremely low rates, and inflationary pressures were temporarily repressed with rationing, price controls, and the unavailability of important consumer durables. When controls were lifted after the war, inflation accelerated between 1945 and 1950.
I start by trying to identify particular years or months where the economy was at potential by searching for a combination of both high output/low unemployment and price stability. Such a combination should indicate points in time where the economy was at or close to the "natural" or non-accelerating inflation rate of unemployment associated with natural or potential output, where, by definition, the output gap would be zero. I then use output for years when the economy was at or close to potential to infer trend, assuming that potential grew at a constant continuously compounded rate between and beyond them. That assumption, of course, is not likely to be exactly true. That said, I insist that, especially retrospectively, identification of a period as "above potential" must be validated by evidence of an actual acceleration of inflation. This principle, which until recently was uncontroversial, and a feature of every mainstream macro textbook, comes under explicit attack in the 2015 Bank for International Settlements Annual Report, which argues that "the behaviour of inflation may not be a fully reliable guide to sustainable (or potential) output" (2015, p. 14). Lest the reader miss the significance of this claim, and the challenge to established ways of thinking it represents, it is italicized. RBC proponents have subsequently suggested that if we insist on talking about an output gap it might be thought of as the difference between where we are now and an ideal state in which all favored structural reforms, resulting in a perfectly competitive economy, have been implemented. New Keynesians walk this back somewhat by restricting the counterfactual to the elimination of nominal wage and price rigidities. Traditional Keynesians, as well as Milton Friedman, took the economic system more or less as it was and focused on the available policy levers (fiscal and monetary policy) that might close a gap.
Actual Output

To gauge an output gap one needs estimates of both actual and potential. As Orphanides and van Norden (2002) have noted, constructing a measure of an output gap in real time is especially challenging. Aside from the difficulties of estimating potential, real time measures of actual are at best preliminary and often subject to subsequent revision. This contributes additional uncertainty about the size of a possible gap. If a gap is estimated to be larger than it actually is, inflation may result, because monetary and fiscal policy may be too loose. If estimated to be smaller than it actually is, unnecessary unemployment and output loss may be the consequence. The challenge of obtaining output gap estimates is, however, not necessarily easier when addressing historical data. The Bureau of Economic Analysis maintains estimates of actual GNP going back to 1929. These are of course no longer "preliminary", although still subject to (less frequent) revision as, for example, when chained index methods were introduced. Estimates prior to 1929 have engendered lively controversy in recent years. Simon Kuznets, the developer of the NIPA framework, constructed an annual series, ultimately going back to 1869, but was adamant that although he thought the numbers useful for calculating trend growth rates, he did not think they were sufficiently accurate to form the basis for the exploration of cyclical variation (Kuznets, 1961; Rhode et al, 2005). His pre-1919 estimates were based on commodity (goods) data from Shaw (1947), which Kuznets marked up by assumed margins for transportation and distribution. The size of those margins, and more generally the question of whether service sector (non-commodity) output varied roughly one for one with goods output, lay at the heart of Christina Romer's subsequent questioning of how much more moderate had been post World War II business cycles relative to those that preceded them (Romer, 1989).
For 1919-1929, however, Kuznets was able to construct his estimates from the income side, rather than less complete data on the product side, and his estimates for these years are widely assumed to be of higher quality than the pre-1919 estimates. For 1919-1929, the years with which we are concerned in this paper, the vexing issues of the size of transportation and distribution margins and, more generally, the elasticity or comovement of GNP with respect to fluctuations in commodity output do not arise. Most of Romer's revisionism applies to the pre-1909 data. Romer also endorses income side estimates for 1909-1918 contained in an appendix to Kuznets (1961), although Kuznets believed these numbers to be less accurate than those for 1919-29, and ultimately judged his product side estimates for 1909-18 to be superior (see Weir, 1986, p. 355). Kendrick (1961) made adjustments to the treatment of government expenditure in Kuznets' GNP series to make them more comparable to the annual series from 1929 onward maintained by the BEA. Kuznets had treated most government expenditure as an intermediate good, not part of final product. Kendrick's series have since become the standard ones referenced by students of the 1920s. Romer's annual GNP series for 1919-1929 (reproduced below) are Kuznets' income side estimates with Kendrick's adjustments and some other minor adjustments. Romer's big differences with Kuznets involve how much pre-1909 GNP was likely to have varied with changes in commodity output. Kuznets used freehand regression to estimate the elasticity of GNP with respect to goods production using data for the years 1909-1938 (Kuznets, 1961, pp. 536-37). Based on data from these years, he concluded that the elasticity approached one, and used this to backcast a GNP series using goods data for the earlier years. Romer reestimated the elasticity using data from 1909 to 1985, but excluding the years of the Great Depression and the Second World War (1929-46).
She made a number of other more minor changes – using log differences rather than ratios, allowing the elasticity to vary over time, and using what she described as "normal" rather than peak years to establish trend from which deviations might be calculated. Based on these regressions she found that the elasticity was not in fact very time sensitive: "the time varying coefficient measuring the sensitivity of GNP to commodity output fell from .583 in 1909 to .527 in 1985" (Romer, 1989, p. 20). Her more important result, however, was not the small difference in these two numbers but rather their moderate size. Kuznets had concluded that GNP varied almost one for one with commodity output. Romer argued that the elasticity was closer to .5 or .6. Thus her estimates of pre-1909 GNP are much less volatile than Kuznets', which is what underpins her conclusion that the pre-World War I business cycle was not markedly more severe than the post-World War II cycle. Romer justified excluding 1929-1946 from her regression on the grounds that we could expect the elasticity of GNP with respect to commodity output to have been unusually large during these years because the fluctuations in both series were so substantial. She maintained that it would be inappropriate to extrapolate backwards from this "abnormal" period to more "normal" years between 1869 and 1908, and she attributed Kuznets' high elasticity estimate in part to his inclusion of the years 1929-1938 in his estimating regression. Weir (1986, p. 355) questioned Romer's exclusion of 1929-46, arguing that there was little evidence of a structural break during the depression years. Nevertheless, Romer's argument about the relative severity of the pre-World War I business cycle, and her means of reaching that conclusion, have subsequently been widely accepted.
Table 1: Annual Estimates of GNP and the Civilian Unemployment Rate, United States, 1919-1929

        Balke-Gordon  Balke-Gordon  Romer GNP     Romer         Lebergott/Weir
        GNP           Implicit GNP                Implicit GNP  Civilian
                      Deflator                    Deflator      Unemploy. Rate
        Ca213         Ca215         Ca216         Ca218         Ba475
        Billion 1982  Index         Billion 1982  Index         Percent
        dollars       1982=100      dollars       1982=100

1919    507.1         15.2          503.9         15.6           2.34
1920    496.3         17.6          498.1         17.7           5.16
1921    478.8         15.3          486.4         15.1          11.33
1922    513.2         14.2          514.9         14.3           8.56
1923    585.0         14.6          583.1         14.7           4.32
1924    600.5         14.6          600.4         14.5           5.29
1925    614.1         14.9          615.1         14.8           4.68
1926    651.0         15.0          655.0         14.8           2.90
1927    654.6         14.7          661.4         14.5           3.90
1928    666.7         14.6          669.3         14.6           4.74
1929    709.6         14.6          709.6         14.6           2.89

Source: Carter et al, Historical Statistics of the United States (2005): Series Ca213, Ca215, Ca216, Ca218, Ba475.

From the standpoint of this paper, the important takeaway is this: controversies surrounding estimates of pre-1929 GNP mostly involve the years before 1909; there is more consensus about 1919-29, the period with which we are concerned. Contemporaneously with Romer, Nathan Balke and Robert Gordon (1989) also published revised annual series for GNP prior to 1929. In backcasting GNP, they brought to bear not just the Shaw commodity output series but also additional information on non-commodity output. Their major differences with Kuznets, and with Romer, again involved the pre-1909 data. The Romer and Balke-Gordon estimates are very similar to each other for the years 1919-1929 (see table 1). I split their small differences, and use an average of the two as an annual GNP series for 1919-1929.

Potential Output

Annual estimates and forecasts of potential are currently provided by the Congressional Budget Office in their publication Budget and Economic Outlook, which appears each January. The CBO methodology for forecasting involves extrapolating trends in the growth of the components of potential output: the potential labor force, capital services, and TFP.
The CBO has constructed estimates of potential output going back to 1950 but not earlier (see Carter et al, 2005, series Cb23). CBO methods explicitly endorse the criterion that in order for a period to be above potential, it must be associated with accelerating inflation (Congressional Budget Office, 2004).13 Again, my approach is to search for benchmark years with high employment and low or nonexistent inflation. My first candidate is 1929. Why 1929? The National Bureau's chronology of business cycles identifies four expansion (trough to peak) business cycle phases between 1919 and 1929: March 1919 to January 1920 (10 months), July 1921 through May 1923 (22 months), July 1924 through October 1926 (27 months), and November 1927 through August of 1929 (21 months).14 The Bureau committee responsible for business cycle dating has always taken pride in eschewing a mechanical algorithm, instead considering and evaluating multiple sources of data. For this decade, however, the chronology appears to be driven almost entirely by the Fed's industrial production series, since every single NBER peak or trough corresponds exactly to a low or high point in that monthly index, with the exception of the very last (industrial production peaks in July of 1929 whereas the chronology shows the cycle peak in August). The argument that for annual data, we should consider the US economy at potential in 1929 is also based on the fact that although the unemployment rate was very low (2.89 percent of the civilian labor force) there is no evidence at all of goods and service price inflation.

13 "Many methods used to compute potential output do not benchmark their trends to inflation or any independent measure of capacity and therefore cannot be interpreted as estimating the level of maximum sustainable output. That is, they provide a measure of trend output but not potential output." Congressional Budget Office, 2004, p. 2.
Thus I differ with Orphanides who suggests that the economy was above potential in that year because output growth was "above normal." That's not enough. One has to be able to show, at least retrospectively, evidence of an acceleration of inflation. Both Romer and Balke-Gordon show no change in the implicit price deflator between 1928 and 1929 (see table 1). In other work Gordon as well has repeatedly chosen 1928 as a benchmark, suggesting that in 1929 the economy was above potential. But the absence of inflationary pressures in 1929 argues against this view (see Field, 2003). One candidate for a second benchmark is 1919, given the low unemployment rate (2.34 percent). Because of demobilization and the inflation situation, however, the choice of that year is more problematic. Economic activity in 1919 was affected by a highly unstable economic and political environment following the First World War. A short wartime recession ended, according to the NBER, in March of 1919, followed by expansion through January of 1920.

14 Romer's "remeasuring" of pre-World War II business cycles results in only minor changes in chronology for the 1920s. The July 1921 trough gets moved back to March of that year. The October 1926 peak gets advanced to March of 1927. The November 1927 trough and the August 1929 peak each get moved forward a month. See Romer, 1994, p. 592. Of the eight turning points in the chronology between 1919 and 1929, four are unchanged and two are altered by only one month. This is a much higher rate of agreement than between 1887 and 1917, when only five of the sixteen turning points are unchanged using Romer's methods. The 1919-1929 adjustments are consistent with the pattern of advancing the peaks and retarding the troughs that is central to Romer's contention that pre-World War II expansions were not that much shorter than those in the postwar period. See Romer (1994, pp. 592-593).
Inflation during 1919 remained high, as it had during the war, although the monthly inflation rate, calculated on a year over year basis, did not accelerate (see data below on calculation of inflation rates). Romer's implicit price deflator, however, increased modestly from 1918 (2.4 percent year over year), and Balke-Gordon show a substantial 13 percent increase between the two years. So the picture is clouded: the recession and slow recovery provide grounds for believing the economy might have been below potential, but the possibly accelerating rate of inflation provides support for the view that it might have been above potential. The standard explanation for the continuing postwar inflation is overhang from the monetization of debt issued to finance the war (Kemmerer, 1918; Hollander, 1920). During the war the Treasury repeatedly borrowed by selling revenue anticipation certificates to the commercial banking system. Banks were not required to hold reserves against the demand deposits created in the process, and they could also easily discount these Treasury securities at the Fed. An unsterilized inflow of gold from European combatants in payment for food and munitions added to the inflationary fuel. On the other hand, it does not appear as if longer run inflationary (or price level) expectations became unanchored. The rapid and relatively cost-free disinflation and indeed deflation (compare with 1982) subsequent to 1919 suggests that the inflation was not expected to be permanent (Kemmerer, 1918). People apparently anticipated reversion to lower price levels (as in fact happened). Calculating the continuously compounded growth rate between 1919 and 1929 using an average of the Romer and Balke/Gordon estimates of annual real GNP yields an estimate of the trend growth of potential of 3.39 percent per year between 1919 and 1929.
This is above the long run average growth rate of the US economy since 1869 (about 3 percent per year), lending credence to the view that 1919 may have been below potential (since high real growth rates are often observed during recovery as one approaches potential from below). 1923 is another candidate for a benchmark. Industrial production peaked in May after a 22 month expansion and remained relatively high throughout the remainder of the year. Inflation was virtually nonexistent, with the consumer price index in 1924 (and Balke-Gordon's implicit price deflator) identical to what it was in 1923 (HSUS, series Cc1, Cc2, and Ca215). Calculating from 1923 to 1929, we have continuously compounded growth of 3.25 percent per year. If 1919 was in fact above potential, the calculated growth rate from that year to 1929 should have been lower than from 1923 to 1929. In fact it is somewhat higher, providing more grounds for believing that 1919 was in fact somewhat below potential. The troubling aspect of 1923, however, remains the unemployment rate, 4.32 percent, which suggests that there was still some slack in the economy. Finally, consider 1926. The unemployment rate was 2.9 percent, virtually identical to 1929, and like that year showing little evidence of goods and services price inflation. The combination of low unemployment and low inflation argues for this year as the best second benchmark, and I use it to calculate the trend growth rate of potential, and assume that potential grew steadily at that rate between 1926 and 1929, as well as between 1919 and 1926. Calculating between 1926 and 1929 yields a continuously compounded growth rate of 2.77 percent per year. This estimate of the growth rate of potential will also approximate the trend growth of actual, unless the size of the output gaps increases or decreases systematically over the course of the time period.
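As an arithmetic check, the benchmark growth rates just cited can be reproduced directly from the table 1 levels (a minimal sketch; the figures are simple averages of the Romer and Balke-Gordon estimates):

```python
import math

def cc_growth(y0, y1, years):
    """Continuously compounded annual growth rate between two benchmark levels."""
    return math.log(y1 / y0) / years

# Averages of the Romer and Balke-Gordon real GNP estimates (table 1)
gnp_1919 = (507.1 + 503.9) / 2   # 505.50
gnp_1923 = (585.0 + 583.1) / 2   # 584.05
gnp_1926 = (651.0 + 655.0) / 2   # 653.00
gnp_1929 = 709.6

print(round(100 * cc_growth(gnp_1919, gnp_1929, 10), 2))  # 3.39 percent, 1919-1929
print(round(100 * cc_growth(gnp_1923, gnp_1929, 6), 2))   # 3.25 percent, 1923-1929
print(round(100 * cc_growth(gnp_1926, gnp_1929, 3), 2))   # 2.77 percent, 1926-1929
```

The three rates match those reported in the text.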
We can now measure the log differences between the average of the Balke-Gordon and Romer estimates of actual and these estimates of potential, yielding a series on deviations of actual output from potential in percentage terms. These calculations suggest 1919 output about 6.2 percent below potential, falling to 16.4 percent below potential in 1921, and then within 3.3 percent of potential each year between 1923 and 1929 (the above numbers all refer to log differences).

Figure 2: Log of Actual and Potential GNP Based on Annual Data, 1919-1929. Series plotted: Actual GNP (see text) and Log of Potential Based on 1926-29 Growth.

A second approach to constructing a series on potential output and the varying size of the output gap is to use the Fed's seasonally adjusted industrial production index. The advantage over Shaw or the GNP estimates just discussed is that output data is available, like interest rate and price data, at a monthly frequency. Its disadvantage is that it captures at best only a fraction of overall economic activity, albeit a fraction that was growing in the 1920s. (Romer (1989, p. 34) also notes that unlike the Shaw commodity output series, the Fed's series does not include non-manufactured agricultural commodities.) The industrial production series peaks in January of 1920, and also in August of 1929. If one assumes that the economy was at potential during each of these months, one calculates a continuously compounded growth rate of .302 percent per month, or 3.63 percent per year. Industrial production also peaks in May of 1923. If one measures from there to August of 1929, one has growth of .313 percent per month or 3.76 percent annually, very similar.
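The annual gap calculations described above can likewise be sketched (again using the averaged table 1 levels; potential is anchored at the 1929 benchmark and grown backward at the 1926-29 rate):

```python
import math

# Averaged Romer / Balke-Gordon real GNP, billion 1982 dollars (table 1)
actual = {1919: 505.50, 1921: 482.60, 1929: 709.6}  # subset of years shown

g = math.log(709.6 / 653.0) / 3   # 1926-29 benchmark growth, ~2.77 percent per year

def log_gap(year):
    """Log deviation of actual from potential, with potential anchored at 1929."""
    potential = 709.6 * math.exp(g * (year - 1929))
    return math.log(actual[year] / potential)

print(round(log_gap(1919), 3))  # -0.062: about 6.2 percent below potential
print(round(log_gap(1921), 3))  # -0.164: about 16.4 percent below potential
```

Both figures match the gaps reported in the text.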
My choice is to take January of 1920 and August of 1929 as benchmark months at which the economy is at potential, and assume that potential industrial production grew at a constant continuously compounded rate between these months as well as between January of 1919 and January of 1920 and between August and December of 1929. In none of these three industrial production peak months is there evidence of an acceleration of, or sudden spike in, the inflation rate (see discussion on inflation data below). Both of these measured growth rates (from May of 1923 or from January of 1920) are somewhat higher than the long run growth rate of the US economy, and higher than those suggested using annual GNP data. The explanation is to be found in the growing relative size of the manufacturing sector over the course of the decade. This is not necessarily a fatal objection, however, given the aims here. We wish to use the industrial production data to develop a monthly measure of the output gap. The more important concern, aside from the fact that industrial production covers only a fraction of the economy, is that it is known to vary more widely than GNP itself, which brings us back to Romer and Kuznets. In estimating the elasticity of GNP with respect to fluctuations in industrial production, Romer ran a trend through "normal" years and calculated deviations of actual output, roughly symmetrical, both above and below. What I am after is not exactly the same. Again, endorsing the Friedman view of the cycle, I am interested in deviations from potential, mostly in the downward direction, where potential output is viewed as a ceiling to production at any moment in time established by aggregate supply conditions, one exceedable in the short run only through a sufficiently strong aggregate demand stimulus that results in an acceleration of inflation.
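The benchmark-month interpolation just described can be sketched as follows. The index levels below are hypothetical placeholders (the paper uses the Fed's seasonally adjusted index; only the ratio of the two benchmark levels matters for the implied monthly rate):

```python
import math

# Hypothetical index levels for the two benchmark months; the .302 percent per
# month figure in the text comes from the actual Fed index, not these numbers.
ip_jan_1920 = 8.2
ip_aug_1929 = 11.6
months_between = 115  # January 1920 through August 1929

g_month = math.log(ip_aug_1929 / ip_jan_1920) / months_between

def potential_ip(month_offset):
    """Potential industrial production month_offset months after January 1920;
    the same rate is extrapolated back to January 1919 and forward to December 1929."""
    return ip_jan_1920 * math.exp(g_month * month_offset)
```

Negative offsets handle the pre-January-1920 extrapolation; offsets above 115 handle the months after August 1929.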
This approach to estimating trend growth is consistent with an older NBER tradition associated with Burns and Mitchell or Kuznets, which was the tradition within which Okun and Friedman were working.15 Under the influence of real business cycle advocates in the 1980s it became fashionable to view short run deviations of actual output from trend as symmetrically distributed above and below a non-linear trend constructed using an HP filter, with business fluctuations explained almost entirely as the consequence of supply shocks. When RBC proponents described booms as above trend and recessions as below trend, however, they didn't actually mean that the economy was above or below potential, since conceptually there could be no difference between actual and potential.16 What was meant was that output was above or below what would be predicted by a trend estimated in the manner described. This notion of an output "gap" is quite different from what Arthur Okun had in mind in 1962. Real business cycle theorists, however, found it difficult to explain why output not only grew more slowly in recessions, but usually declined. It was one thing to explain slow and fast periods of growth as due to the differential rate of arrival of positive technology shocks. But where were there plausible negative shocks to explain most of the declines in output? The theory predicted price moderation rather than inflation during booms, and the reverse during recessions, the opposite of experience in almost all recessions (the 1974-75 recession one of the few exceptions). As an intellectual experiment, real business cycle theory foundered on its inability to explain most business cycle fluctuations, most spectacularly the Great Depression. Few practicing applied economists today, individuals involved in policy work or advising banks, hedge funds, or nonfinancial corporations, adhere to the belief that supply shocks (not fluctuations in aggregate demand) are the principal cause of aggregate fluctuations in a large economy. Nevertheless, although the explanatory objectives of the RBC project remain by and large unrealized, its methodology left a legacy among non-RBC macroeconomists, even those who reject many of its tenets. This is the tendency to view deviations of actual from potential as symmetrically distributed above and below the trend of potential.17 For the 1920s (and indeed for most of the twentieth century), I assume that most deviations from potential were in the downward direction. Like Taylor and Romer, I reject the RBC presumption that actual almost always equals potential, and embrace the more traditional (and in my view, more realistic) Okun/Friedman assumptions. Below the trend of potential Taylor, Romer and I assume an unused opportunity for non-inflationary growth due to underutilized or unused resources.

15 As Romer (1989, p. 18) puts it, "Often, researchers choose to connect peak years and thus form an estimate of potential rather than trend GNP or commodity output...." That is my practice here, which closely follows CBO practice. See CBO (2004) for discussion of their methods, and consideration of advantages and disadvantages relative to statistical filtering approaches.

16 Contributors to the policy ineffectiveness literature, such as Lucas and Barro, modified the postulate to allow that unanticipated changes in the money supply (nominal income) might temporarily drive actual away from potential. Some RBC proponents could be read as suggesting that a gap between actual and "potential" might be attributable to misguided government legislation or regulation.
I am firmer and more explicit, however, in endorsing the views that deviations from potential are asymmetrically distributed below potential, and that for the economy to be identified as above potential, we must see evidence of an acceleration in the inflation rate.18 Finally, for simplicity's sake, I assume that the evolution of potential is relatively smooth over short periods, such as a decade: linear in logs. These assumptions are likely to be more realistic than the RBC view that the cycle reflects short run variability in the rate of growth of potential. For a large economy we can expect the arrival of positive supply shocks to sum to a reasonably steady rate. In reverting to the Burns-Mitchell-Kuznets tradition of measuring potential from non-inflationary peak to non-inflationary peak, I discard the assumption of symmetric distribution of deviations above and below trend, and insist that we strictly adhere to this rule: a claim that an economy is above potential must be accompanied by evidence of accelerating goods and services inflation. Belaboring this point is important, because there is a recurring tendency to view any period of above average growth and/or low unemployment as "above potential." We have already seen this in Orphanides' remark about 1929. A more recent instance comes from James Bullard, president of the St. Louis Fed, who has made this argument about the period prior to the financial crisis in the US, in spite of the absence of strong inflationary pressures. John Cochrane has made similar arguments.

17 This is, however, not the rationale given by Romer, who argues that because of the overestimate of GNP volatility prior to 1909, the peaks and troughs are less accurately measured than "normal" years (1989, p. 18).
Without the evidence of an acceleration of inflation, however, the argument is problematic.19

18 Christy Romer has pointed out to me that, due to lags, one might be concerned about being above potential even without evidence of accelerating inflation. This seems however a slippery slope. One can always fear that inflation is just around the corner. Sometimes (although of course not always), such fears are groundless, as were the repeated expressions of concern among inflation hawks from 2010 through 2016.

19 Again, see the 2015 BIS Annual Report: "Such indicators (deviations of debt service ratios and leverage from their long run trends) would have helped establish that output was running above its sustainable, or potential, level ahead of the most recent crisis in the United States – something that typical estimates used in policymaking, partly distorted by subdued inflation, have done only ex post, as they rewrite history based on new information" (p. 14).

Figure 3: Log of Potential and Actual Industrial Production, 1919-1929 (monthly series, January 1919 through December 1929).

Figure 3 shows the trajectory of potential industrial production relative to actual industrial production as described in the text. The question now is how much we should assume GNP declined for a given percentage decline in industrial production. Given Romer's work, I will assume an elasticity of .55, about midway between her point estimates for 1909 and 1985. This is a conservative assumption that will bolster the arguments made later in the paper.
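The conversion from an industrial production deviation to an implied GNP gap is then a proportional scaling (a minimal sketch; the .55 elasticity is the assumption stated in the text, and the function name is mine):

```python
def gnp_gap_from_ip_gap(ip_log_gap, elasticity=0.55):
    """Implied GNP log gap from an industrial production log gap, scaled
    by the assumed elasticity of GNP with respect to commodity output."""
    return elasticity * ip_log_gap

# Industrial production 10 percent (in logs) below its potential trend
# implies a GNP shortfall of about 5.5 percent.
print(round(gnp_gap_from_ip_gap(-0.10), 3))  # -0.055
```

A smaller assumed elasticity would shrink the monthly gap series proportionally, which is why .55 is described as conservative.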
We can now plot a monthly series of estimated deviations of GNP from potential, and compare it with the annual estimates of the output gap reflected in figure 1 (figure 4).

19 (continued) The idea that “subdued inflation” distorts our estimates of potential is an even more direct challenge to the idea that evidence of accelerating inflation is a necessary condition for identifying a period as above potential.

[Figure 4: Output Gap Comparisons: Estimates Based on Annual GNP and Monthly Industrial Production Data (see text). Series plotted: gap based on monthly industrial production data; gap based on annual GNP data.]

What do we make of this figure? Does it make sense that, in spite of a conservative estimate of the elasticity, the absolute values of the monthly output gaps are generally larger and more variable than the estimates based on annual data? I think so. First of all, output varies over the course of a year, and even in a year that we might designate as at potential, there will be some months during which the economy is rising toward a monthly peak, or declining from one. Thus a measure of the annual flow of economic activity is likely to be lower than peak monthly output measured at an annualized rate. One might object that the check on whether one is above potential – whether the peak coincides with an acceleration in the inflation rate – might work more poorly for monthly data.
That is, in principle, a component of GNP (industrial production) could be a poor proxy for the potential of the entire economy, because at its peaks it could be drawing inputs away from other sectors without necessarily causing the entire economy to press against a ceiling of potential output and thereby triggering an acceleration of inflation. And at its troughs it might be releasing inputs that were easily reemployed elsewhere. But the possibilities for such reallocation may be more limited than one might first imagine. Industrial products must be transported and distributed, so it is unlikely, during an industrial production boom, that too many inputs can be drawn from these or indeed other service sectors. We make allowance for some of this in choosing an elasticity of GNP with respect to commodity output of less than 1. But GNP still varies strongly with commodity output, and that elasticity, if it is not close to 1, is also clearly not close to 0. Comovement of many sectors of the economy is widely recognized by macroeconomists as an essential feature of business fluctuations (Long and Plosser, 1987).

Inflation rates

We turn now to measures of inflation. The only price series available at a monthly frequency is the CPI for all urban consumers. There are a variety of ways in which monthly inflation rates might be calculated. One can simply log the price index, calculate the change in logs from one month to the next, and annualize by multiplying by twelve. The resulting series is quite volatile, so I also consider a three month moving average. Finally, for each month I measure inflation on a year-over-year basis: by what percent prices have risen relative to the same month a year earlier. This last measure – the smoothest of the three series – approximates what Taylor references in his 1993 article (p. 202): the inflation rate over the previous four quarters.
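The three calculations can be sketched as follows. The function name is mine; following the text, each measure is computed from log differences of the monthly index, with the year-over-year measure a log approximation to the percent change relative to the same month a year earlier.

```python
import numpy as np

def inflation_measures(cpi):
    """Compute the three monthly inflation measures described in the text
    from a monthly CPI index.

    Returns (annualized month-to-month rate, three-month moving average of
    that rate, year-over-year rate), each as a decimal; entries without
    enough history are NaN.
    """
    log_p = np.log(np.asarray(cpi, dtype=float))
    n = len(log_p)
    mom = np.full(n, np.nan)
    mom[1:] = 12.0 * np.diff(log_p)          # log change, annualized by 12
    ma3 = np.full(n, np.nan)
    for t in range(3, n):
        ma3[t] = mom[t - 2 : t + 1].mean()   # three-month moving average
    yoy = np.full(n, np.nan)
    if n > 12:
        yoy[12:] = log_p[12:] - log_p[:-12]  # vs. the same month a year earlier
    return mom, ma3, yoy
```

On an index growing at a constant rate, all three measures agree; on the actual 1919-1929 CPI, the month-to-month series is the most volatile and the year-over-year series the smoothest, as the text notes.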
[Figure 5: Three Measures of Monthly Inflation Using CPI-Urban, United States, 1919-1929. Series plotted: month-to-month annualized; three-month moving average of month-to-month; year-over-year monthly inflation rate.]

[Figure 6: Taylor Rule, United States, 1919-1929]

It is this last measure that I use in the Taylor rule calculations that follow. With monthly estimates of an output gap and a series on monthly inflation rates, we are now in a position to calculate what a Taylor rule would have prescribed in the 1920s and compare it with the actual course of the discount rate. Figure 6 shows the results, based on the rule described at the start of the paper.20

20 It is important to keep in mind that what we are doing is a retrospective exercise, which does not imply that policy makers had real time access to industrial production data and would have altered monetary policy at that frequency.

We can separate our discussion of these eleven years into two periods: the first, stretching from 1919 through 1922, was characterized by high inflation, a severe but short recession, followed by rapid disinflation and then price stabilization. Mechanical application of a Taylor rule would have mandated interest rates of close to or above 20 percent from
January of 1919 through August of 1920,21 followed by falling and indeed negative interest rates from December of 1920 through February of 1923.

21 These rates would have been even more extreme than those resulting from Volcker’s monetary tightening in 1982.

The second period, running from March of 1923 onward, was characterized by generally full employment and low and stable inflation. The prescribed Taylor rule rate enters positive territory in March of 1923 and continues positive through March of 1924. It then drops below zero through February of 1925, returns to positive territory, and falls below zero again in July of 1926, where it remains through May of 1929. Between 1923 and 1929, a total of 84 months, a positive rate is recommended in only 34 of those months. Here is what the Taylor rule recommendations look like superimposed on the actual course of the discount rate (figure 7):

[Figure 7: Taylor Rule vs. Discount Rate, United States, 1919-1929]

For most of 1919 and 1920, a Taylor rule would have mandated rates above 20 percent, and over 30 percent in June of 1920, to break the back of the legacy of wartime inflation. This is in contrast to the actual path of rates, which peaked at 7 percent between June of 1920 and April of 1921. Would providing such shock therapy have been a good idea?22 As it turned out, such extreme measures were not necessary to produce a substantial disinflation between June of 1920 and June of 1921.
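The prescriptions plotted in figures 6 and 7 follow mechanically from the rule. A minimal sketch, assuming the standard 1993 parameterization (a 2 percent equilibrium real rate, a 2 percent inflation target, and weights of 0.5 on both the inflation and output gaps); the function name and the illustrative inputs are mine:

```python
def taylor_rule(pi, gap, r_star=0.02, pi_star=0.02):
    """Prescribed nominal policy rate (as a decimal) given year-over-year
    inflation pi and the output gap, under the 1993 response coefficients
    of 0.5 on the inflation gap and the output gap."""
    return r_star + pi + 0.5 * (pi - pi_star) + 0.5 * gap

# Inflation at target with a closed gap yields the 4 percent neutral setting.
neutral = taylor_rule(0.02, 0.0)
# Zero inflation and a large negative output gap push the prescription
# below zero, a negative nominal rate.
slump = taylor_rule(0.0, -0.08)
```

With inflation at zero and an output gap of minus 8 percent, conditions not unlike stretches of the 1920s on the estimates above, the prescribed nominal rate is negative, which is how the figures come to show sub-zero recommendations.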
And if we argue that excessively “low” interest rates between January of 1919 and June of 1920 somehow laid the foundation for subsequent asset bubbles in real estate and stocks, we must reckon with the fact that for the much longer period between December of 1920 and March of 1923, a Taylor rule would have mandated negative nominal rates. Because of the zero lower bound, realized real interest rates in 1921 and 1922 were in fact very high.

22 Taylor (1999) notes that “When inflation gets very high or negative, interest rate rules lose their usefulness because expectations of inflation shift around a lot and are hard to measure. In these circumstances interest rate rules lose their advantages over money supply rules and can break down completely…”

[Figure 8: Taylor Rule vs. Discount Rate, 1923-1929]

The second subperiod to consider is the seven-year stretch running from 1923 through 1929 (figure 8). These years experienced remarkable price stability and generally high employment. What we find is that, with the exception of a period running from July of 1925 through May of 1926, a Taylor rule would have mandated interest rates lower, often much lower, than prevailed. This includes a remarkable number of months, again, in which the Taylor rule prescribes negative nominal rates. The period from July of 1925 through May of 1926, when rates were “too low,” does correspond with the height of the residential real estate boom.
But just as the inflation of 1919 and the first part of 1920 dissipated without draconian interest rate hikes, so too did the residential real estate bubble (see Galbraith, 1955; Field, 1992). Since the period of “too low” rates corresponded with the peak of the real estate bubble, not its genesis, and since “too low” rates did not stand in the way of the collapse of the bubble of its own accord, it seems difficult to credit this period of “lax” policy with having fueled the housing boom. Even more tenuous would be linking this to the equities bubble of the second half of the decade, since for every month from July of 1926 through May of 1929, the Taylor rule prescription lies below, often substantially below, the actual discount rate.

It may well be, of course, that the equilibrium real interest rate in the 1920s was higher than Taylor, writing in the early 1990s, assumed it had been in the 1980s. The decade, after all, was technologically dynamic, although not as much so as the 1930s (see Field, 2003, 2011). Nevertheless, the diffusion of the automobile, the electrification of the internal distribution of power within much of the manufacturing plant, and the introduction of a wide range of new electrical appliances did make this a progressive period, providing some justification for the initial upward movement in stock indexes. Irving Fisher went too far in validating high equity valuations, but some of what he said was justified. However, setting the Wicksellian natural rate at 2.5 percent, closer to the 2.7 percent we have assumed for the trend growth rate of potential during this period, would raise the plot of the Taylor rule prescription by only .5 percentage point, and would have only a marginal effect on the above discussion.

In constructing what a Taylor rule would have prescribed for the 1920s, is it fair to retain the 2 percent inflation target that Taylor assumed in 1993 and that has since become the norm?
After all, looking at the entire nineteenth century, there is no long run trend in the price level, so perhaps setting the inflation target at 0 would be a fairer comparison. This would push up the Taylor rule recommendation by a full percentage point across the board. But again, the impact on the above discussion would be relatively minor. In only three of the 75 (out of 132) months in which a negative policy rate is recommended under the two percent inflation target does the shift to a zero percent inflation target move the recommendation (barely) into positive territory (March, May, and November 1929). There would be an additional three months in 1923 – August, September, and October – when the recommended Taylor rule rate would have been half a percentage point above the actual discount rate.

And there is a question whether, because of the zero lower bound, a Taylor rule with a zero percent inflation target is really practical. The problem is that the zero lower bound means that a central bank limited to controlling its policy rate has no easy means of combatting deflation or its prospect. If deflation gets established, it will mean high real interest rates, and the central bank has no means of lowering them unless it adopts unconventional policies outside of the purview of the rule.23

23 In addition, given the non-indexed character of most bonds and financial contracts, the transition to deflation will mean large and disruptive wealth transfers to creditors, compensated only partly by the higher defaults likely to ensue as the output gap grows. This is the Fisherian debt deflation process. When Taylor was writing in the 1990s, the zero lower bound had not been approached since the 1930s.

If the deflation is due to a powerful sequence of positive supply shocks opening up new arenas for profitable investment, thus raising the Wicksellian natural rate for a prolonged period, this might be tolerable.
Otherwise, serious depression is likely, and the central bank will need to resort to unconventional measures: large scale purchases of longer term assets (quantitative easing), forward guidance, or other methods of trying to raise inflationary expectations. Without the buffer of at least a 2 percent inflation target, the potentially contractionary bias of the rule, aggravated by the new concern with asset prices, becomes even more pronounced, and the rule, to use Taylor’s language, less “practical.” Thus I think it is reasonable to construct a hypothetical Taylor rule mandate for the 1920s using the two percent inflation target.

Because of different choices about methods and data, and different assumptions about parameters, Eugene White (2014, p. 131) reached somewhat different conclusions about the relationship between a Taylor rule recommendation and the actual course of short term interest rates. One reason is his calculation of the output gap as the difference between actual output and an HP filter estimate of trend interpreted as potential, which lowers the average output gap compared to the methods used in this paper.24 He also used a four percent natural rate assumption in his Taylor rule calculations, which shifted his Taylor rule recommendations up by two percentage points across the board, as well as an inflation target of zero, which added a full additional percentage point, assuming one uses the 1993 Taylor response parameters. Figure 9 shows, using my output gap measure, what assuming a natural rate twice as high and an inflation target of zero implies for the critical period 1923-29.
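The sizes of these shifts follow directly from the rule's arithmetic. A sketch, again assuming the 1993 response coefficients of 0.5 (the illustrative inflation and gap values are mine):

```python
def taylor_rule(pi, gap, r_star, pi_star):
    # Prescribed nominal rate under the 1993 response coefficients of 0.5.
    return r_star + pi + 0.5 * (pi - pi_star) + 0.5 * gap

pi, gap = 0.01, 0.0   # illustrative month: 1 percent inflation, closed gap
baseline = taylor_rule(pi, gap, r_star=0.02, pi_star=0.02)  # this paper's assumptions
alt = taylor_rule(pi, gap, r_star=0.04, pi_star=0.00)       # natural rate doubled, zero target
# 0.02 from the higher natural rate plus 0.5 * 0.02 from the lower target.
shift = alt - baseline
```

Because inflation and the gap enter both calculations identically, the shift is the same in every month: two percentage points from the natural rate assumption plus one from the inflation target, for three percentage points in all.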
The Taylor rule recommendations are now that rates should have been above the actual policy rate from April 1923 through March of 1924 (except December 1923), from May 1925 through June of 1926, and in July and August of 1929.

24 Although familiar with White’s chapter and the working paper that preceded it, which I had cited earlier in work on the interwar housing cycle (Field, 2014), I simply did not recall that he had essayed a similar exploration of the Taylor rule in the 1920s until after the first version of this paper was completed. At that point the challenge became to understand where and why his conclusions differed.

If one accepts these assumptions, particularly the view that the natural rate was four percent throughout the 1920s,25 there might be warrant for believing that too-low rates were partly responsible for the mid-decade residential real estate boom. White considers this possibility, but is skeptical. As he notes, there was little connection between short term rates and mortgage rates. And there does not seem to be evidence of a significant deterioration in lending standards during this period. The discount rate was much more closely connected to the costs of borrowing to invest in the stock market (see figure 1). For the equities boom in the second half of the decade, however, there is little plausible argument that “too low” rates caused the asset bubble. Even in figure 9, the Taylor rule recommendations are below the policy rate from June of 1926 through the end of the decade, with the exception of July 1929, just before the Fed initiated a further sharp increase in nominal/real rates that began the economy’s devolution into the Great Depression. The last half of 1929 provides powerful evidence of what the damaging effect of higher policy rates on the real economy would likely have been had they been adopted earlier in the decade.
25 Four percent is quite high; White acknowledges that the private sector short rates upon which he bases this include a risk premium. Two percent is incorporated in Taylor’s baseline model; perhaps three percent could be justified for the 1920s. If one examines three month T-bill yields (zero default risk), they decline to almost three percent in August of 1922, before rising to a peak of 4.22 percent in October of 1923, coinciding with the first stage of a recession that ran from May 1923 until July of 1924, by which time they had dropped below two percent. Rates then began rising, reaching a peak of 3.65 percent in November 1925, a level not exceeded again until May of 1928, well into the Fed’s explicit tightening. Rates peaked at 5.09 percent in May of 1929, but these end-of-decade levels must be interpreted as above the natural rate, since they triggered recession. FRED series M1329AUSM193NNBR, accessed February 23, 2016.

[Figure 9: Taylor Rule, United States, 1923-1929, with target inflation = 0 and natural rate = .04]

Conclusion

John Taylor was of course not present in the 1920s, and thus unable to offer his counsel to monetary policy makers. But there were a number of strong voices in the central bank prospectively channeling his concerns that “too low” interest rates might be fueling what was correctly perceived as a speculative bubble in the stock market. Starting in February 1928, their views prevailed, as the discount rate was raised from 3.5 percent to a peak of 6 percent in September and October of 1929.
Those advocating this policy were well aware that an attempt to deflate an asset bubble might end up damaging the real economy, and they tried, notably unsuccessfully, to address this through a policy of discouraging discounting for the purpose of increasing margin lending on stocks. The tight money policy triggered the beginning of a slide in the value of the stock market (90 percent nominal, 60 percent real) that unfolded over the following three years, and a decline in the real economy of about 30 percent in real terms over the subsequent four years.

What can we learn from this history? First, as in the case of the British and New Zealand housing bubbles of the 2000s, asset bubbles are quite capable of developing in environments in which policy rates are, by and large, higher than those that would be mandated by a Taylor rule. Second, the experience in the 1920s of trying to deflate a bubble perceived to be fueled by rates that were “too low” by raising them was not a happy one, and should give pause to those who would recommend pursuing similar policies.

There is also a broader definitional and methodological issue raised by this inquiry. This study has been premised on what has until recently been uncontroversial: potential or natural output is defined as the highest level of real output an economy can sustain without experiencing an acceleration of inflation. An economy has exceeded potential when inflation accelerates, and the absence of such acceleration indicates the presence of remaining slack. In other words, we may not know exactly where potential output is, but we will know if and when we have exceeded it.
What is quite notable, and in many ways disturbing, is that Taylor’s new emphasis on the possibility that low rates will fuel asset bubbles whose collapse will inflict great damage on the real economy now often goes hand in hand with the claim that an economy can exceed potential even in the absence of accelerating inflation (see the 2015 BIS Annual Report). Suppose an economy experiences a boom – let us say a private housing construction boom – as the result of a surge in the extension of credit that proves unsustainable, in the sense that borrowers are ultimately unable to pay back their debts in the absence of continued acceleration in property values. During the boom the economy experiences high employment and little or no acceleration in inflation, or even very mild deflation (as at the end of the 1920s). A financial crisis ensues, followed by an economic recession and slow recovery. Do we conclude, as the dust settles, that the economy had been above potential prior to the crash – in other words, that the pre-crash output level was unsustainable – simply because it was not sustained? An alternative interpretation is that what was unsustainable was the financial structure (in particular the leverage and debt service to income ratios) that underpinned the high level of aggregate demand, and that if this is what the private sector must do to bring us close to potential, then the answer is not to define potential down after the fact26 but to replace some of the private sector funding mechanism with government spending, ideally on well-chosen infrastructure. Progress in understanding the macroeconomics of the 1920s and the 2000s will require that we continue to have a shared understanding of what is meant by potential and of how we identify conditions under which it has been exceeded.
Without trying to minimize the concern that we may now be establishing the groundwork for an even more catastrophic global financial crisis, the logic and merits of the argument that we were above potential pre-crash (as advanced by James Bullard, for example) have to be seriously addressed. Absent agreement that the necessary evidence that an economy is above potential is an acceleration of inflation, it is difficult to calculate meaningful output gaps, either contemporaneously or after the fact. The questioning of this principle lies at the heart of the clamor, particularly well articulated in the 2015 BIS report, that interest rates must be raised even in the absence of accelerating inflation and in the presence of what appear, by standard measures, to be continuing output gaps. As the report asserts, “ahead of the financial crisis, the methodologies widely used in policymaking generally failed to detect that output was above its sustainable level” (2015, p. 77).

26 This is not to question the claim that recession and slow recovery following a financial crisis may bend downward the trajectory of potential output growth. That argument is distinct from the claim that the economy was above potential pre-crisis.

Does Taylor endorse the opprobrium now heaped on his rule’s original policy objectives – preventing an acceleration of inflation and closing output gaps – now dismissed as “illusory short term macroeconomic fine-tuning” (BIS, 2015, p. 3)? There is no question that when rapid expansion of credit leads to rising leverage and debt service to income ratios, we need to take notice and possibly action. But may we not be overburdening the policy rate if we expect it to keep a lid on these variables along with its other traditional responsibilities? A better way to control bubbles may be through the direct control of leverage.27 During the 1930s, policy actually moved in different directions on this with respect to housing and equities.
For stocks, the Fed was given responsibility for regulating margin; since 1974 this has been limited for new purchases to 50 percent. In housing policy, on the other hand, the establishment of the Federal Housing Administration and Fannie Mae led to the standardization of the 30-year fully amortized mortgage, which may have represented an increase in leverage compared to what had prevailed in the 1920s.28 But from the standpoint of the 1930s, it was stocks, not real estate, that were perceived as having given rise to the speculative excesses that had helped precipitate depression, and it was with respect to stocks that the most radical action was taken.

27 Interested readers should consult section IV of the BIS report, which develops a number of counterarguments. In particular, the report argues that in recent years the real economy has become less sensitive to the policy rate while property and other asset values have become more sensitive. This section of the report also challenges the empirical importance of Fisher’s debt deflation mechanism outside of the interwar years (p. 79), and argues that we should be less afraid of the prospect of deflation, which in some instances reflects positive supply shocks rather than deficiencies in the growth of aggregate demand.

28 But see Postel-Vinay (2015), which describes how second mortgages in the 1920s effectively increased leverage in many instances.

References

Balke, Nathan S., and Robert J. Gordon. 1989. “The Estimation of Prewar Gross National Product: Methodology and New Evidence.” Journal of Political Economy 97 (February): 38-92.

Bank for International Settlements. 2015. 85th Annual Report. Basel. Available at http://www.bis.org/publ/arpdf/ar2015e.pdf.

Basu, Susanto, and John Fernald. 2009. “What Do We Know (and Not Know) about Potential Output?” Federal Reserve Bank of St. Louis Review (July/August): 187-213.

Bernanke, Ben. 2010.
“Monetary Policy and the Housing Bubble.” Available at http://www.federalreserve.gov/newsevents/speech/bernanke20100103a.htm.

Bernanke, Ben. 2015. The Courage to Act. New York: W.W. Norton.

Congressional Budget Office. 2004. “A Summary of Alternative Methods for Estimating Potential GDP.” Congressional Budget Office Background Paper. Available at https://www.cbo.gov/sites/default/files/108th-congress-2003-2004/reports/03-16-gdp.pdf.

DeLong, J. Bradford, and Lawrence H. Summers. 1988. “How Does Macroeconomic Policy Affect Output?” Brookings Papers on Economic Activity 2: 433-494.

Field, Alexander J. 1984. “Asset Exchanges and the Transactions Demand for Money, 1919-1929.” American Economic Review 74 (March): 43-59.

Field, Alexander J. 1992. “Uncontrolled Land Development and the Duration of the Depression in the United States.” Journal of Economic History 52 (December): 785-805.

Field, Alexander J. 2003. “The Most Technologically Progressive Decade of the Century.” American Economic Review 93 (September): 1399-1414.

Field, Alexander J. 2011. A Great Leap Forward: 1930s Depression and U.S. Economic Growth. New Haven: Yale University Press.

Field, Alexander J. 2014. “The Interwar Housing Cycle in the Light of 2001-2011: A Comparative Historical Approach.” In Eugene N. White, Kenneth Snowden, and Price Fishback, eds., Housing and Mortgage Markets in Historical Perspective. Chicago: University of Chicago Press for the National Bureau of Economic Research, pp. 39-90.

Friedman, Milton, and Anna J. Schwartz. 1963. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press.

Friedman, Milton. 1969. The Optimum Quantity of Money and Other Essays. Chicago: Aldine.

Friedman, Milton. 1993. “The ‘Plucking Model’ of Business Fluctuations Revisited.” Economic Inquiry 31 (April): 171-77.

Gibbs, Darren. 1995. “Potential Output: Concepts and Measurement.” Labour Market Bulletin 1: 75-115.

Gordon, Robert J.
1982. “Price Inertia and Policy Ineffectiveness in the United States, 1890-1980.” Journal of Political Economy 90 (December): 1087-1117.

Hauptmeier, Sebastian, Friedrich Heinemann, Marcus Kappler, Margit Kraus, Andreas Schrimpf, Hans-Michael Trautwein, and Qingwei Wang. 2009. Projecting Potential Output: Methods and Problems. Heidelberg: Physica-Verlag.

Hollander, Jacob H. 1920. “Prices.” Annals of the American Academy of Political and Social Science 89 (May): 253-257.

Hubbard, Joseph R. 1926. “A Weekly Index of Money Rates, 1922-25.” Review of Economics and Statistics 8 (January): 23-28.

Jahan, Sarwat, and Ahmed Saber Mahmud. 2013. “What Is the Output Gap?” Finance and Development (September): 38-39.

Kemmerer, E. W. 1918. “Inflation.” American Economic Review 8 (June): 247-269.

Kuznets, Simon. 1961. Capital in the American Economy: Its Formation and Financing. Princeton: Princeton University Press.

Long, John B., and Charles I. Plosser. 1987. “Sectoral vs. Aggregate Shocks in the Business Cycle.” American Economic Review 77: 333-336.

Mishkin, Frederic S. 2007. “Estimating Potential Output.” Speech given at the Federal Reserve Bank of Dallas conference, May 24. Available at http://www.federalreserve.gov/newsevents/speech/mishkin20070524a.htm#f10.

Okun, Arthur M. 1962. “Potential GNP: Its Measurement and Significance.” Proceedings of the Business and Economic Statistics Section, American Statistical Association, pp. 98-104.

Orphanides, Athanasios. 2003. “Historical Monetary Policy Analysis and the Taylor Rule.” Paper prepared for the Carnegie-Rochester Conference on Public Policy on the 10th anniversary of the Taylor rule. Subsequently published in Journal of Monetary Economics 50 (July): 983-1022.

Orphanides, Athanasios, and S. van Norden. 2002. “The Unreliability of Output Gap Estimates in Real Time.” Review of Economics and Statistics 84 (November): 569-583.

Postel-Vinay, Natacha. 2014.
“Debt Dilution in 1920s America: Lighting the Fuse of a Mortgage Crisis.” Working paper.

Rhode, Paul W., and Richard Sutch. 2006. “Estimates of National Product before 1929.” In chapter Ca of Historical Statistics of the United States, Earliest Times to the Present: Millennial Edition, edited by Susan B. Carter, Scott Sigmund Gartner, Michael R. Haines, Alan L. Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press.

Romer, Christina D. 1988. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (July): 91-115.

Romer, Christina D. 1989. “The Prewar Business Cycle Reconsidered: New Estimates of Gross National Product, 1869-1908.” Journal of Political Economy 97 (February): 1-37.

Romer, Christina D. 1994. “Remeasuring Business Cycles.” Journal of Economic History 54 (September): 573-609.

Schwert, G. William, and Charles Plosser. 1979. “Potential GNP: Its Measurement and Significance – A Dissenting Opinion.” Carnegie-Rochester Conference Series on Public Policy (supplement to Journal of Monetary Economics) 10 (Spring): 179-186.

Shaw, William H. 1947. The Value of Commodity Output since 1869. New York: National Bureau of Economic Research.

Shiller, Robert. 2007. “Low Interest Rates and High Asset Prices: An Interpretation in Terms of Changing Popular Economic Models.” Brookings Papers on Economic Activity 2: 111-132.

Sutch, Richard. 2006. “Economic Fluctuations, Recessions, and Depressions.” In chapter Cb of Historical Statistics of the United States, Earliest Times to the Present: Millennial Edition, edited by Susan B. Carter, Scott Sigmund Gartner, Michael R. Haines, Alan L. Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press.

Taylor, John B. 1993. “Discretion vs. Policy Rules in Practice.” Carnegie-Rochester Conference Series on Public Policy 39: 195-214.

Taylor, John B. 1999a.
“A Historical Analysis of Monetary Policy Rules.” In Monetary Policy Rules, ed. John B. Taylor. Chicago: University of Chicago Press, pp. 319-48.

Taylor, John B. 1999b. “The Robustness and Efficiency of Monetary Policy Rules as Guidelines for Interest Rate Setting by the European Central Bank.” Journal of Monetary Economics 43 (3): 655-679.

Taylor, John B. 2012. First Principles. New York: W.W. Norton.

Tobin, James. 1997. “Can We Grow Faster?” Cowles Foundation Discussion Paper No. 1149.

Weir, David. 1986. “The Reliability of Historical Macroeconomic Data for Comparing Cyclical Stability.” Journal of Economic History 46: 353-365.

White, Eugene. 2014. “Lessons from the Great American Real Estate Boom and Bust of the 1920s.” In Eugene N. White, Kenneth Snowden, and Price Fishback, eds., Housing and Mortgage Markets in Historical Perspective. Chicago: University of Chicago Press for the NBER, pp. 115-158.