Productivity Growth in the 2000s
J. Bradford DeLong
U.C. Berkeley and NBER
January 2002
Outline
I. Introduction
--Purpose of this paper is to lay out what the very simple yet standard models
of economic growth and business cycles tell us about the likely course
of productivity growth over the next decade
--Present a menu of models: a productivity-of-computer-capital model, a
demand-for-high-tech-goods model, a fluctuating-NAIRU-and-high-pressure-economy model.
--Also necessary to grapple with measurement issues...
--And very necessary to look at what micro studies have to tell us about the
parameters of the macro models: because the implications of the macro models
depend critically on production function parameters, demand elasticities, and
other things rarely sighted in the real world, it is very important to draw the
map between real world phenomena and macro conclusions...
--Conclusions: to be added later
II. "Measured" vs. "True" Productivity Growth: Omissions and New Goods
--Boskin Commission and other studies
III. "Measured" vs. "True" Productivity Growth: The Dot-Com Bubble
--Overinvestment theories of booms
--If the economy is highly productive at making investment goods, overinvestment
will produce rapid productivity and output growth
--But to the extent that we value these investments put into place at their "real"
value, the boom is significantly smaller. (By how much?)
IV. Computer Capital and Economic Growth
--A model with one final good
--The final good produced by labor, regular capital, computer capital, and residual
--Take investment in these two types of capital as exogenous
--Moore's Law
--Conclusion: any belief that productivity growth will not be rapid in the 2000s
rests on a belief that the exponent on computer capital is about to fall off
a cliff.
V. Demand for High-Tech Goods and Economic Growth
--A model with two final goods
--Technological change produces shifts in the relative prices of these goods
--Demand elasticity is key to the productivity growth trend: is the elasticity
of demand for the goods whose prices are falling greater than one or
less than one?
--Assessing the demand elasticity for high-tech goods with rapidly falling prices.
--Conclusion: any belief that productivity growth will not be rapid in the 2000s
rests on a belief that the high-tech sectors have close to exhausted their
set of potential uses.
VI. Can We Find a Model in Which Productivity Growth in the 2000s Will Be Slow?
--Okun's law and the falling NAIRU: how much of rapid productivity growth
can be attributed to simply a high-pressure economy?
--The dot-com bubble--causing overinvestment, and artificially generating a
"false" high value for the elasticity of demand for high-tech goods.
--Learning about productivity growth: maybe the NAIRU will rise rapidly.
--Conclusion: if everything goes wrong at once--if workers' wage growth
aspirations rise quickly to return the NAIRU to its pre-1995 level,
if the high-pressure economy unwinds according to Okun's law, and
if "true" elasticity of demand for high-tech goods is relatively low-then we should expect productivity growth to be at a rate of X
VII. Conclusion
I. Introduction
--Purpose of this paper is to lay out what the very simple yet standard models
of economic growth and business cycles tell us about the likely course
of productivity growth over the next decade
--Present a menu of models: a productivity-of-computer-capital model, a
demand-for-high-tech-goods model, a fluctuating-NAIRU-and-high-pressure-economy model.
--Also necessary to grapple with measurement issues...
--And very necessary to look at what micro studies have to tell us about the
parameters of the macro models: because the implications of the macro models
depend critically on production function parameters, demand elasticities, and
other things rarely sighted in the real world, it is very important to draw the
map between real world phenomena and macro conclusions...
--Conclusions: to be added later
The essence of the "new economy" is quickly stated. Compare our use of information
technology today with our predecessors' use of information technology half a century
ago. The decade of the 1950s saw electronic computers largely replace mechanical and
electromechanical calculators and sorters as the world's automated calculating devices.
By the end of the 1950s there were roughly 2000 installed computers in the world:
machines like Remington Rand UNIVACs, IBM 702s, or DEC PDP-1s. The processing
power of these machines averaged perhaps 10,000 machine instructions per second.
Today, talking rough orders of magnitude only, there are perhaps 300 million active
computers in the world with processing power averaging several hundred million
instructions per second. Two thousand computers times ten thousand instructions per
second is twenty million. Three hundred million computers times, say, three hundred
million instructions/second is ninety quadrillion--a four-billion-fold increase in the
world's raw automated computational power in forty years, an average annual rate of
growth of 56 percent per year.
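The back-of-the-envelope arithmetic can be made explicit. A minimal sketch in Python, using only the rough orders of magnitude quoted above:

```python
import math

# Rough orders of magnitude from the text: ~2,000 computers averaging ~10,000
# instructions per second circa 1960; ~300 million computers averaging ~300
# million instructions per second today.
power_1959 = 2_000 * 10_000               # 2e7: "twenty million"
power_2000 = 300_000_000 * 300_000_000    # 9e16: "ninety quadrillion"

ratio = power_2000 / power_1959           # ~4.5e9: the four-billion-fold increase
annual_log_growth = math.log(ratio) / 40  # forty years, continuously compounded

print(f"{ratio:.1e}-fold increase; {annual_log_growth:.0%} per year")
# -> 4.5e+09-fold increase; 56% per year
```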
There is every reason to believe that this pace of productivity growth in the leading
sectors will continue for decades. More than a generation ago Intel Corporation cofounder Gordon Moore noticed what has become Moore's Law--that improvements in
semiconductor fabrication allow manufacturers to double the density of transistors on a
chip every eighteen months. The scale of investment needed to make Moore's Law hold
has grown exponentially along with the density of transistors and circuits, but Moore's
Law has continued to hold, and engineers see no immediate barriers that will bring the
process of improvement to a halt anytime soon.
The new economy will have more examples of very high fixed costs and very low
marginal costs. Such a pattern can produce positive feedback: rising demand will often
produce higher efficiency, higher returns, and lower prices, leading to yet
higher demand. The old economy is driven by negative feedback: rising demand leads to
higher prices, which lead producers to produce more and consumers to
buy less, which restores equilibrium at a lower level of demand. In that sense, if
the agricultural and industrial economies were
"Smithian," the new economy may well be "Schumpeterian." But will that make any
difference for medium-run productivity growth?
Past "new economies," past economic "revolutions" have also seen extraordinary growth
in technology, the rise to dominance of new industrial sectors, and the transformation.
The fifty years after the invention of electricity, 1880-1930, saw an increase in the
mechanical horsepower applied to U.S. industry of perhaps a hundredfold and an
enormous increase in the flexibility of factory organization--a rate of technological
progress of more than nine percent per year (David (1990)). The hundred years from
1750 to 1850, the core of the (technological) industrial revolution itself, saw British
textile output multiply thirtyfold; in the middle of the eighteenth century it took handspinning workers 500 hours to spin a pound of cotton, but by the early nineteenth century
it took machine-spinning workers only 3 hours to perform the same task--a rate of
technological progress of ten percent per year sustained across half a century (Freeman
and Louca (2001)).
These earlier transformations revolutionized their economies' leading industries and
created "new economies": they changed the canonical sources of value and the process of
production. The industrial revolution itself triggered sustained increases in median
standards of living for the first time and a shift to a manufacturing- and then to a
services-heavy economic structure; it changed what people's jobs were, how they did
them, and how they lived more completely than any previous economic shift save the
invention of agriculture and the discovery of fire. The economic transformations of the
second industrial revolution driven by electrification and other late nineteenth-century
general-purpose technologies were almost as far reaching: mass production, the large
industrial enterprise, the continent- and then world-wide market in staple manufactured
goods, the industrial labor union, the social insurance state, even more rapid sustained
increases in median living standards, and the middle-class society.
How important will the information economy—the sectors and industries that have
extremely rapid productivity growth driven by the enormous and ongoing technological
revolutions in data processing and data communications—turn out to be? Will this wave
of innovation and technological development have consequences similar to the trio of
steam power, metal forging, and automatic machinery that powered the original British
Industrial Revolution and transformed economies and societies beyond recognition? Or
will it turn out to have a much smaller impact on long-run economic growth, as did
previous leading sectors like civil aviation, illumination, and chemical engineering—
leading sectors that produced astonishing leaps in productivity in their relatively narrow
sectors, but that had little long-run influence on the structure of the rest of the economy
or the rate of overall productivity growth?
Where Did the Post-1995 Productivity Speed-Up Come From?
(Difference in growth, 1995-2000 minus 1973-95, in percentage points per year)

                               Oliner-Sichel   Baily-Lawrence   Jorgenson et al.
Output per hour                     1.08            1.26             0.92
Contributions from:
  Capital                           0.34            0.44             0.52
    IT Capital                      0.59            0.59             0.44
    Other Capital                  -0.25           -0.15             0.08
  Labor Quality                     0.04            0.04            -0.11
  Multifactor Productivity          0.72            0.82             0.51
    Computer Sector MFP             0.47            0.18             0.27
    Other MFP                       0.25            0.64             0.24
The long-run economic impact of the "new economy" is likely to be very large indeed for
two reasons. First, the pace of technological progress in the leading sectors driving the
"new economy" is very rapid indeed, and will continue to be very rapid for the
foreseeable future. Second, the computers, switches, cables, and programs that are the
products of today's leading sectors are general-purpose technologies, hence demand for
them is likely to be extremely elastic. Rapid technological progress brings rapidly falling
prices.
Rapidly falling prices in the context of extremely elastic demand will produce rapidly growing expenditure shares. And the economic salience of a leading sector--its
contribution to productivity growth--is the product of the rate at which the cost of its
output declines and the share of the products it makes in total demand. Thus unless
Moore's Law ceases to hold or the marginal usefulness of computers and communications
equipment rapidly declines, the economic salience of the data processing and data
communications sectors will not shrink, but grow.
II. "Measured" vs. "True" Productivity Growth: Omissions and
New Goods
--Boskin Commission and other studies
III. "Measured" vs. "True" Productivity Growth: The Dot-Com
Bubble
--Overinvestment theories of booms
--If the economy is highly productive at making investment goods, overinvestment
will produce rapid productivity and output growth
--But to the extent that we value these investments put into place at their "real"
value, the boom is significantly smaller. (By how much?)
IV. Computer Capital and Economic Growth
--A model with one final good
--The final good produced by labor, regular capital, computer capital, and residual
--Take investment in these two types of capital as exogenous
--Moore's Law
--Conclusion: any belief that productivity growth will not be rapid in the 2000s
rests on a belief that the exponent on computer capital is about to fall off
a cliff.
In the nonfarm business sector—the part of the economy on which productivity studies
typically focus—output per labor hour rose between 1995 and 2000 at 2.5 percent per
year, more than double the pace seen in the preceding quarter century since 1970.[1] This
acceleration of productivity growth raises for the first time in a generation the likelihood
of reasonably rapid and broad-based real income growth, if the jump in productivity
growth can be sustained. In the long run, productivity growth and average income growth
must correspond. An era like that of 1970-1995 in which productivity growth is slow
must be, in Paul Krugman’s (1989) phrase, an “age of diminished expectations.”
The case for attributing this acceleration in productivity growth to the technological
revolutions in information technology is now very strong. If this attribution is correct,
then this reacceleration of productivity growth is the most significant macroeconomic
consequence of the “new economy,” and one that all by itself justifies focusing much
attention on computers and communications.
Back before 1995 critics of visionaries who saw the computer as transforming the world
pointed to slow and anemic growth in aggregate labor productivity. The end of the 1960s
saw the American economy undergo an aggregate productivity slowdown, in which the
trend growth rate of labor productivity fell by more than half. It seemed unreasonable that
what computer visionaries were touting as an extraordinary advance in technological
capabilities should be accompanied by a record-breaking slowdown in economic growth.
[1] Productivity growth starting in the early 1970s was anomalously and unexpectedly low—a phenomenon called the “productivity slowdown.” It is depressing to note that even now the causes of the productivity slowdown and of its persistence are not well understood. See Fischer (1988).
As Nobel Prize-winning MIT economist Robert Solow posed the question, if the
computer is so important "how come we see the computer revolution everywhere but in
the [aggregate] productivity statistics?"
After Solow wrote, productivity performance worsened still further. In the decade and a
half before Solow asked his question in 1987 output per hour grew at 1.1 percent per
year. In the eight years after 1987 output per worker grew at only 0.8 percent per year.
This "productivity paradox" was sharpened because at the microeconomic level
economists and business analysts had no problem finding that investments in high
technology had enormous productivity benefits. MIT economist Erik Brynjolfsson and
his coauthors found typical rates of return on investments in computers and networks of
more than fifty percent per year. Firms that invested heavily in information technology
and transformed their internal structures so that they could use their new technological
capabilities flourished in the 1980s and 1990s--and their lagging competitors did not.[2]
However, as Federal Reserve Board economists Oliner and Sichel (1994) pointed out in
the early 1990s, the then-failure to see the computer revolution in the aggregate
productivity statistics should not have come as a surprise.[3] In the 1970s and 1980s the
computer industry was simply too small a share of the economy and its output was not
growing fast enough for it to have a large impact on aggregate productivity. According to
their estimates, in the 1980s information technology capital—computer hardware,
software, and communications equipment—accounted for 3.3% of the income earned in
the economy, and the price-adjusted information technology capital stock was growing at
only 14% per year. You multiply these two numbers together to get an estimate of the
contribution of the information technology sector to economic growth: in this case, a
contribution of 0.49% per year.
[2] See Erik Brynjolfsson and Loren Hitt (1996); Erik Brynjolfsson (1993).

[3] An argument developed at greater length in Sichel (1997).
But beginning in 1992, the American economy began an extraordinary investment boom.
From 1992 to 2000, real business fixed investment grew at 11% per year, with more than
half of the additional investment going into computers and related equipment. And as the
information technology investment boom took hold, productivity growth and growth in
real GDP accelerated as well. Real GDP rose by an average of 3.9% per year between
1995 and 2000. Measured labor productivity in the nonfarm business sector—output per
hour worked—grew at 2.7% per year.
Initially some economists—most prominent among them Robert J. Gordon[4]—doubted
that the acceleration in labor productivity growth in the 1990s was anything more than an
unsustainable cyclical phenomenon. Indeed, as Figure 2 shows, labor productivity can
exhibit large swings from year to year: the boom in productivity growth in 1992 was a
one-time flash in the pan (although it did give rise to a series of papers on the “jobless
recovery”). The “Morning in America” boom in productivity growth of 1983-1986 was
also not sustained, at least in part because of high government budget deficits that
reduced capital accumulation. What reason is there to believe that this boom in the
second half of the 1990s is different?
One of the most powerful reasons to believe that this acceleration of aggregate
productivity growth is permanent, and not a flash in the pan, comes from the underlying
growth accounting of the impact of the information technology revolution. Consider the
standard growth-accounting calculations applied to a model with one final good…
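As a preview of that calculation, here is a minimal sketch. The Cobb-Douglas setup and the helper function are illustrative assumptions only (the paper's own specification is elided above); the income shares and growth rates are the Oliner and Sichel figures discussed in the surrounding text:

```python
# In a one-final-good model in which output is produced by labor, other
# capital, and computer capital, competitive factor pricing makes each
# factor's output elasticity equal its income share, so IT capital's
# contribution to annual output growth is simply
#   (IT share of income) x (growth rate of the IT capital stock).
def it_contribution(income_share: float, capital_growth: float) -> float:
    return income_share * capital_growth

print(f"1980s:      {it_contribution(0.033, 0.14):.2%} per year")   # ~0.5%
print(f"late 1990s: {it_contribution(0.070, 0.20):.2%} per year")   # 1.4%
```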
Back in the 1980s information technology capital accounted for 3.3% of income earned
in the economy; today according to Oliner and Sichel (2000) it accounts for 7.0% of
income earned. Back in the 1980s the economy’s stock of information technology capital
was growing at 14% per year; today according to Oliner and Sichel (2000) it is growing
at 20% per year. Multiply these two sets of numbers together to find that the increase
in the economy’s information technology capital stock was responsible for 0.5% per year
of economic growth in the late 1980s, and for 1.4% per year of economic growth today.[5]

[4] See Gordon (2000). At one level the differences between Gordon and the others are simply differences of emphasis: what is “large” enough for us to pay attention to? At another level, a key difference revolves around how one accounts for the effect of the business cycle, and what one would expect the effect of a fall in the natural rate of unemployment to be on potential output. According to Oliner and Sichel’s growth-accounting model, a 2.5 percentage point fall in the natural rate would boost potential output by the increase in employment times the share of income accruing to labor—a boost to potential output of perhaps 1.5%. According to Gordon’s more macro-oriented model, a 2.5 percentage point fall in the natural rate would boost potential output by the Okun’s Law coefficient of 2.5 times the change in the unemployment rate—a boost to potential output of 6.2%. I come down on the side of the first approach, largely because I believe that the effect on potential output could not be as large as Okun’s law suggests without generating markedly higher inflation, and we know that this decline in unemployment is associated with a fall in the natural rate and has not generated inflation. But it is not yet conclusively clear that Gordon’s analysis is wrong.
At this growth-accounting level of analysis, all of these factors are highly persistent. The
rate of growth of the economy’s information technology capital stock will not slow down
rapidly or immediately. For one thing, the same number of dollars spent on computers or
communication equipment today deliver perhaps three times as much in the way of real
useful capital as they did five years ago because of the extraordinary fall in computer and
communications equipment prices.[6] Even simple use of amortization funds to replace
obsolete computers will generate enormous rates of increase in the capital stock.
Moreover, there is every reason to think that the fall in computer and communications
equipment prices will continue. The pace of technological advance in information
technology has been well-described for three decades by what has come to be called
"Moore's Law"--the rule of thumb that Intel cofounder Gordon Moore's set out a
generation ago that the density of circuits we can place on a chip of silicon doubles every
eighteen months with little or no significant increase in cost. Moore's Law has held for
thirty years; it looks like it will hold for another ten at least. Moore's Law means that
today’s computers have 66,000 times the processing power of the computers of 1975. It
means that in ten years computers will be approximately 10 million times more powerful
than those of 1975 at roughly constant cost. The installed base of information processing
power has increased at least a million-fold since the end of the era of electro-mechanical
calculators in the 1950s. Such extraordinary increases in productivity in data processing
and data communications equipment manufacture have the potential to have a large
impact on overall productivity growth as long as the share of total income attributable to
computer capital does not collapse.
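A short sketch of the Moore's Law arithmetic behind these figures (the doubling-every-eighteen-months rule is the text's; mapping it onto particular year counts is my illustration):

```python
def moore_multiple(years: float, doubling_period_years: float = 1.5) -> float:
    """Fold-increase in chip density (and, roughly, processing power) after
    the given number of years, doubling every eighteen months."""
    return 2.0 ** (years / doubling_period_years)

print(f"{moore_multiple(24):,.0f}x")   # 65,536x: the text's "66,000 times" since 1975
print(f"{moore_multiple(34):,.0f}x")   # ~6.7 million-fold a decade further on:
                                       # the text's "approximately 10 million"
```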
[5] Oliner and Sichel’s conclusions are very similar to those of Jorgenson and Stiroh (2000). Both are backed up and strengthened in a very interesting series of papers by Nordhaus (2000a, 2000b, 2001) that I have not yet had a chance to fully digest.

[6] See Triplett (1999a, 1999b).
Will the share of total income attributable to computer capital collapse? Probably not.
One might wonder whether rapid improvements in a particular branch of industry will
rapidly run into diminishing returns. The first candlepower of light one can produce after
dark--with a candle or an oil lamp steady enough to read by--is a really big deal. The tenthousandth is not. The share of total income attributable to computer capital will remain
constant only if the productive value of the marginal computer declines no more rapidly
in percentage terms than the total computer capital stock increases. In theory there is no
reason that the productive value of the marginal computer might not decline very rapidly
indeed.
In theory the marginal returns to investment in computers could diminish very rapidly. In
practice this seems very unlikely to be the case. As John Zysman has pointed out, one
thing that makes computers likely to fit Bresnahan and Trajtenberg’s (1995) definition of
a true engine of growth, a true general purpose technology, is that each sequential fall in
the price of computers has been accompanied by an exponential increase in the demand
for computers because it makes feasible a whole new set of capabilities and uses.
V. Demand for High-Tech Goods and Economic Growth
--A model with two final goods
--Technological change produces shifts in the relative prices of these goods
--Demand elasticity is key to the productivity growth trend: is the elasticity
of demand for the goods whose prices are falling greater than one or
less than one?
--Assessing the demand elasticity for high-tech goods with rapidly falling prices.
--Conclusion: any belief that productivity growth will not be rapid in the 2000s
rests on a belief that the high-tech sectors have close to exhausted their
set of potential uses.
An alternative approach would be to look not at a model with one final good produced by
labor, non-IT capital, and IT capital, but to consider a model with two final goods—IT
products and non-IT products—in which the underlying dynamic of Moore’s Law
produces a sharp ongoing fall in the relative price of IT products. What then determines
whether productivity growth accelerates or not as the technological revolution in the
leading sector proceeds?
If total factor productivity in the rest of the economy is growing at a rate R, and
if total factor productivity in the leading industries and sectors is growing at a faster
rate L, then total factor productivity growth in the economy as a whole, g, will be equal to:

(1)    g = αL + (1 - α)R

where α is the share of total expenditure on the goods produced by the economy’s
fast-growing technologically-dynamic leading sectors.
As the process of innovation and technological revolution in the leading sectors proceeds,
we would not expect the leading sector share α of total expenditure to remain constant. If
the goods produced by the leading sectors are superior (or inferior) goods, the share α
will rise (or fall) as economic growth continues: only if the income elasticity of demand
εI for its products is one will changes in the overall level of prosperity leave the leading
sector share unchanged. If the goods produced by the leading sector have a high (or low)
price elasticity of demand, the falls over time in their relative prices will boost (or reduce)
the share of total expenditure α: only if the price elasticity of demand εP is one will the
fall in the relative price of leading sector products produced by the technological
revolutions leave the leading sector share unchanged.
Moreover, the leading sector share of total expenditure α matters only as long as the
leading sector remains technologically dynamic. Once the heroic phase of invention and
innovation comes to an end and the rate of total factor productivity growth returns to the
economy’s normal background level R, the rate of productivity growth in the economy
as a whole will return to that same level R and the leading sector share of expenditure α
will no longer be relevant.
Thus five pieces of information are necessary to assess the aggregate economic impact of
an explosion of invention and innovation in a leading sector:
--The initial share of expenditure on the leading sector’s products, α0.
--The magnitude of the relative pace of cost reduction, L - R, during the leading
sector’s heroic age of invention and innovation.
--The duration of the leading sector’s heroic age of invention and innovation.
--The income elasticity of demand εI for the leading sector’s products.
--The price elasticity of demand εP for the leading sector’s products.
To gain a sense of the importance of these factors, let’s consider a few simulations with
sample parameter values. For simplicity’s sake, set the initial share of expenditure on the
leading sector’s products α0 equal to 0.02, set the income elasticity of demand for the
leading sector’s products εI equal to 1.0, set the heroic age of invention and innovation to
a period 40 years long, and set the background level of total factor productivity growth R
to 0.01 per year, one percent per year. Consider three values for the price elasticity of
demand εP: 0.5, 2.0, and 4.0. And consider two values for the wedge in the annual rate of
technological progress between the leading sector and the rest: 0.03 and 0.05.
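The simulations that follow (reported here for the 0.05 wedge) can be reproduced with a short script. This is a minimal sketch: the logistic law of motion for the expenditure share is an assumed functional form, chosen so that the share stays between zero and one, not the paper's own code, but it reproduces the trajectories reported in the text:

```python
import math

R = 0.01        # background TFP growth rate: 1% per year
WEDGE = 0.05    # L - R: the leading sector's extra rate of productivity growth
SHARE0 = 0.02   # alpha_0: initial expenditure share of the leading sector

def share(t: float, price_elasticity: float) -> float:
    """Leading-sector expenditure share after t years. The sector's relative
    price falls at rate WEDGE, so log spending on it drifts at a rate of
    (price_elasticity - 1) * WEDGE, bounded logistically below one."""
    g = (price_elasticity - 1.0) * WEDGE
    e = SHARE0 * math.exp(g * t)
    return e / (1.0 - SHARE0 + e)

def aggregate_growth(t: float, price_elasticity: float) -> float:
    """Equation (1): g = alpha * L + (1 - alpha) * R = R + alpha * WEDGE."""
    return R + share(t, price_elasticity) * WEDGE

for eps in (0.5, 2.0, 4.0):
    print(f"elasticity {eps}: share(40) = {share(40, eps):.1%}, "
          f"aggregate growth(40) = {aggregate_growth(40, eps):.2%}")
# With elasticity 0.5 the share decays toward 0.7% (the "cost disease"
# scenario); with elasticity 4.0 it reaches ~89% and aggregate growth ~5.5%
# per year (the "economic revolution" scenario), as in the text.
```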
With a price elasticity of demand of 0.5, the expenditure share of the leading sectors
declines from its original value of 2% as technology advances and the prices of
leading-sector goods fall. With a productivity wedge of 5% per year, the initial rate of growth of
economy-wide productivity growth is 1.1% per year—1% from the background growth
of the rest of the economy, and an extra one-tenth of a percent from the faster
productivity growth in the one-fiftieth of the economy that is the leading sector. By the
twelfth year the expenditure share on leading sector products has fallen below 1.5%. By
the twenty-eighth year the expenditure share has fallen below 1.0%. By the fortieth year
the expenditure share has fallen to 0.7%.
The low initial and declining share of the leading sector in total expenditure means that
40 years of 6% per year productivity growth in the leading sector has only a very limited
impact on the total economy. After forty years total productivity in the economy as a
whole is only 2.54% higher than had the leading sector not existed at all. Rapid
productivity growth in the leading sector has next to no effect on productivity growth in
the economy as a whole because the salience of the leading sector falls, and the salience
of other sectors resistant to productivity improvement rises as technology advances. This
is Baumol and Bowen’s (1966) “cost disease” scenario: innovations become less and less
important because the innovation-resistant share of the economy rises over time. Indeed,
as time passes the rate of aggregate growth converges to the rate of growth in the
productivity-resistant rest of the economy.
By contrast, with a price elasticity of 4 the expenditure share of the leading sectors grows
rapidly from its original value of 2%. With a productivity growth wedge of 5% per
year, the leading sector share of spending surpasses 10% by year 12, 30% by year 20, and
reaches 89% by year 40. As the spending share of the leading sectors rises, aggregate
productivity growth rises too: from 1.1% per year at the start to 1.4% per year by year 10,
2.4% per year by year 20, 4.2% per year by year 30, and 5.4% per year by year 40. The
impact on the aggregate economy is enormous: total factor productivity after 40 years is
113% higher than it would have been had the leading sector never existed.
In these simulations, there is only one reason for the sharp difference in the effects of
innovation in the leading sector: the different price elasticities of demand for
leading-sector products in the two scenarios. The initial shares of leading sector products in
demand, the rate of technology improvement in the leading sector, and the duration of the
technology boom are all the same. But when demand for leading sector products is
price-elastic, each advance in technology and reduction in the leading sector’s costs raises the
salience of the leading sector in the economy and thus brings the proportional rate of
growth of the aggregate economy closer to the rate of growth in the leading sector itself.
By the end of the 40-year period of these simulations, the scenario with the price
elasticity of 4 has seen the leading sectors practically take over the economy, and
dominate demand. This is the “economic revolution” scenario: not only does productivity
growth accelerate substantially and material welfare increase, but the structure of the
economy is transformed as the bulk of the labor force shifts into producing leading-sector
products and the bulk of final demand shifts into consuming leading-sector products.
VI. Can We Build a Model in Which Productivity Growth in the
2000s Will Be Slow?
--Okun's law and the falling NAIRU: how much of rapid productivity growth
can be attributed to simply a high-pressure economy?
--The dot-com bubble--causing overinvestment, and artificially generating a
"false" high value for the elasticity of demand for high-tech goods.
--Learning about productivity growth: maybe the NAIRU will rise rapidly.
--Conclusion: if everything goes wrong at once--if workers' wage growth
aspirations rise quickly to return the NAIRU to its pre-1995 level,
if the high-pressure economy unwinds according to Okun's law, and
if "true" elasticity of demand for high-tech goods is relatively low-then we should expect productivity growth to be at a rate of X
Back at the start of the 1990s most macroeconomists estimated that the economy’s
natural rate of unemployment was between 6.5 and 7.0 percent. If unemployment fell
below that level, it was argued, inflation would begin to accelerate. Thus a Federal
Reserve that wished to avoid major recessions by maintaining the public’s confidence in
its lack of tolerance for inflation could not afford to let the unemployment rate fall below
6.5 percent. These estimates were based on long historical experience, summarized in
Figure 3 which shows the track of inflation and unemployment in the U.S. economy since
1960. In the 1960s inflation increased when the unemployment rate fell below 5.5%. In
the early 1970s, it seemed as though inflation fell when the unemployment rate rose
above 5.5%, but then came the major accelerations in inflation produced by the oil price
shock of 1973, and by the late 1970s it seemed as though it required an unemployment
rate of 6.5% or more to put downward pressure on inflation.
In the 1980s, the workings of the labor market seemed worse: only when unemployment
rose above 7% in the early 1980s did inflation fall noticeably. And in the late 1980s and
early 1990s it seemed as though inflation rose whenever the unemployment rate fell
below 6.5%, and fell when the unemployment rate rose above 6.5 percent.[7] Just about
the time in the mid-1990s when the aggregate rate of productivity growth began to boom,
the comovements of inflation and unemployment went off track. The fall in
unemployment to 6% in the mid-1990s did not lead to any acceleration in inflation, nor
[7] For a more formal econometric analysis of the time-varying natural rate of unemployment—one that stresses the uncertainty surrounding our estimate of the natural rate at any moment in time—see Staiger, Stock and Watson (1997).
did the fall in unemployment to 5% and then 4.5% in the late 1990s. Only in the last year
and a half or so, as the unemployment rate has fallen to 4%, have there been any signs of
rising inflation. In the early 1970s most macroeconomists thought the NAIRU was in the
range of 5 to 5.5 percent. By the early 1990s most macroeconomists thought the NAIRU
was in the range of 6 to 7 percent. So nearly all macroeconomists have been surprised by
the stunningly swift fall in the NAIRU down to somewhere in the neighborhood of 4.5
percent by the late 1990s.
To what extent might the productivity boom of the late 1990s be the result of the fall in
the NAIRU? According to Okun’s Law a 2 percentage point fall in the unemployment
rate would be linked to a five percentage point rise in the level of output relative to
potential output—enough to by itself drive a one percentage-point acceleration of
economic growth over a five-year period.
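A back-of-the-envelope version of that calculation, sketching only the arithmetic in the preceding paragraph:

```python
OKUN_COEFFICIENT = 2.5     # output gap, in percent, per point of unemployment
UNEMPLOYMENT_FALL = 2.0    # percentage points, mid- to late 1990s
YEARS = 5

level_boost = OKUN_COEFFICIENT * UNEMPLOYMENT_FALL  # 5% rise relative to potential
growth_boost = level_boost / YEARS                  # spread over five years

print(f"level boost: {level_boost:.0f}%; "
      f"growth acceleration: {growth_boost:.0f} percentage point per year")
```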
It is, however, possible that the natural rate of unemployment is linked to the rate of
economy-wide productivity growth. The era of slow productivity growth from the
mid-1970s to the mid-1990s saw a relatively high natural rate. By contrast, rapid productivity
growth before 1973 and after 1995 has been associated with a lower natural rate. If
workers' aspirations for real wage growth themselves depend on the rate of
unemployment and do not depend directly on productivity growth, then a speedup in
productivity growth will reduce the natural rate.
If productivity growth is relatively slow, then a low rate of unemployment will lead
workers to demand high real wage increases—real wage increases above the rate of
productivity growth. But firms cannot continuously grant real wage increases higher than
the rate of productivity growth and still remain profitable. Long before their profits
disappear they will respond to the higher real wage growth aspirations and demands by
economizing on workers. Unemployment will rise until the average unemployment rate is
high enough to curb worker aspirations for real wage growth to a level consistent with
trend productivity growth.
With a higher rate of productivity growth, firms can afford to pay higher real wage
increases without going bankrupt. The unemployment rate consistent with real wage
growth aspirations that match productivity is lower. Hence an economy with higher
productivity growth has a lower natural rate of unemployment.
The attribution of the fall in the NAIRU in the 1990s to the “new economy”—as an
indirect consequence of the acceleration in productivity growth—is plausible and
enticing, but far from proven.
If we look far back in history at the long bull runs of the American stock market—
1890-1910, or 1920-1930, or 1950-1970—we see that for each 10% that the real value of
dividends rises over a twenty-year period, the real value of stock prices tends to rise by
15%. But if we look just at the most recent bull market—the one that started in 1982—we
find that a market-wide rise in dividends of 10% produces not a 15% but a 26% increase
in stock prices. The runup in stock prices during the 1920s was extraordinary, but in real
terms the increase in dividends paid out in the 1920s, and the increase in corporate
profitability, was more than half of the increase in real stock market values. The runup in
stock prices during the 1950s and 1960s was extraordinary too, but in real terms increases
in dividends and in earnings were two-thirds as large as the increase in real values.
The most recent bull market, as measured by the S&P composite index, is the largest: a
more than seven-fold increase in real values. Yet real dividends paid on a pro-rata share
of the S&P composite index have risen by less than 30% since the early 1980s. And
earnings on a pro-rata share have increased by less than 50 percent.
Any economist examining this pattern must reach one of two conclusions (or hedge his or
her bets by taking a position between them). The first is that for a century the stock
market has been grossly underpriced—has discounted the risk associated with owning
equities at a much too high rate. It is only now that equity valuations are “fair” in the
sense of promising expected real returns on stocks akin to those on bonds plus a small
extra risk premium. The second is that the stock market today is subject to irrational
exuberance on a scale never before seen in America.[8]
If the second conclusion is correct, what role has the “new economy” played in drawing
tighter limits around the stabilizing potential of arbitrage[9] and in diminishing the
information about fundamentals in the hands of the marginal investor? Odean and Barber
(2001) point out that experimental economists have spelled out conditions under which
markets are most vulnerable to prolonged mispricing and to speculative bubbles, and that
our current stock market as it has been fueled by the growth of online trading and online
information appears to meet all of them. Stock markets have managed to generate
prolonged mispricing and spectacular crashes in the absence of the internet in the past.
But there is definitely reason to worry that the extra information about and access to the
stock market provided by the information technology revolution has not led to a more
informed marginal investor, or to a market that is a better judge of fundamental values.

[8] The conclusion reached by Robert Shiller (2000), who backs up his quantitative estimates of fundamental values with a great deal of thick description of the thought processes of market participants. Of course, only the thought processes of the marginal agent are truly relevant to assessing the information content of prices.

[9] A way of thinking about the problem of noise trading developed by Shleifer and Vishny (1997).
If the future of the "new economy" is as bright as the previous section suggests, then why
have high-tech stock market values fallen so far in the past year and a half? There is a
strand of today's conventional wisdom that holds that the crash of the NASDAQ reveals
that the "new economy" was smoke and mirrors. It was the irrational exuberance that
often breaks out at the peak of a boom, not any deeper or permanent change in the
economy. But it is more likely that the crash of the NASDAQ was the result of the
realization by investors that the "new economy" was, in most sectors and for most firms,
likely to lead not to large quasi-rents from established market positions but to heightened
competition and reduced margins.
The exuberance that pushed the NASDAQ so high in 1999 and early 2000 rested on the
belief that the technological leap forward in data processing and data communications
technologies had created a large host of winner-take-all markets in which increasing
returns to scale were the dominant feature. For an information good--a computer program, a
piece of online entertainment, or a source of information--the work only needs to be done
once, and then the good can be distributed to a potentially unlimited number of consumers for
pennies: producing at twice the scale gains you nearly a 50% cost advantage. Moreover,
information goods produced at larger scale are more valuable to consumers. The version
with the largest market share becomes the standard. It is the easiest to figure out how to
use, the easiest to find support for, and the one that works best with other products (which
are, of course, designed to work best with it).
In that part of the new economy dominated by supply-side economies of scale and
demand-side economies of scope, a firm that establishes a market-share lead gains a
nearly overwhelming position. Its products are most valuable to customers. Its cost of
production is the least. Unless its competitors are willing to take extraordinary and
extraordinarily costly steps--like those Microsoft took against Netscape, pouring a
fortune into creating a competitive product and then distributing the competing Internet
Explorer for free--the first firm to establish a dominant market position will reap high
profits as long as its sector of the industry lasts.
But increasing returns to scale and winner-take-all markets are not the only or even the
primary consequence of high-tech's technological revolution. It is at least as likely that
innovations in computer and communications technologies are competition's friends.
They erode the frictions that in the past gave nearly every producer in the economy a little bit of
monopoly power. They enable swift searches that reveal the prices and qualities of every
single producer, while in the past such information could only be acquired by a lengthy,
costly, painful process. In the past you could comparison-shop only by trudging from
store to store. In the present you can use the world wide web.
Thus in the "new economy" more markets will be contestable. Competitive edges based
on past reputations or brand loyalty or advertising footprints will fade away. As they do
so profit margins will fall: competition will become swifter, stronger, more pervasive,
and more nearly perfect.
Consumers will gain and shareholders will lose. Those products that can be competitively
supplied will be supplied at very low margins. The future of the technology is bright; the future of the
the profit margins of businesses--save for those few that truly are able to use economies
of scale to create mammoth cost advantages--is dim. Is it really possible for anyone to
acquire significant economies of scale by writing a single suite of software that will cover
the heterogeneous purchasing requirements of millions of businesses seeking to
streamline their operations by using the internet? Is it really possible for anyone to
acquire significant economies of scale by using the internet to distribute information
about groceries? The NASDAQ crash was the result of the marginal investor's realizing
that the odds were heavily against it. But the NASDAQ crash tells us little about the future
of the underlying technologies, or about their true value.
The end of a period of high euphoria and extravagant boom will inevitably bring a
reduction in investment in the economy's leading sectors. This reduction will not by itself
bring about a Great Depression--or even more than a period of "readjustment"--as long as
other sources of demand are present and able to absorb the slack in productive resources
created by the end of high euphoria. However, managing this expenditure-switching is a
very delicate macroeconomic task.
Moreover a euphoric boom is a period during which people stop thinking as intensely
about problems of macroeconomic management and the business cycle. Ironically, it is
precisely during euphoria that countercyclical policy becomes less important, but it is in
the aftermath of euphoria that countercyclical policy becomes more important than at any
other time. For example, nobody in Japan in the late 1980s paid any attention to problems
of business cycle management. Few in Japan in the early 1990s paid sufficient attention
to the business cycle. And the Japanese economy and the world economy today are
suffering from that lapse.
VII. Conclusion
What determines whether demand for a leading sector’s products is price-inelastic—in
which case we are in Baumol and Bowen’s “cost disease” scenario in which
technological progress in the leading sector barely affects the aggregate economy at all—
or price-elastic—in which case we are in the “economic revolution” scenario, and
everything is transformed? What determines the income and price elasticities of demand
for the high-tech goods that are the products of our current leading sectors?
The more high-tech products are seen as "luxury" goods, and the greater the number of
different uses found for high-tech products as their prices decline, the larger will be the
income and price elasticities of demand--and thus the stronger will be the forces pushing
the expenditure share up, not down, as technological advance continues.
Modern silicon and fiber-based electronics technologies may well fit Bresnahan and
Trajtenberg's (1995) definition of a "general purpose technology"--one useful not just for
one narrow class but for an extremely wide variety of production processes, one for
which each decline in price appears to bring forth new uses, one that can spark off a
long-lasting major economic transformation. Such general purpose technologies are, as
Bresnahan and Trajtenberg say, “engines of growth”: precisely because they have a wide
range of potential uses, and are complementary to a large proportion of other inputs, their
price elasticity of demand is likely to be high.
The possibility of the demand-side externalities called “network effects”—Metcalfe’s
law, the idea that the value of any network is proportional to the square of the number of
connected nodes—raises the likely elasticity of demand still further. (However, offsetting
the point that value is proportional to the square of the size of the network is the point
that the most valuable nodes are likely to be connected to the network first—a point that
Paul Krugman (2000) has made and called “DeLong’s law.”)
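A toy illustration of that offsetting point. The 1/k decay of node values below is purely an assumption for illustration, not a claim about actual networks:

```python
# Metcalfe's law: with n equally valuable nodes, network value scales as n**2.
# But if the most valuable nodes connect first--here the k-th node added is
# assumed, purely for illustration, to be worth 1/k--total value grows far
# more slowly than the square law suggests.
def network_value(n: int) -> float:
    """Sum of pairwise values value_i * value_j over all distinct pairs."""
    values = [1.0 / k for k in range(1, n + 1)]
    total = sum(values)
    return (total * total - sum(v * v for v in values)) / 2.0

for n in (10, 100, 1_000):
    print(n, round(network_value(n), 2))
# Value grows roughly like (log n)**2 rather than n**2.
```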
The wide potential domain of use of information technology is one sign that it is truly a
high-elasticity general purpose technology. Selling plastic doghouses in warehouse stores
in middle America is not usually thought of as a high-tech enterprise. Yet Wal-Mart’s
extraordinary efficiency advantage over other retailers in the 1980s and 1990s can be
credited in large part to its early investments in modern information technology, and to
careful thought about, and skilled execution of, the ways in which modern information technology can
economies of distribution. As Wal-Mart founder Sam Walton (1992) wrote in his
autobiography:
Nowadays, I see management articles about information sharing as a new
source of power in corporations. We’ve been doing this from the days
when we only had a handful of stores. Back then, we believed in showing
a store manager every single number relating to his store, and eventually
we began sharing those numbers with the department heads in our stores.
We’ve kept doing it as we’ve grown. That’s why we’ve spent hundreds of
millions of dollars on computers and satellites--to spread all the little
details around the company as fast as possible. But they were worth the
cost. It’s only because of information technology that our store managers
have a really clear sense of what they’re doing most of the time.
In addition, the history of the electronics sector suggests that the income and price
elasticities tend to be high, not low. Each successive generation of falling prices for
computers, switches, and cables has produced radically new uses for computers and
communications equipment.
The first, very expensive, computers were seen as good at performing complicated and
lengthy sets of arithmetic operations. The first leading-edge applications of large-scale
electronic computing power were military: the burst of innovation during World War II
that produced the first one-of-a-kind hand-tooled electronic computers was totally funded
by the war effort. The coming of the Korean War won IBM its first contract to actually
deliver a computer: the million-dollar Defense Calculator. Military demand in the
1950s and the 1960s from projects such as Whirlwind and SAGE [Semi-Automatic Ground
Environment]--a strategic air defense system--both filled the assembly lines of computer
manufacturers and trained the generation of engineers that designed and built the machines that followed.
The first leading-edge civilian economic applications of large--for the time, the 1950s--
amounts of computer power came from government agencies like the Census and from
industries like insurance and finance which performed lengthy sets of calculations as they
processed large amounts of paper. The first UNIVAC computer was bought by the
Census Bureau. The second and third orders came from A.C. Nielson Market Research
and the Prudential Insurance Company. This second, slightly cheaper, generation of
computers was used not to make sophisticated calculations, but to make the extremely
simple calculations needed by the Census, and by the human resource departments of
large corporations. The Census Bureau used computers to replace their electro-mechanical
tabulating machines. Businesses used computers to do the payroll, report-generating,
and record-analyzing tasks that their own electro-mechanical calculators had
previously performed.
The next generation of computers--exemplified by the IBM 360 series--was used to
stuff data into and pull data out of databases in real time--airline reservations processing
systems, insurance systems, inventory control. It became clear that the computer was
good for much more than performing repetitive calculations at high speed. The computer
was much more than a calculator, however large and however fast. It was also an
organizer. American Airlines used computers to create its SABRE automated
reservations system, which cost as much as ten airplanes (see Cohen, DeLong, and
Zysman (2000)). The insurance industry automated its back office sorting and
classifying.
Subsequent uses have included computer-aided product design, applied to everything
from airplanes designed without wind-tunnels to pharmaceuticals designed at the
molecular level for particular applications. In this area and in other applications, the
major function of the computer is not as a calculator, a tabulator, or a database manager,
but is instead as a what-if machine. The computer creates models of what would
happen if the airplane, the molecule, the business, or the document were to be built up in
a particular way. It thus enables an amount and a degree of experimentation in the virtual
world that would be prohibitively expensive in resources and time in the real world.
The value of this use as a what-if machine took most computer scientists and computer
manufacturers by surprise. None of the engineers designing software for the IBM 360
series, none of the parents of Berkeley UNIX--nobody before Dan Bricklin programmed
Visicalc--had any idea of the utility of a spreadsheet program. Yet the invention of the
spreadsheet marked the spread of computers into the office as a what-if machine. Indeed,
the computerization of America's white-collar offices in the 1980s was largely driven by
the spreadsheet program's utility--first Visicalc, then Lotus 1-2-3, and finally Microsoft
Excel.
For one example of the importance of a computer as a what-if machine, consider that
today's complex designs for new semiconductors would be simply impossible without
automated design tools. The process has come full circle. Progress in computing depends
upon Moore's law; and the progress in semiconductors that makes possible the continued
march of Moore's law depends upon progress in computers and software.
As increasing power has enabled computers to take on real-time control, the domain has
expanded further as lead users have figured out new applications. Production and
distribution processes have been and are being transformed. Moreover, it is not just
robotic auto painting or assembly that has become possible, but scanner-based retail
quick-turn supply chains and robot-guided hip surgery as well.
In the most recent years the evolution of the computer and its uses has continued. It has
branched along two quite different paths. First, computers have burrowed inside
conventional products as they have become embedded systems. Second, computers have
connected outside to create what we call the world wide web: a distributed global
database of information all accessible through the single global network. Paralleling the
revolution in data processing capacity has been a similar revolution in data
communications capacity. There is no sign that the domain of potential uses has been
exhausted. So far there are no good reasons to believe that the economic salience of
high-tech industries is about to decline, or that the pace at which innovation continues is
about to flag.
There is room for computerization to grow on the intensive margin, as computer use
saturates potential markets like office work and email. But there is also room to grow on
the extensive margin, as microprocessors are used for tasks like controlling hotel room
doors or changing the burn mix of a household furnace that few, two decades ago, would
ever have thought of.
Thus the balance of probabilities is that the elasticity of demand for the products of our
current high-tech computer and communications leading sectors is high, not low. Because
of the general purpose nature of the technology, it has an enormous number of potential
uses, many of which have not yet been developed. The way to bet is that our new
economy will have not a limited but an enormous impact on how we live.