1.1
Firms Text Book, Chapter 5: Evaluating Market Power
Economic theory and competition:
The rationale for the link between seller concentration and price in an industry is that firms
operating in concentrated markets are sheltered from competitive pressures, and are therefore
able to charge prices substantially higher than those they would charge if they competed
against many other firms, earning monopoly profits. Economic theory provides policy
makers and courts of law with theoretical tools to assess whether the impact of each merger
has damaging effects on the economy through the increase in the level of seller concentration.
The work of Mason (1929) and Bain (1959) laid the foundations for the neoclassical economic
analysis of the impact of mergers on competition in a market through the structure-conduct-performance (SCP) model. The SCP model states that firms in concentrated industries
tend to collude and to gain supernormal profits at the expense of consumers, who
have to pay high prices, even though collusive behaviour can be hard to sustain even
in very concentrated markets.
The empirical method developed by Bain was the cross-sectional approach. Bain's primary unit
of evaluation in the model was the 'industry', or competing group of firms. A wide range of
industries was investigated in one time period.
Let us start by reviewing the economic theory underpinning the SCP paradigm. The basic
components of the SCP model are shown in Figure 5.1. The model, in its basic form, predicts
that industrial performance depends on the conduct or behaviour of firms in the marketplace,
and that this conduct in turn depends on market structure. Market structure is determined by a
number of basic supply and demand conditions assumed to be given.
Figure 5.1 The Structure-Conduct-Performance Model:
Basic conditions:
  Demand: growth, price elasticity, cyclical character
  Supply: technology, raw materials, value/weight
Structure: seller concentration, product differentiation, barriers to entry
Conduct: pricing behaviour, advertising behaviour, research and development activity
Performance: allocative efficiency, technical progress
Defining Performance:
Performance should be clearly defined when used in the SCP model. Industrial performance
refers to the extent to which industries operate to maximize economic welfare. This is the
static view of economic efficiency: allocative efficiency. In perfect competition, allocative
efficiency requires that prices be set equal to the marginal costs of production throughout the economy.
Figure 5.2 shows that if firms are able to restrict output in order to maintain price (P) above
marginal cost (MC), this leads to a misallocation of resources and a loss of economic welfare. If
a few oligopolistic firms in an industry manage to collude, they can maximize total profits by
behaving like a single monopolistic firm.
Figure 5.2: The welfare effects of competition and monopoly.
[Diagram: price and cost on the vertical axis, quantity on the horizontal axis. The demand
curve AR = D falls from point A; the horizontal line MC = AC cuts it at E; the MR curve lies
below the demand curve. The monopolist charges Pm and sells Qm, with B the point (Qm, Pm)
on the demand curve and G the point (Qm, Pc) on the cost line; the competitive outcome is
price Pc = MC and quantity Qc, at point E.]
Figure 5.2 compares the welfare effects (or performance) of perfect competition and monopoly.
Constant returns to scale are assumed; that is, marginal cost equals average cost (MC = AC)
at all levels of output. In the case of monopoly, the expansion of the industry's output
corresponds to the expansion of the monopolist, while in the case of perfect competition
the expansion of output is due to the entry of new small firms, so no individual firm
experiences diseconomies of scale.
The monopolist produces the quantity Qm at which MC = MR, sets a price greater than MC
(Pm > MC), sells the quantity Qm, and gains supernormal profits equal to the area PmBGPc.
In the competitive industry, firms cannot set price above MC (P = MC), and the quantity
produced is Qc. The amount of supernormal profit in the industry is then zero.
To measure the loss of consumer welfare due to monopoly, we employ the concept of consumer
surplus. The demand curve AR = D shows the amount consumers are willing to pay for each
additional unit of the product. While consumers pay the market price Pc for all units of the good
purchased in a competitive market environment, some consumers would be willing to pay more
than Pc for units up to Qc (the demand curve lies above Pc). For each unit of output below Qc,
the difference between the price the consumer is willing to pay and the price actually paid is the
consumer surplus. The triangular area AEPc represents total consumer surplus when an industry
is perfectly competitive.
The area ABPm represents consumer surplus under monopoly. The decrease in consumer
surplus when a competitive industry becomes a monopoly is therefore the area PmBEPc. This
decrease can be split into two parts. The area PmBGPc is a redistribution of the surplus from
consumers to the owners of the firm in the form of supernormal profits. The triangle BEG is
called the dead-weight welfare loss resulting from monopoly power. The source of the dead-weight
welfare loss is the withdrawal from the market of consumers reacting to the distorted
price signal Pm sent by the producer, which is above the opportunity cost MC of producing the
forgone output Qc - Qm.
This analysis shows that the increase of price above MC due to monopoly creates a
misallocation of resources among industries. Too little of the good is produced and purchased,
so consumers spend their money on other goods and services, although they would have
preferred the original good at price Pc.
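The welfare comparison can be checked with a short numerical sketch. The linear demand curve P = 100 - Q and the constant marginal cost of 20 below are hypothetical illustrative values, not figures from the text; the computed areas correspond to PmBGPc and BEG in Figure 5.2.

```python
# Monopoly vs. perfect competition under constant returns (MC = AC).
# Hypothetical linear inverse demand P = a - b*Q; all numbers illustrative.
a, b = 100.0, 1.0   # inverse demand: P = 100 - Q
c = 20.0            # constant marginal (= average) cost

q_c = (a - c) / b         # competitive output Qc, where P = MC
p_c = c                   # competitive price Pc = MC

q_m = (a - c) / (2 * b)   # monopoly output Qm, where MR = a - 2bQ = MC
p_m = a - b * q_m         # monopoly price Pm read off the demand curve

profit = (p_m - c) * q_m             # supernormal profit: area PmBGPc
dwl = 0.5 * (p_m - c) * (q_c - q_m)  # dead-weight loss: triangle BEG

print(q_m, p_m, profit, dwl)  # 40.0 60.0 1600.0 800.0
```

Note that the dead-weight loss here is exactly half the monopoly profit; this is a feature of linear demand with constant costs, not a general result.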
Question:
Look back at Figure 5.1. How might industrial performance affect the structure and conduct
elements of the SCP model, as shown by the dashed arrows in the figure? (Page 127.)
--------------------------------------
Hypothesis Testing:
We need to translate our theoretical model into a mathematical form. We can only obtain an
estimate of industry profits, and that estimate contains the overall effect of random influences.
Marc Wuyte compared the problem facing econometricians to tuning a radio: both amount to
distinguishing sound from noise:
MODEL = STRUCTURE + NOISE
What do we mean when we say that we test a statistical hypothesis?
How do we test a hypothesis for samples of different sizes?
It is important that you work through the exercises in order to gain familiarity with the
procedures while you learn them.
The first step is to derive a statistical hypothesis from the theory that we want to test. Bain's
SCP model predicts that profit margins in concentrated industries are on average above those in
non-concentrated industries; that is, the difference in average profits > 0. The theory is presumed
false until proven true, so we start by assuming that concentration and profits are not related.
Thus, if we find a positive difference between average profits in concentrated and
unconcentrated industries, this might be due just to chance. This starting point is called the null
hypothesis.
The null hypothesis, H0, is the model we believe until the data reject it with a certain
degree of significance.
What is the null hypothesis in the case of the SCP model? The answer is on page 135.
The second step is to test the null hypothesis. Here, we need to collect a sample of observations
from the population. Random sampling ensures that the sample is obtained in such a way that the
investigator cannot be accused of selecting data that are more likely to support the model.
Try to solve exercises 5.1 and 5.2, listed on page 143 and page 148:
Calculate the standardized score of the values 1.5, 0.9 and 1.1, given a mean of 1.1 and a
standard deviation of 0.
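As printed, the exercise gives a standard deviation of 0, which would make the standardized score undefined; the intended value appears to have been lost. A minimal sketch of the calculation, using a hypothetical standard deviation of 0.2 purely for illustration:

```python
# Standardized score (z-score): z = (x - mean) / sd.
# sd = 0.2 is a hypothetical stand-in; the exercise's printed value (0)
# is unusable, since dividing by a zero standard deviation is undefined.
def z_score(x, mean, sd):
    return (x - mean) / sd

mean, sd = 1.1, 0.2
scores = [z_score(x, mean, sd) for x in (1.5, 0.9, 1.1)]
print(scores)
```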
Then, let us discuss exercise 5.3, listed on page 153:
Q1. A random sample of size 16 was taken from an assumed normal distribution to
test the hypothesis that the population mean was 40. If the sample had mean 45
with standard deviation 6:
a) state the null hypothesis,
b) state the alternative hypothesis.
c) using a 5% significance level, use t tables to find tcritical,
d) calculate the value of ttest,
e) decide whether to accept or reject the null hypothesis and explain your answer.
Answer:
a) H0: µ = 40, where µ (pronounced 'mu') is the population mean. Our null hypothesis, the
model we believe until the data prove it false, is that the population mean, µ, is 40. If the
data are inconsistent with this hypothesis and we no longer believe that the population
mean is 40, we accept the alternative hypothesis, HA. The alternative hypothesis here is
that the population mean is not equal to 40.
b) HA: µ ≠ 40
c) tcritical = ±2.131 (listed in Table 5.4, page 152): the degrees of freedom are the sample
size minus 1, that is 16 - 1 = 15, which under 5% gives 2.131.
d) ttest = (sample value - hypothesized value) / standard error
= (45 - 40) / 1.5 = 3.33
The standard error can be found using the formula listed on page 149: the standard
deviation (6) divided by the square root of the sample size (√16 = 4), which gives 1.5.
Use your calculator to do this calculation.
e) Do we accept or reject the null hypothesis? Since the test statistic falls in the rejection
region (3.33 > 2.131), we reject the null hypothesis and conclude that µ ≠ 40, as
shown in the following figure:
[Diagram: the t distribution with critical values -2.131 and +2.131; rejection regions of
2.5% lie in each tail beyond the critical values, and the acceptance region lies between
them, centred on 0.]
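Parts (c)-(e) can be sketched in a few lines; the critical value 2.131 is the one read from the t table cited above.

```python
import math

# One-sample t test: t = (sample mean - hypothesized mean) / standard error,
# with standard error = s / sqrt(n).
n, sample_mean, s = 16, 45.0, 6.0
mu0 = 40.0

se = s / math.sqrt(n)               # 6 / 4 = 1.5
t_stat = (sample_mean - mu0) / se   # 5 / 1.5 = 3.33

t_critical = 2.131  # t table, 15 degrees of freedom, 5% two-tailed
reject = abs(t_stat) > t_critical
print(round(t_stat, 2), reject)  # 3.33 True
```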
Q2. If, in the usual notation n=25, sample value = 67, standard deviation s =8, test
at the 5% significance level the hypothesis that the population mean µ could be: (a)
65; (b)70; (c) 72.
Answer:
First, we get the tcritical value: degrees of freedom n - 1 = 25 - 1 = 24; under 5% this gives ±2.064.
Then we apply the formula: ttest = (sample value - hypothesized value) / standard error,
where the standard error is 8 / √25 = 1.6.
a) for sample value 67, s = 8, n = 25 and hypothesized value, µ, of 65: ttest = 1.25
b) for sample value 67, s = 8, n = 25 and hypothesized value 70: ttest = -1.875
c) for sample value 67, s = 8, n = 25 and hypothesized value 72: ttest = -3.125
The test statistics in (a) and (b) are in the range ±2.064, so we would accept the null hypothesis
in these cases at the 5% level of significance. However, we reject the null hypothesis in (c),
since -3.125 is not in the range ±2.064.
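The three tests share the same sample, so they can be run in one loop; 2.064 is the critical value derived above.

```python
import math

# Q2: test hypothesized means 65, 70 and 72 against a sample with
# n = 25, mean 67 and standard deviation 8.
n, sample_mean, s = 25, 67.0, 8.0
se = s / math.sqrt(n)   # 8 / 5 = 1.6
t_critical = 2.064      # t table, 24 degrees of freedom, 5% two-tailed

for mu0 in (65.0, 70.0, 72.0):
    t_stat = (sample_mean - mu0) / se
    verdict = "accept H0" if abs(t_stat) <= t_critical else "reject H0"
    print(mu0, t_stat, verdict)
```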
Q3. (a) An economist takes a random sample of 12 companies within the SIC
categories 200 to 299 and calculates their average PCM (price-cost margin) for 1990
as 0.75, with standard deviation 0.0877. Test at the 5% significance level the
hypothesis that the population mean could be 0.775.
Answer:
First, we get the tcritical value in the same manner as above. Calculate it; the result is ±2.201.
Then calculate ttest using the same formula listed on page 150; the result is 0.976.
This value is not significant, so we accept the null hypothesis. The data are therefore consistent
with the overall mean value being 0.775.
Q3, (b) The economist also takes a random sample of 35 companies from within the
SIC categories 400 to 499. These companies had a mean PCM of 0.80 with standard
deviation 0.1. Test at the 5% significance level the hypothesis that the overall
average could be 0.713.
Answer:
tcritical = ±1.96 at the 5% significance level (the large-sample value).
ttest = (sample value - hypothesized value) / standard error = 5.147
This value is significant, so we reject the null hypothesis. The data are therefore not consistent
with the overall mean value being 0.713.
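Both parts can be checked with the same helper. Recomputing part (a) gives a test statistic of about -0.99 rather than the 0.976 quoted above, presumably a rounding difference somewhere in the transcript; either way it lies inside ±2.201, so the conclusion is unchanged.

```python
import math

def t_stat(n, sample_mean, s, mu0):
    """t = (sample mean - hypothesized mean) / (s / sqrt(n))."""
    return (sample_mean - mu0) / (s / math.sqrt(n))

# Q3(a): n = 12, mean PCM 0.75, s = 0.0877, H0: mu = 0.775.
t_a = t_stat(12, 0.75, 0.0877, 0.775)
# Q3(b): n = 35, mean PCM 0.80, s = 0.1, H0: mu = 0.713.
t_b = t_stat(35, 0.80, 0.1, 0.713)

print(round(t_a, 3), round(t_b, 3))  # -0.987 5.147
```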
How do we test the SCP model with multiple regression?
We select three explanatory variables: concentration (CR), advertising intensity (ADINT) and
industry sales growth (GROW). These variables can be measured using data from the Census
of Production. We assume a linear relationship between the dependent and independent
variables of the following form:
PCM1 = β0 + β1 CR + β2 ADINT + β3 GROW + u
This tells us that PCMs are determined by a constant term and the additive effects of the
independent variables: concentration, advertising and industry sales growth. Our theoretical
model tells us that PCM should be positively related to the independent variables.
For each coefficient β, we specify the null and alternative hypotheses. The result of the
estimation, carried out with multiple regression analysis on the data relating to 1990 for the 95
industries included in our data set, is:
PCM1 = 0.5638 + 0.1109 CR + 1.6541 ADINT + 0.0928 GROW + u
        (0.0247)  (0.0350)    (0.2691)       (0.0766)
The standard errors of the coefficients, starting with the constant term, are reported in brackets.
The test statistics for each coefficient can be calculated using the usual formula (see page
166).
ttest for the CR variable = 0.1109 / 0.035 = 3.169. This should be compared with the critical
value at the 5% significance level for a one-tailed test, 1.645. The value falls in the rejection
region; hence we accept the alternative hypothesis that concentration is positively associated
with profitability at the 5% significance level.
Do the same calculation for the other variables and compare your answers with those listed
on page 167.
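The remaining test statistics can be computed the same way; the coefficients and standard errors are those reported above, and you can compare the results with the answers on page 167.

```python
# t statistic for each slope coefficient: coefficient / standard error.
# Values are the 1990 estimates reported above; 1.645 is the 5% one-tailed
# critical value used in the text for CR.
coefs = {"CR": 0.1109, "ADINT": 1.6541, "GROW": 0.0928}
ses = {"CR": 0.035, "ADINT": 0.2691, "GROW": 0.0766}
t_critical = 1.645

for name, coef in coefs.items():
    t_stat = coef / ses[name]
    verdict = "significant" if t_stat > t_critical else "not significant"
    print(name, round(t_stat, 3), verdict)
```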
Q5.5 (page 169)
Exercise 5.5 concerns the 22 SIC categories in the range 200-299. The multiple regression
results when using growth, advertising intensity and concentration as explanatory variables to
predict PCM for 1987 were (standard errors of the slope coefficients in brackets):
PCM = 0.837 - 0.0967 CR + 0.0664 ADINT - 0.604 GROW
              (0.07506)    (0.4443)       (0.2834)
For each regression coefficient:
1. specify H0 and HA;
2. calculate the test statistic;
3. find the critical values and test, in turn at the 5% significance level, the hypothesis that the
coefficient could be zero;
4. comment on the results.
Try to solve this question and compare your answers with those listed on page 402.
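A sketch of step 2 for the three explanatory variables (the constant term is omitted because its standard error is not reported above); check the resulting conclusions against the answers on page 402.

```python
# Exercise 5.5: t statistic = coefficient / standard error for each
# explanatory variable, using the 1987 estimates reported above.
coefs = {"CR": -0.0967, "ADINT": 0.0664, "GROW": -0.604}
ses = {"CR": 0.07506, "ADINT": 0.4443, "GROW": 0.2834}

for name, coef in coefs.items():
    print(name, round(coef / ses[name], 3))
```

The critical values come from the t table with degrees of freedom n - k, here 22 - 4 = 18.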
Conclusion:
The primary focus of this long chapter is to give you an idea of how economists evaluate
theories using quantitative techniques. We hope that you now feel more confident about
critically interpreting the results of regression models. You should now be able to appreciate
the advantages and the difficulties involved in testing a theoretical model using econometric
techniques. Many skills are needed. The econometrician needs the economic theory to build
sound models and derive sensible predictions that can be tested. Hypothesis testing offers the
great advantage of measuring the level of significance that we attach to our findings. With
simple assumptions about the nature of the random error, we can derive useful tests that allow
us to decide, at a given significance level, whether the relationships we identify between
economic variables are due just to chance or reflect structural features of the economy. No
wonder econometric techniques are so widespread in the economics profession.