Economics of Regulation and Antitrust
-Concerned about what's efficient behavior in various contexts. Economic efficiency is the main framework for the
course.
CHAPTER 1
1)
What does the govt. do? Gov’t. regulates us (among other things), and these regulations may often increase
efficiency.
-First question to ask: what is the market and how has it failed? In other words, why do we need the govt. to get
involved? When markets work perfectly, we don't need gov't. intervention.
-Markets could fail because of negative externalities, transaction costs, etc.
-Equity- who gets what, how the pie is cut up.
2) We'll start out looking at antitrust- we're concerned when producers and not the market determine prices. Then
we'll look at economic regulation. Then we'll look at health and safety regulation.
3) Efficient regulation is something that maximizes net benefits to society.
-Gov’t. failure = just because markets fail doesn't mean govt. intervention is beneficial.
Chapter 1- Introduction
-One of govt.’s main roles is regulating the behavior of both firms and individuals within the context of
issues classified as antitrust and regulation. These regulations are pervasive throughout American society.
-If the world functioned in accord with perfect competition, there'd be little need for antitrust policies and other
regulatory efforts. All markets would have large numbers of producers, and consumers would have perfect info
about each product. There'd also be no externalities, and all effects would be internalized by buyers and sellers.
-In reality, many industries are dominated by a small number of firms, consumers may not have much info
about products, and there are many externalities (such as environmental problems).
-Gov’t. has two ways to address departures from perfect competition: 1) Price incentives- such as imposing a tax on
activities to decrease their attractiveness; or 2) control behavior directly- such as blocking mergers in antitrust that
threaten competition
-Antitrust Regulation-stimulated by a belief that consumers are vulnerable to the market power of monopolies.
Less concerned with monopolies now, but the intent of antitrust is to limit the role of market power that might result
from substantial concentration in a particular industry.
-Concentration can lead to efficiency losses, and product quality and diversity may also be damaged. Don't want
barriers to entry for potential competitors to be too high.
-Economic Regulation- In areas where natural monopolies have emerged, it may be more efficient to have a
monopolistic market structure. But there should be some regulation to make sure that the natural monopolies don't
charge excessive prices.
-The Interstate Commerce Commission (late 1800s) was first big device of economic regulation, created to regulate
railroads. FCC and SEC came in 1900s. They recognize that market concentration is inevitable and even desirable
in some circumstances. They do intend to place limits on the performance of the firms in this market in order to
limit the losses that might be inflicted.
-Very tough to set a rate structure that will provide efficient incentives for all parties. If rates are too low,
company won't stay in business, or produce shoddy product. One question is how to divide the firm's fixed costs
(plant, equipment) among consumers.
-Health, Safety, and Environmental Regulation- Newest form of regulation, came in 1970s with agencies such as
EPA, Consumer Product Safety Commission. Created for two reasons: 1) big externalities often result from
economic behavior; 2) it's hard to put dollar values on those externalities.
-Focuses mainly on risks in our environment, the workplace, and from the products we consume.
Criteria for Assessment- Ideally, the purpose of antitrust and regulation policies is to foster improvements judged in
efficiency terms. Move closer to perfect competition. Maximize the net benefits of the regulations to society (so
need to assess both costs and benefits).
-But private interests often make regulatory policies perform in a manner that economists wouldn't intend
in the real world. Regulations often result in transfers within particular groups in society. These transfers may be
inefficient. In this sense, "government failure" is as big a concern as "market failure".
Chapter 2- The Making of a Regulation
1) Where does regulation come from? Agencies draft them, and send them to OMB for review.
-Agencies aim at maximizing welfare, fulfilling their regulatory role.
-Agencies can't just run amok. They must work under inst'l constraints.
On paper, regulation and antitrust policies evolve in this way: A single national regulatory agency establishes the
govt. policy in an effort to maximize the nat’l. interest, where the legislative mandate of the agency defines its
specific responsibilities in fostering these interests.
-This isn't the case in reality. First, not all regulation is national in scope. Much regulation occurs at state and
local levels.
-Even when a nat’l. regulatory body is the source of the regs, it may not be fostering the nat’l. interest. Special
interest groups and lobbyists influence regs.
-The regulatory agency may not be the only govt. player. Congress and Pres. have their own agendas. Regulations
must fit within the agency's mandate established by Congress, and are subject to review by the courts. Other
agencies are within the exec branch, and their regs are subject to review by the Office of Management and Budget.
2) Issues: a) Federalism: should the regulation be set at the national or state level? State vs. Fed regs: The
Federalism Debate- much regulation is at the local level (drinking ages and cigarette smoking), and state level
(utility regs, insurance).
-States might know local conditions more than the nat'l gov’t. will.
-Also, the Tiebout model says that people will relocate to where the public goods
provided are in line with their preferences- people will move to areas where regulations fit their desires.
-Also, if there's heterogeneity in the costs or benefits of regulations, should let the states
decide the regulations.
-states are more capable of experimentation and innovation.
Some regs should be handled at the state level. OMB has said that "fed regs shouldn't preempt state laws or regs,
except to guarantee rights of nat’l. citizenship or to avoid burdens on interstate commerce."
-Advantages to this principle of federalism/local regulation:
(1) local conditions affect both the costs and benefits of reg. Preferences and economic considerations
vary locally.
(2) Citizens wishing a different mix of public goods can relocate (if you like gambling, move to Nevada).
People relocate to best match the local public policies and their preferences.
(3) local regs can reflect the heterogeneity of costs and benefits in a particular locale.
(4)States often embark on innovative regulatory policies. Experimentation at the local level is cheaper
than at the national level.
-Fed gov’t. might have more resources, be able to efficiently spread resources from state to state.
-Can better address issues that cross state lines.
-More sophisticated, so can address scientific issues better than states.
-Better able to deal with nationally marketed products.
-Individual states may not share national values- Civil rights.
-But there are also advantages to nat'l. regs:
(1) Nat. reg. agencies often have an informational advantage over local agencies. It would be hard for states
to match research of FDA.
(2) uniform nat’l. regs are generally more efficient for nationally marketed consumer products. Hard to
comply w/ 50 different regs.
(3) Many problems occur locally but have nat’l. ramifications.
(4) Certain policy outcomes are so important that all citizens should be guaranteed them. (such as civil
rights regs).
-Nat. regs tend to have preemptive effect, so fed has to be careful not to step on states' toes.
-Fed gives states an increased role in administration of fed regs. Does this because:
(1) states play role in filling gaps left by fed regs.
(2) recognition that there are legit differences between states.
(3) disappointment with performance of fed regs.
3) Example: food safety warnings. CA required products with carcinogens and reproductive poisons to be labeled.
Other states might follow, which would lead to 50 different labeling requirements.
-Such a state-based regulatory system would be expensive.
-Companies would have to comply with different labeling requirements,
-consumers might be confused,
-each state would have to set up a regulatory regime,
-transportation and warehousing costs for companies would increase
-The various expenses would lead to a 2% price increase.
-A national regulatory scheme would be less expensive. Cheaper for companies to comply with a national regulation
than 50 state regulations.
4) The process of making regulations: Agency decides it wants to regulate something. They send it to OMB for
approval, and they either approve or amend it.
-Then the agency prepares an RIA (regulatory impact analysis).
-There's 30-90 days for interested parties to make public comment on the proposed regulation.
-Then send the regulation to OMB and ask them for final ruling.
-Congress may get involved in the process too.
The Character of the Rule-making Process- Fed reg. agencies have substantial discretion, but don't have complete
leeway to set regs they enforce.
-One constraint is legislation. Regs by agencies must be consistent with their Legis mandate, or else they might be
overturned by courts.
-There are also procedures agencies must follow to institute regs.
Stages of Development: (1) agency decides to regulate a particular area of econ. activity. Must list it as part of its
reg. program if it will have a substantial cost. OMB can review the reg. program, in order to I.D. overlaps b/w
agencies, screen out particularly undesirable regs, and become aware of potentially controversial reg. policies.
(2) Prepare Regulatory Impact Analysis (RIA). Calculate benefits and costs, determine if there's a net
benefit. Also, must consider other potentially more desirable policy alternatives.
(3) Send RIA to OMB for review. OMB has 60 days to review it before the agency can issue a Notice of Proposed
Rule-making (NPRM) in the Federal Register. Generally, OMB approves the reg. in its current form, but can also
ask for changes, or reject it. This is generally a secret process, which allows the agency and OMB to alter their
positions without publicly admitting they were wrong. But it also breeds distrust of OMB and leaves Congress out
of it. Under Clinton, OMB has tried to be more open.
(4) If reg. is withdrawn, agency can attempt to circumvent OMB review by appealing to pres. or VP.
(5) After getting OMB approval, agency can publish NPRM in the Fed Register. It's now open to public
debate. 30-90 day period for public comment, usually from lobbyists.
(6) After processing comments, agency must put the reg. in its final form, finalizes its RIA and presents it
to OMB.
(7) OMB has one month to review and decide whether to approve the reg. Judicial and legislative
deadlines may make it even shorter. OMB can pass, return for amendment, or reject them.
(8) Most regs are approved by OMB. Still subject to judicial review.
5) Regulatory era started under Nixon. Ford set up Council on Wage and Price Stability, to measure costs of
regulations.
Carter administration set up performance standards for firms, Bubble Policy (instead of requiring every
smokestack to be clean, a plant was only held to an overall standard for its total output).
-Under Carter administration, looked at regulations using cost-effectiveness test (is it the cheapest way to
regulate? didn't look at whether the regulation itself was actually desirable), and least burdensome test (basically the
same thing). These were pretty weak tests. Also looked at performance standards (basically just interested in the
outcome, economists like these standards) vs. design standards (non-economists like these, easier to enforce, less
expensive- are compliance procedures in place?)
-Carter administration also used the Regulatory Analysis Review Group, made up of big-wigs in
the exec branch. They'd review controversial regulations.
-Before Reagan, there was no formal oversight process for regulations- the agencies could
basically do whatever they wanted.
Nature of the Regulatory Oversight Process- This form of issuing regs arose in 80s. In 70s, there was no executive
branch oversight. It became clear that some form of oversight beyond just Legis mandates and possible judicial
review was needed.
-Capture Theory says reg. agencies get captured by special interest groups, and serve the interests of that
group as opposed to the nat'l interest. So oversight is needed.
Nixon and Ford- Nixon intro'd exec oversight with "quality of life" review, focused on obtaining some
sense of costs and econ. implications of new regs. Formalized under EO 11821, required agencies to prepare
inflationary impact statements for regs.
-Ford intro'd the Council on Wage and Price Stability to administer this effort.
-Before this, agencies intro'd regs w/o investigating costs vs. benefits. However, even the CWPS could
issue no binding requirements as long as the agency assessed the costs.
-CWPS filed comments on proposed regs in public record, which was the basis for lobbying efforts.
Carter- First, issued EO 12044, which added cost-effectiveness test to the inflationary impact
requirement. Regs now had to be the least burdensome of the possible alternatives.
-In reality, this test only weeded out the most ill-conceived regs.
-Second, established Regulatory Analysis Review Group, included members from OMB, CWPS, Pres.’s
staff. Intended to bring the political pressure of a consensus body on the particular reg. agency. Didn't issue
binding reports, but had big political influence on agencies.
-Cotton Dust regulation incident showed that even when economic officials opposed a reg. on cost-benefit
grounds, political factors and economic consequences to special interest groups can still drive the policy decision.
Reagan- Reagan administration gave powers of Council on Wage and Price Stability to Office of
Management and Budget, and gave regulatory powers to Office of Information and Regulatory Affairs (OIRA).
-OMB has more powers than old CWPS.
-First, moved oversight function from CWPS to OMB. OMB has responsibility for budgets of agencies
and has big authority over them, so oversight process was given more clout.
-Second, increased stringency of tests imposed. Executive Order 12291- key provision is that a regulation
won't be undertaken unless there's a net benefit to society. Went beyond the cost-effectiveness requirement to a
full-blown benefit-cost test: the benefits of the reg. to society must outweigh its costs. If the benefit-cost test
conflicts with the agency's legislative mandate, as it does for environmental regs, the test isn't binding.
-Third, passed EO 12498, developed a formal regulatory planning process where the agencies would have
to clear a regulatory agenda w/ OMB. Called the Regulatory Calendar. Agencies had to tell OMB what policies
they were thinking about. Let OMB get involved in the process, prevent agencies from having overlapping
policies. It's been a failure, although it looked good on paper.
-OMB must approve all regulations under Reagan. Under Carter, OMB only commented.
Bush- Same as Reagan. Bush administration maintained the structure of Reagan. Regulations under
Carter, Reagan, and Bush basically cost the same.
Clinton- Clinton kept same process of Reagan and Bush.
-Executive Order 12866- made the regulatory process more user-friendly. It sounds like a more touchy-feely,
open process. OMB procedures were made more public. Estab'd principles for regulatory oversight similar to the
emphasis on benefits, costs, and benefit-cost analysis of previous administrations. It was less adversarial towards
the agencies than Bush and Reagan were.
-Also, smaller regulatory staff. Focus more on major regulatory initiatives, less small-scale stuff.
6) Criteria Applied in Oversight Process- Main criterion that's applied is cost effectiveness in terms of benefits net of
costs.
-Regulatory Success Stories- Rear window brake lights. DOT found net benefits.
-Unleaded gas also showed net benefits.
-Gov’t. promotes cost-effective regs by encouraging performance-oriented regulation. More concerned with ends
rather than means. However, performance-oriented alternatives may be more expensive to enforce (tougher to
inspect when firms have to meet certain performance guidelines but aren't given specific directions to follow).
-Regs are also subject to distortions of benefit and cost estimates. Regs focus on the upper limit of the potential
risk, at the expense of the major sources of accidents and illness that are more precisely understood.
-Agencies try to avoid regulating prices.
7) The Impact of the Oversight Process- Regulatory oversight focuses on producing better regs, not necessarily less
regs. But improving regs eliminates unproductive regs.
-Most important measure of reg. activity is the costs generated by the reg.
-In 1991, regs equaled 9.6% ($542 billion) of GDP. Biggest reg. is federal paperwork requirements ($189
bill).
-Currently (1991), OMB approves 63% of proposed regs w/o modification. (Was 87% in 1981)
- 27% are approved after changes are made at OMB's request.
- 3% are withdrawn by the agency
-1.1% are returned for consideration, and 2.7% are suspended.
8) What do Regulators Maximize?- In theory, reg. agencies maximize the nat'l interest subject to their legislative
mandates. OMB is motivated to maximize net benefits to society.
-Capture Theory (Stigler) says the reg. agency is captured by the economic interests that it serves. Government
regs foster industry interests. (E.g., airline regulation before deregulation kept fares high, protecting the airlines.)
9) What's the ideal set-up? One side says turn the regulatory agencies loose, OMB has too much power to veto
agencies. Other side says OMB is too limited because of numerous loopholes in regulatory process.
10) Specific Agencies- (a) OSHA (Occupational Safety and Health Administration)- aims at eliminating workplace
accidents. Mandate is to assure safe and healthy working conditions. Not a big impact on injury rates, in reality
(2-3% decrease). It's also very costly, may not be worth it.
-Benzene (1980) decision said OSHA must focus on big problems, not trivial problems. Focus on
"significant risk". But how big is "significant"? Court said a 1 in 1 billion risk of a fatal outcome from drinking
water is insignificant. That's dumb, though, because we drink lots of water.
-Cotton Dust case (1981) said feasibility requirements for OSHA regulations means "capable of being
done", no attention to cost-benefit analysis. If it's possible, it's feasible.
-Also Chevron case, reasonable interpretations O.K.
-In 1995, the House passed HR 1022.
-a) Overrides legislative mandates of regulatory agencies.
-b) Risk assessment.
-Scientifically objective, unbiased. Evaluate costs and benefits of all data.
-Provide a best estimate of risk, if they show the upper bound of risk, must show lower bound too.
-Do risk comparisons (if the benefits of reducing the risks are greater than the costs, clean it up), show the
exposed population.
-Substitution risks- talk about risks that will be created if one risk is avoided (if they outlaw saccharine,
what's the risk of obesity?)
c) Benefit-cost analysis- Show that benefits are greater than the cost (strong form)
-Viscusi says weak form is better, permit comparison of benefits and costs, but don't bind the
agency to only doing projects where benefits outweigh costs.
-Peer review is also allowed (basically a way to filibuster regs, peer review takes years)
-Judicial review is allowed too, which takes forever.
11)Handout (Types of Action Taken by OMB Oversight Process)- OMB isn't reviewing everything. Over time,
their decisions have gotten a little tougher.
12) Hicksian Compensation Principle- Gainers can "potentially" compensate losers.
-It's "potentially" because compensation is often not made. Gainers and losers are often from different
groups in society.
-This principle is important both for cost-benefit analysis of regulations and antitrust violations.
- See graph drawn on handout.
KEY: Basically, on the costs/benefits graph, we want to maximize the distance between the benefit curve and the
cost curve, where benefits are higher than costs.
-That happens where the slope of the benefits curve (AKA marginal benefits) = slope of costs curve (AKA
marginal cost).
(-As a footnote, a monopolist picks its scale where its marginal benefit = marginal cost)
-Beyond the point where marginal benefit = marginal cost, it takes more than one unit of cost to get one unit of
benefit, which makes it undesirable to get any further benefit. Better off not incurring the extra cost to get a
comparatively smaller extra benefit.
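The "maximize the gap between benefits and costs, which happens where marginal benefit = marginal cost" logic can be sketched in a few lines of Python. The benefit and cost schedules below are hypothetical, made up for illustration (they are not from the course):

```python
# Hypothetical schedules: B(q) = 100q - q^2, C(q) = 20q,
# so MB = 100 - 2q and MC = 20, which are equal at q = 40.
def benefit(q):
    return 100 * q - q ** 2

def cost(q):
    return 20 * q

# Grid-search the net benefit; the max should land where MB = MC.
q_star = max(range(0, 101), key=lambda q: benefit(q) - cost(q))
print(q_star)                           # 40
print(benefit(q_star) - cost(q_star))   # 1600
```

Past q = 40, each extra unit of benefit costs more than a unit to obtain, so net benefit falls, matching the point made above.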
13) Costs, 1998
-Environmental regulation = $199 billion/yr. (Note that just because it's very costly doesn't mean it's very bad, but
we do want to get the most bang per buck)
-Other social regulations = $68 bill/yr.
-Economic regulation = $77 bill/yr. (in efficiency costs).
-Economic regulation = $143 bill/yr. (transfer costs)
-Economic regulation = $229 bill/yr. (paperwork costs)
14) Regulatory Victories- biggest is unleaded gas- exposure to lead has fallen greatly.
-Success of an agency is often measured by how many pages it puts into the Code of Federal Regulations. That's
not a good measure, because regs can be written concisely or at great length and still suck. Reagan made it a goal
to reduce Federal Register pages- a dumb goal, worthless as an analytical tool.
15) Possible change in White House Regulatory Oversight Process- Justice Breyer said we need more rotation of
leadership in regulatory policies.
Chapter 3- Introduction to Antitrust
1) 3 aspects of Antitrust- structure (industry barriers and characteristics), conduct (policies), performance (how well
they work).
2) Main enforcement mechanism (93% of cases) is private lawsuits; most settle out of court.
-This chapter focuses on industries that aren't governed very much by gov’t. controls, and where competition is the
primary mechanism that society relies on to produce good economic results.
-Antitrust policy aims to create and maintain market environments that enhance competitive processes.
1) Industrial Organization Analysis- Industrial organization analysis seeks to analyze markets in a more practical
way than pure economic models.
-Focuses on structure (number of sellers, ease of entry), conduct (pricing policy, advertising), and
performance (efficiency, technical progress). Structure determines conduct, and performance is an evaluation of
the results of the conduct.
-Gov’t. policy (antitrust and regulation) can influence both structure and conduct.
-Concentration- measures the size distribution of sellers because it gives weight to the inequality of sizes, such as
by adding together the market shares of the top 4 firms in the market (the four-firm concentration ratio).
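A top-4 concentration measure is just a sum over sorted shares. A minimal sketch, with hypothetical market shares invented for illustration:

```python
# Hypothetical market shares (fractions of total industry sales).
shares = [0.30, 0.20, 0.15, 0.10, 0.10, 0.08, 0.07]

# Four-firm concentration ratio: sum the 4 largest shares.
cr4 = sum(sorted(shares, reverse=True)[:4])
print(round(cr4, 2))  # 0.75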
-Entry barriers- something that makes entry more costly or more difficult. They permit existing firms to charge
prices above the competitive level without attracting entry.
-Product differentiation- If consumers perceive there are real differences among the products in a market, the
competitive tactics of sellers may focus more on advertising and product design than if there are no such differences.
-In markets where the product is homogeneous (wheat, steel) price may be the primary basis for
competition.
2) Antitrust- Sherman Act is main antitrust statute, from 1890, a reaction to trusts.
-Sec 1 prohibits combos in restraint of trade/price-fixing, sec 2 prohibits monopolization.
-Clayton Act (1914)- Sec 7: mergers. Sec 2: price discrim.
-FTC Act (1914)- FTC investigated unfair methods of competition.
-Economists view the antitrust laws as ways to promote competition and economic efficiency. Both economics and
political concerns have influenced antitrust policy.
-This book assumes economic efficiency is the only goal of antitrust decisions.
Enforcement and Remedies- Antitrust laws can be enforced by govt. and by private actions for treble damages.
Private suit is main means of enforcement. Most end in settlement.
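The treble-damages remedy is a simple multiplication. A quick sketch, with invented overcharge and quantity figures (nothing here comes from an actual case):

```python
# Hypothetical case: plaintiff proves a $5 overcharge per unit on 10,000 units.
overcharge_per_unit = 5.0
units = 10_000

actual_damages = overcharge_per_unit * units  # 50,000 in actual damages
treble_damages = 3 * actual_damages           # antitrust awards triple the actual damages
print(treble_damages)  # 150000.0
```

The tripling is what makes private suits the dominant enforcement channel: it gives plaintiffs a strong incentive to sue.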
Exemptions: include labor unions, export cartels, agriculture co-ops, some joint ventures.
Chapter 4- Efficiency and Technical Progress
1) Two measures of performance: snapshot efficiency and dynamic efficiency.
-Efficient markets have desirable properties: if markets are fully competitive (in competitive equilibrium), they're
pareto optimal (that means you can't make anyone better off without making someone else worse off).
-Pareto optimality is a fairly weak test, but that's what we're going to run with because that's the big factor
in economics.
-Economic performance measures how well industries accomplish their economic tasks in society's interests. Two
dimensions of economic performance to be discussed here are efficiency (we assume that the technology is given)
and technical progress (we assume that resources are being allocated to developing new technologies for producing
old products more cheaply, and for producing completely new products).
1) Economic Efficiency- Competitive equilibrium- Perfect competition's key assumptions:
1) consumers are perfectly informed about all goods, all of which are private goods. This is seldom
satisfied.
2) Producers have production functions that rule out increasing returns to scale and technological change.
(Increasing returns to scale means, for example, that doubling inputs more than doubles output.)
3) Consumers seek to maximize their preferences given budget constraints; producers seek to maximize
profits given their production functions.
4) All agents are price takers, and externalities among agents are ruled out. no externalities
5) A competitive equilibrium, that is, a set of prices such that all markets clear, is then determined.
-Competitive equilibrium is pareto optimal: it can't be replaced by another one that would increase the welfare of
some consumers w/o harming others.
-Monopolies are bad because they influence price.
-An important property of equilibrium is that price equals marginal cost in all markets.
-Perfectly competitive world doesn't need gov't intervention in the market.
-Key assumption is the price taking assumption (#4). That is, antitrust economics is concerned with causes and
consequences of firms' abilities to set price above marginal cost.
-Partial Equilibrium Welfare Tools-
a) The Pareto criterion is one tool for evaluating the effect of a policy change. So, if everyone is made better off
by the change, or if no one is made worse off, the Pareto criterion would say that the change is good. In reality, at
least some people will be harmed by almost any change.
b) The Compensation Principle is an alternative standard, choosing policies that yield the highest total economic
surplus. If the winners from any policy change can compensate the losers, then it's a good change. (But
compensation isn't actually required).
Supply curves (called "S")- Price is y-axis (vert), quantity is x-axis (horiz). Supply curves tell us, at any given
price, the quantity of the product firms will sell at that price.
-E.g.- below a $10,000 price, no firm will produce a car. As price increases, the quantity supplied increases.
The supply curve slopes upward to show that at a higher price, firms will sell a higher quantity.
Demand Curve ("D")- shows the quantity of the product consumers are willing to buy at any price.
-Again, price is y-axis, quantity is x-axis.
-As price decreases, quantity demanded increases. D slopes down to show this. At $10,000, many people will
buy a car; fewer will buy at $30,000.
Demand curve slopes down, supply curve slopes up, intersection (where S=D) is the competitive equilibrium.
Intersection of S and D- Where the S and D curves intersect on the graph, that is the price and quantity of the cars
that will be sold. The price = P* and the quantity = Q*. The intersection is the point where both consumers and
producers are happy, the only price and quantity combination where both parties are happy.
-This is called market equilibrium.
-On the DS graph, the individual firms' supply curves are their marginal cost curves. The industry's supply curve is
the industry's marginal cost curve.
-The area under the marginal cost curve is the sum of the incremental costs for all units of output, and thus
equals total cost.
-The demand curve is the schedule of marginal willingness to pay by customers. The area under this schedule is
total willingness to pay.
-Difference between total willingness to pay and total cost is the total surplus, which is divided into
consumer surplus and producer surplus.
-Consumer surplus is the total willingness to pay minus what the customers must actually pay. On the D
and S graph, the consumer surplus is the net welfare gain to consumers in dollars.
-It's the spread between the price they were willing to pay, and the price they had to pay (the actual price
for the good). On the graph, it's the triangle enclosed by the D curve and the prices above the equilibrium price.
-Producer surplus is equal to the profit of the firms in the industry- the difference between revenues and costs.
Producer surplus is the spread between how much producers are willing to sell for and how much they actually sell for.
-Maximizing total surplus is equivalent to maximizing the sum of consumer and producer surplus. It's also
equivalent to selecting the output level at which price equals marginal cost. (On the graph, producer surplus is the
triangle enclosed above the S curve up to the market price.)
-Ultimately, consumers are better off b/c they spent less than they were prepared to pay, and producers are better off
because they sold for more than they were prepared to sell for.
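To make the surplus triangles concrete, here's a small Python sketch. The linear curves (demand P = 100 - Q, supply P = Q) are assumptions chosen for illustration, not from the notes:

```python
# Hypothetical linear curves: demand P = 100 - Q, supply P = Q.
# They cross at the competitive equilibrium Q* = 50, P* = 50.
q_star, p_star = 50, 50

# Consumer surplus: triangle between the demand curve and the price line.
cs = 0.5 * q_star * (100 - p_star)   # 0.5 * 50 * 50 = 1250
# Producer surplus: triangle between the price line and the supply curve.
ps = 0.5 * q_star * (p_star - 0)     # 0.5 * 50 * 50 = 1250
print(cs, ps, cs + ps)  # 1250.0 1250.0 2500.0
```

The total surplus of 2500 is the whole triangle between D and S up to Q*, which is what competitive equilibrium maximizes.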
Cartel restricting output--If a cartel agrees to restrict output below the competitive level, it results in a deadweight
loss. This is also referred to as the social cost of monopoly or an efficiency loss. When producers form cartel to
restrict output, they can sell for a higher price. Leads to larger producer surplus, and smaller consumer surplus.
-Many consumers are squeezed. People who aren't willing to pay the cartel price are squeezed out of the market.
-Note that the transfer from consumers to producers isn't itself the inefficiency; the efficiency loss is the
deadweight loss from the sales that no longer occur.
-Is society better off or worse off because of this?
-Producers make more, consumers are worse off, all the consumers shut out of the market are lost. There's
an efficiency loss (the deadweight loss), and an equity loss (consumers surplus shrinks a lot).
-These sorts of diagrams will be used throughout the Antitrust section, similar principles will be used.
-Note that this basic diagram is figure 4.1 in the textbook.
-Monopoly versus Competition Example- A policy to replace a monopoly with competition will increase the total
surplus and the consumer surplus, but at the expense of the producer surplus.
Monopoly- how does a monopolist behave? Like a cartel. How does it calculate how much to produce?
-In monopoly, D curve slopes down, and S curve is straight horizontal line (AC = MC). Again, where S and D
intersect is P and Q that would be chosen if the firm was acting under competition. Under monopoly, monopolist
chooses to supply less than socially optimal amount. Leads to smaller consumer surplus, larger producers surplus,
and deadweight loss.
-To maintain the monopoly, the monopolist would be willing to pay (in lobbying) up to the total of the producer
surplus in order to keep its monopoly (because the producer surplus, the gap between price and marginal cost on the
units sold, is the amount of profit the monopolist takes in from the monopoly).
-A monopolist is generally one firm, the single seller of the good. (for this class, it will always be one firm)
-In figure 4.2, AC (average cost of production) = MC (marginal cost of production, i.e., the cost of producing the last
unit) = 20 (meaning there's a constant production cost per unit).
-Where the demand curve intersects the S curve (the straight line at P = 20), that's the quantity that would be produced
if the firm were acting competitively, because price equals marginal cost there.
-However, the firm won't want to sell at that price, because its goal is to maximize profits, not reach the efficient result.
-The firm will restrict output to sell for a higher price. It will choose the output at which marginal revenue (MR, the
extra revenue the firm gets from selling an extra unit) equals marginal cost. (Note: an extra unit cranks up quantity but
decreases price.)
-MR (marginal benefit to firm of selling unit) = MC (marginal cost of extra unit)
-Where MR = MC is the max. profit for the monopolist.
-To find the price a monopolist will choose, draw a vertical line straight up from the point where MR=MC up to the
demand curve. The resulting price will be the monopoly price.
Numerical example (about monopolies)
-Demand: Q = 100 - P, or P = 100 - Q
-Total revenue = P x Q = (100 - Q)Q = 100Q - Q^2
-Marginal revenue = 100 - 2Q
--MC = AC = 20
Possible questions:
(A) Find quantity level selected by monopolist
-look where MR = MC. Plug in the equations.
-- MR = 100- 2Q, MC = 20
-- 100 - 2Q = 20, 2Q = 80, Q = 40
(B) Find price chosen by monopolist
--P= 100- Q, Q = 40 (from above answer), P = 60
(C) What would have happened if we had pure competition instead of a monopolist?
-- Look where D curve = MC
MC =20, 20 = 100 - Q, Q =80
P = 100 - Q = 100 - 80 = 20
(D) What's the consumer surplus under perfect competition?
--Look to the triangles: CS (competition) = (1/2)(100 - 20)(80) = 3,200
-Also, consumer surplus under monopoly: CS (monopoly) = (1/2)(100 - 60)(40) = 800
-Producer surplus under monopoly: PS (monopoly) = (60 - 20)(40) = 1,600
-Deadweight loss: DWL = (1/2)(60 - 20)(80 - 40) = 800
(E) How to check answers?
-CS (competition) = CS (monopoly) + PS (monopoly) + DWL: 3,200 = 800 + 1,600 + 800
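The chapter-4 arithmetic above can be double-checked with a quick sketch (the demand curve and cost numbers are the ones from the example; everything else is just bookkeeping):

```python
# Linear demand P = 100 - Q, constant MC = AC = 20. Computes the monopoly
# and competitive outcomes and the surplus triangles, then checks
# CS(competition) = CS(monopoly) + PS(monopoly) + DWL.

a, mc = 100, 20          # demand intercept and marginal (= average) cost

# Monopoly: set MR = MC, where MR = 100 - 2Q for this demand curve.
q_m = (a - mc) / 2       # 100 - 2Q = 20  ->  Q = 40
p_m = a - q_m            # P = 60

# Competition: price equals marginal cost.
p_c = mc                 # P = 20
q_c = a - p_c            # Q = 80

cs_comp = 0.5 * (a - p_c) * q_c           # triangle under demand, above price
cs_mono = 0.5 * (a - p_m) * q_m
ps_mono = (p_m - mc) * q_m                # rectangle: markup times quantity
dwl     = 0.5 * (p_m - p_c) * (q_c - q_m)

print(q_m, p_m, q_c, p_c)                  # 40.0 60.0 80 20
print(cs_comp, cs_mono, ps_mono, dwl)      # 3200.0 800.0 1600.0 800.0
print(cs_comp == cs_mono + ps_mono + dwl)  # True
```

Running it confirms the check in (E): the competitive consumer surplus of 3,200 splits into monopoly CS (800), monopoly PS (1,600), and DWL (800).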
-Oil Industry Application- Should the gov't deregulate energy prices? Regulation led to efficiency losses. Dereg
would lead to higher prices, but economists (Arrow and Kalt) thought there'd still be a net gain (efficiency gains
would outweigh the higher prices). Arrow and Kalt (1979) evaluated the benefits and costs of removing oil price
controls in the US. The US gov't had been holding oil prices low in order to fight inflation. This resulted in
efficiency losses of $2.5 billion a year. It's up to politicians to decide if the welfare transfer offsets the
efficiency loss.
-Arrow and Kalt said undoing oil price controls would mean higher prices for consumers and higher profits for
producers, a politically bad transfer.
-The transfer from consumers to producers would equal $2.8 billion. A dollar transferred from consumers to
producers would lose about half its value (making it equal $1.4 billion). So, the efficiency gain of $2.5 billion
exceeded the equity cost of $1.4 billion, and oil price decontrol was in the public interest. (The controls created
the deadweight loss; decontrol creates the equity loss for consumers.)
-Let's say producers gained $3 bill, consumers lost $1 bill.
-Need to figure out how much to weigh the value of a dollar to consumers against the value of a dollar to
producers. It basically depends on societal preferences: do we care more about losses to consumers than about gains
to producers?
-Estimates of Welfare Loss from Monopoly
Deadweight loss in our economy- Harberger said DWL = 1/2 (P* -Pc)(Qc - Q*)
P* = firm's price, Pc = competitive price, Qc = competitive quantity, Q* = monopoly quant.
-As an aside, DWL = (1/2)(n)(d^2)P*Q*, where n = elasticity of demand (a.k.a. % change in Q divided by %
change in P), and d = price-cost margin. (This is the answer to question #3, the implications are about price cost
margin and elasticity of demand)
-DWL increases with Price-cost margin
-DWL increases with elasticity of demand
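For linear demand, Harberger's elasticity formula and the plain DWL triangle agree exactly. A sketch using the monopoly example from earlier in the notes (P = 100 - Q, MC = 20; evaluating the elasticity at the monopoly point is an assumption of this sketch):

```python
# Two ways to measure deadweight loss: the direct triangle
# (1/2)(P* - Pc)(Qc - Q*) and Harberger's DWL = (1/2) n d^2 P* Q*,
# with n = elasticity of demand at the firm's price and
# d = price-cost margin (P* - MC)/P*.

p_star, q_star = 60, 40       # monopoly price and quantity
p_c, q_c = 20, 80             # competitive price and quantity

triangle = 0.5 * (p_star - p_c) * (q_c - q_star)   # 800.0

n = 1 * (p_star / q_star)     # |dQ/dP| * P/Q for Q = 100 - P  ->  1.5
d = (p_star - p_c) / p_star   # price-cost margin = 2/3 (here MC = p_c)
harberger = 0.5 * n * d ** 2 * p_star * q_star

print(triangle, harberger)    # 800.0 and ~800: they agree for linear demand
```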
-When it comes to smoking, elasticity of adults ranges from -.4 to -.7.
-One school says teens have same elasticity as adults. (if that's true, cranking up the price will mainly hurt
poor people and won't deter kids from smoking)
-Another school says teens are more responsive to changes in price. (if that's true, it's worth it to stick it to
the poor because the higher price deters teens from smoking).
-The main idea for Harberger is that DWL is 0.5% of GNP. It seems a little low because it's based on data from the
1920s, there are sources of welfare loss from monopoly other than DWL, later researchers have found higher values of
DWL, and the economy is very different now.
-Cowling and Mueller said deadweight welfare loss created by a firm is approximately equal to half of its profits.
They said DWL equaled about 4% of GNP. If you include wasted resources such as advertising, the measure jumps
to 13% of GNP. (this equals 4% (1/2 profits) plus 9% (ads) = 13% of GNP). Their estimate seems a little high.
2) Technical Progress- Solow (1957) found that output per worker-hour rises (at a decreasing rate) with the amount
of capital per worker-hour. He said you can produce more output per worker by putting capital back into the business.
-Scherer and Ross review the conflicting incentives that market structure provides for innovation:
(1) more rivals tend to stimulate more rapid innovation in order to be first with a new product and benefit
from the disproportionate rewards of being first, and
(2) more rivals split the potential benefits into more parts, making each firm's share less.
-So, a large number of rivals may not always produce better results for society.
Be able to talk about whether big firms or little firms are better for innovation.
-The good thing about increasing the number of rivals is that it increases the amount of competition, more firms will
want to be first, so innovation will be helped.
-If you can't get exclusive rights to your innovation and people copy you, that will decrease the incentives for
innovation, because innovators will get less profit out of their innovations.
Problem #2 from chapter 4- see notes.
Chapter 5- Oligopoly, Collusion, and Antitrust
1) Intro.- Sherman #1 mainly covers conspiracies to fix prices or share markets. Trace the evolution of legal rules
covering price fixing.
-Oligopoly- a market structure with a small number of sellers, small enough to require each seller to take into
account its rivals' current actions and likely future responses to its actions.
- In oligopoly (a few dominant firms instead of one), the oligopoly price is higher than marginal cost. In
competition, the price = marginal cost.
-An important assumption in this chapter is that potential entry isn't a problem. We'll assume the number of active
firms is fixed. We'll focus on the internal industry problems of firms reaching an equilibrium when the only
competition comes from existing firms.
2) Game Theory- Example 1: Advertising Competition (Prisoner's Dilemma in advertising)- the main goal of ads is to take
sales away from your competitors. Assume a duopoly where firms don't compete on price b/c of collusion or regulation.
Price = 15, q demanded = 100, cost = 5/unit. Profit = 10/unit. Firms do compete through advertising. They can advertise
at a low rate ($100) or a high rate ($200). A firm's market share depends on how much it advertises relative to its
competitor. If they advertise an equal amount, they each sell 50 units. If one advertises high and the other
advertises low, the high-advertising firm sells 75 units (and the low-advertising firm sells 25).
Payoff matrix (firm 1's net profit, firm 2's net profit):
                          Low ads (firm 2)     High ads (firm 2)
Low ads (firm 1)          400, 400             150, 550
High ads (firm 1)         550, 150             300, 300
-If both advertise low, each firm makes net profit of 400 (500 profit on sales-100 for ads)
-If one ads low and one ads high, the low firm gets 150 net profit, and high firm gets 550 net profit.
-If both ad high, they both get net profit of 300.
-If each firm decides simultaneously how much to advertise, how much should each firm advertise to max. its
profits?
-If firm 2 ads low, firm 1 gets 400 for low, 550 for high.
-If firm 2 ads high, firm 1 gets 150 from low, 300 from high.
-Thus, firm 1 gets more money for high ads.
-Firms will choose to ad high. Joint profits won't be max'd though. If they both ad high, joint profits are
600, and if they both ad low, joint profits are 800.
-Both firms will be best off if both use low ads.
-Each firm has a dominant strategy (whichever action the other party chooses, its best response is high ads).
-Firm 2's dominant strategy is high ads
-Firm 1's dominant strategy is high ads
-This is a stable equilibrium, even though it's a non-optimal outcome- a Pareto-inferior equilibrium, since we could
make both parties better off.
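The dominant-strategy argument above can be sketched as a small program (payoff numbers from the notes; the `best_reply` helper is just for illustration):

```python
# The advertising game in strategic form. Payoffs (firm 1, firm 2) use the
# numbers from the notes: both low -> 400 each; one high, one low -> 550
# for the high advertiser and 150 for the low one; both high -> 300 each.
from itertools import product

payoff = {  # (firm1_action, firm2_action) -> (firm1_profit, firm2_profit)
    ("low", "low"):   (400, 400),
    ("low", "high"):  (150, 550),
    ("high", "low"):  (550, 150),
    ("high", "high"): (300, 300),
}
actions = ["low", "high"]

def best_reply(player, other_action):
    """Action maximizing this player's payoff, given the rival's action."""
    if player == 1:
        return max(actions, key=lambda a: payoff[(a, other_action)][0])
    return max(actions, key=lambda a: payoff[(other_action, a)][1])

# A profile is a Nash equilibrium if each action is a best reply to the other.
nash = [(a1, a2) for a1, a2 in product(actions, actions)
        if best_reply(1, a2) == a1 and best_reply(2, a1) == a2]
print(nash)                      # [('high', 'high')]
print(payoff[("high", "high")])  # (300, 300), Pareto-inferior to (400, 400)
```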
Example 2: Compatibility of Standards
-Suppose firm 1 supplies VCRs and firm 2 supplies video tapes. Each firm's product can have either Beta or VHS
format.
-Firm 1's cost of producing VHS VCRs is slightly less than producing Beta VCRs, and firm 2's cost of producing
Beta tapes is slightly less than producing a VHS tape.
-These two firms are the sole exporters of these products to a country that currently has no VCRs or video tapes.
Each firm can only produce in one format. Consumers don't care about format, as long as tapes and VCRs have
same format.
-If both use VHS, firm 1 earns 500, firm 2 earns 200.
-If both use Beta, firm 1 earns 400, firm 2 earns 250.
-If they supply different formats, each earns zero.
-Here, the decisions of the two firms about which format to use are interdependent.
-The Strategic Form of a Game: In game theory, a game is a well-defined object. Game theory is a tool designed
for investigating the behavior of rational agents in settings for which each agent's best action depends upon what
other agents are expected to do. This makes game theory useful for investigating firm behavior in oligopolies.
-First step is to define what the relevant game is. Three elements: 1) a list of agents who are making decisions, 2) a
list of possible decisions that each agent can make, and 3) a description of the way in which each agent evaluates
different possible outcomes.
-Decision-making agents are players. Decisions of players are strategies. A player's strategy set is the list of
strategies available to him in the setting being modeled. A player's payoff function describes how he evaluates
different strategies.
-In other words, given the strategies chosen by all players, a player's payoff function tells him his state of
well-being (or welfare or utility) from players having played those strategies.
-Nash Equilibrium - each party picks its decision taking the decision of other party as given.
-For example, if firm #1 picks low ads, firm #2 would be better off picking high ads.
-If firm #1 picks high ads, firm #2 still picks high ads.
-The Nash equilibrium in this example is where both parties pick high ads, because it's where no party has
incentive to change its individual decision. Collectively, they'd prefer both using low ads, but individually they'd
each choose high ads.
-The Nash equilibrium here is pareto inferior because we can make all parties better off (by both choosing
low ads).
-Nash Equilibrium: each firm i picks its output Qi taking as given Qj of some other firm j.
(**)-Note that for the test we'll be given the reply functions.
- Game theory is used to recommend to players how they should play or to make predictions as to how they will
play. We assume players choose their strategies simultaneously at the beginning of the game.
-Assuming players are rational, they choose the strategy that gives them the highest payoff, the profit-maximizing
strategy.
-A list of strategies is a Nash Equilibrium if each player's strategy maximizes his payoff given the
strategies chosen by other players, and if this condition holds simultaneously for all players.
3) Oligopoly Theory
-An oligopoly is an industry with a small number of sellers. The criterion is whether firms take into account their
rivals' actions in deciding upon their own actions. The essence of oligopoly is recognized interdependence among
firms.
p. 102 -The Cournot Solution- It's a model of an oligopoly: two firms each trying to pick the output that results in a
Nash equilibrium, based on the other firm's reply function.
-Example:
Reply functions: Firm 1: Q1 = 30 - .5Q2; Firm 2: Q2 = 30 - .5Q1
-Graph these reply functions, Q2 vs. Q1 (see notes).
-The intersection of the two curves is the Nash equilibrium.
--Firm 1's reply, rearranged: Q2 = 60 - 2Q1
--Firm 2's reply: Q2 = 30 - .5Q1
--Set them equal: 60 - 2Q1 = 30 - .5Q1, so 1.5Q1 = 30, Q1 = 20, and Q2 = 20
- Monopoly Price > Cournot Price > Competition Price
-- (P- MC)/P (the price-cost margin) decreases as the number of firms increases.
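A minimal sketch of solving the reply functions (same numbers as above):

```python
# Solving the pair of reply functions Q1 = 30 - 0.5*Q2 and
# Q2 = 30 - 0.5*Q1. Their intersection is the Nash equilibrium.

# Substitute firm 2's reply into firm 1's:
# Q1 = 30 - 0.5*(30 - 0.5*Q1) = 15 + 0.25*Q1, so 0.75*Q1 = 15.
q1 = 15 / 0.75
q2 = 30 - 0.5 * q1
print(q1, q2)   # 20.0 20.0

# Check: at (20, 20) each firm is on its own reply function.
print(q1 == 30 - 0.5 * q2, q2 == 30 - 0.5 * q1)   # True True
```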
Another example: MC = 40, P = 100- Q. Marginal revenue curve of a monopolist is MR = 100 - 2Q. A
monopolist maximizes its profit by setting Q to equate marginal revenue and marginal cost. This results in a Q of
30, and a P of 70, and monopoly profit = 900.
-Now we assume there are two firms, each with MC = 40. The distinguishing features of the Cournot model are that
firms choose quantity (rather than price) and do so simultaneously. If Q1 and Q2 are the outputs of firms 1 and 2,
P = 100 - Q1 - Q2.
-If we interpret the Cournot model in game theory terms, the set of players is firms 1 and 2, and the strategy
of a firm is its quantity. A firm's payoff is its profit.
P1 = (100 - Q1 - Q2)Q1 - 40Q1
P2 = (100 - Q1 - Q2)Q2 - 40Q2
-Here, the profits of each individual firm depend on the output of both firms.
-We need to find a Nash equilibrium, a quantity for each firm that results in each maximizing profits given the
quantity of its competitor.
-Firm 1 wants a quantity that maximizes P1, taking into account the anticipated quantity of firm 2.
-Going through all possible Q2s, the value of Q1 that maximizes P1 is Q1 = 30 -.5(Q2).
-That equation is firm 1's best reply function b/c it gives the value of Q1 that is firm 1's best reply to firm
2's output.
-Best reply function for firm 2 is Q2 = 30 - .5Q1
-Firm 1's best reply function is downward sloping, because firm 1 produces less the more firm 2 produces.
- A Nash equilibrium is defined by a pair of quantities such that both firms are simultaneously on their best reply
functions (shown in figure 5.4 on p. 107). This is good because no firm has an incentive to change its output given
what its competitor is doing.
p. 106 (chap 5, question #3)- "What is the relationship between the monopoly price, the Cournot price, and the
competitive price?" The price in the Cournot solution exceeds the competitive price (which equals unit cost) but is
less than the monopoly price. The Cournot price is higher than marginal cost because firms don't act as price
takers. In the Cournot setting, the firms know that the more they produce, the lower is the market price. As a
result, each firm supplies less than it would if it were a price taker, which results in the Cournot price
exceeding the competitive price.
-The Cournot price is less than the monopoly price because each firm cares only about its own profits and
not industry profits. As a result, in maximizing one's own profit, each firm produces too much from the perspective
of maximizing industry profit. Hence the monopoly price (which is also the joint profit maximizing price under
constant marginal cost) exceeds the Cournot price.
-Note that both firms could raise their joint profits in the Cournot setting if they agreed to lower their output
together. Of course, this is another prisoner's dilemma, because they can't hold the other side to the agreement, and
if the second firm lowered output, the first firm would have incentive to raise his output and increase his personal
profit at the expense of the second firm's profit.
-(p. 108) Note that: [(P - MC)/P] = 1/(N x eta), where N = number of firms, and eta = elasticity of market demand. The
Cournot solution predicts that the price-cost margin is inversely related to the number of firms and the elasticity of
market demand. The elasticity of demand measures how responsive demand is to a change in price. According to
this formula, as the number of firms increases, the right-hand side shrinks, which implies that the
price-cost margin shrinks.
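The MC = 40 example can be checked end to end: (20, 20) is a Nash equilibrium, the price ordering holds, and the 1/(N x eta) margin formula comes out right. A sketch (the brute-force deviation check over integer quantities is just for illustration):

```python
# Two-firm Cournot with P = 100 - Q1 - Q2 and MC = 40 for each firm.

mc = 40

def profit(q_own, q_other):
    # Profit = price * own quantity - cost of own quantity
    return (100 - q_own - q_other) * q_own - mc * q_own

q_star = 30 - 0.5 * 20    # best reply to an output of 20 is 20
best_dev = max(profit(q, 20) for q in range(0, 61))  # try all integer deviations
print(q_star, profit(20, 20), best_dev)   # 20.0 400 400 -> no better deviation

p_cournot = 100 - 20 - 20        # 60
p_monopoly, p_comp = 70, 40      # from the monopoly problem; competition: P = MC
print(p_monopoly > p_cournot > p_comp)    # True

# Elasticity of Q = 100 - P at P = 60, Q = 40 is P/Q = 1.5.
margin = (p_cournot - mc) / p_cournot
print(abs(margin - 1 / (2 * 1.5)) < 1e-12)   # True: margin = 1/(N*eta) = 1/3
```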
4) Cartel/Collusion Problems- One of the most difficult problems a cartel faces is in reaching agreement on what
price to set and how demand should be allocated among cartel members. Any price between the non-collusive
equilibrium price (like the Cournot price) and joint-profit maximizing price yields higher profits than not
colluding, but it's illegal for firms to actually discuss what price to set. They have to coordinate w/o overt
communication.
a) -One method of setting price is price leadership. The price leader might be the largest, or lowest-cost
firm. It openly announces its intention to change its price, and the other firms normally follow with similar price
changes. Leader has to assess which price would be acceptable to its rivals, otherwise rivals might not follow.
b)-Another method is mark-up pricing rules, where all firms in an industry become accustomed to
calculating prices with the same formula.
c)-Another method is basing point pricing system, especially in industries with large freight costs and
spread out consumers. Use one city as the base for calculating freight costs to all other cities, even if a firm isn't
operating from the base city.
-Where firms have different cost functions for identical products, the higher cost firm must set the same price as the
lower cost firm or else no one would buy from it. If firms have different cost functions, there is a bargaining
problem about what price to set that makes the usual coordination problems even worse.
-Coordination becomes more difficult as the number of firms increases. Also, tough to tell if someone's cheating
when you can't accurately judge a rival's output.
5) Collusion: Railroads in the 1880s (chap 5, problem #4)
(Figure 5.8, the saw-toothed looking thing)- Shows the changes in the grain rate over time. When there are
breakdowns in collusive behavior, price drops; then the firms realize that sucks, so they collude and increase prices
again.
-Note that it's hard to collude when there's a larger number of firms. It's also hard to check when firms
cheat.
Cartels were legal in the US before 1890, so in 1879 the RRs formed a cartel to stabilize prices, creating the Joint
Executive Committee (JEC) to set rail rates.
-Porter modeled the set-up with two key assumptions: 1) punishment for cheating is reversion to the Cournot solution (a
breakdown of collusion) for a limited amount of time, and 2) the JEC could only imperfectly monitor the firms' actions.
-Because of imperfect monitoring, one would expect periodic reversions to the Cournot solution: periods of
collusion and high prices, and periods where collusion breaks down and firms revert to the Cournot price.
-Porter found two periods, one where price was high and one where price was low. In the low price
periods, collusion appears to have broken down. Collusion was intermixed with periodic breakdowns, resulting in
intense competition and lower prices.
6) Antitrust law toward price fixing- Sherman Act sec 1 covers combos in restraint of trade. Two tests:
-Per se illegal: when a practice can have no beneficial effects but only harmful effects, the "inherent
nature" of the practice is injuriously restraining trade. Only have to prove that the behavior existed, and there's no
allowable defense.
Economic Analysis of Mergers:
a) Bork's view- the good things that come from mergers are eliminating overlapping departments in the two
companies (which increases efficiency) and more efficient R&D. But mergers can also lead to higher prices.
-Rule of Reason- the question should be whether the efficiency gains from the merger exceed the losses (see graph in
notes). This view ignores the transfer to the company (in the form of profits) from the consumers. Look to
inherent effect (market share of parties involved) and evident purpose (intent).
-(chap 5, question #5) These categories are consistent with economic analysis. See Figure 5.9 on page 124.
Merger leads to a price increase, a cost reduction, and an output reduction. The merger results in a DWL because of
the higher price and lower output (area A1), but also results in a benefit to society because of lower costs and
efficiency gains (area A2).
-Not all mergers produce both gains and losses. Some produce just gains, some just losses. Courts need
to investigate and weigh costs and benefits, rather than just declaring mergers illegal per se.
-Ideally, courts will declare mergers with only benefits legal, and mergers with only costs illegal.
-Where merger has both benefits and costs, courts don't follow net benefit approach. If there's any damage
to competition (shown by the existence of Area A1), the merger will be declared illegal regardless of benefits.
-Note that cartels (unlike mergers) only lead to area A1 losses, and cost savings are quite unlikely. So,
cartels should be dealt with in per se manner.
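The A1-versus-A2 comparison can be sketched with hypothetical numbers (linear demand P = 100 - Q is assumed; none of these figures are from the textbook):

```python
# Williamson-style merger tradeoff (the Figure 5.9 logic). The merger
# raises price from P0 to P1 and lowers unit cost from C0 to C1. A1 is
# the DWL triangle from lost output; A2 is the cost saving on the
# output still produced. The net welfare effect is A2 - A1.

p0, c0 = 60, 60     # pre-merger: price equals unit cost
p1, c1 = 70, 50     # post-merger: higher price, lower unit cost

q0 = 100 - p0       # output before the merger: 40
q1 = 100 - p1       # output after the merger: 30

a1 = 0.5 * (p1 - p0) * (q0 - q1)   # DWL triangle: 0.5 * 10 * 10 = 50
a2 = (c0 - c1) * q1                # cost savings: 10 * 30 = 300

print(a1, a2, a2 - a1)   # 50.0 300 250.0: a net gain, though a court
                         # blocking any merger with A1 > 0 would stop it
```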
Chapter 6: Market Structure and Strategic Competition
1) Intro.- Two key sources of competition in markets: existing firms and potential entrants. Chapter 5 looked at
behavior of existing firms. This chapter extends analysis in two ways: consider the determinants of the number of
sellers (scale economies and entry conditions), and consider the role of potential competition (the effect that the
threat of entry has on price-cost margin).
2) Market Structure- Two key elements of market structure are concentration and entry conditions.
Determining whether or not the merger should go through- DOJ actually calculates HHI.
a) Concentration- Firms are quite heterogeneous in reality, in that they have different products and cost functions.
This results in their having different market shares. So, a count of the number of firms can be a misleading measure
of the degree of concentration.
-Need to develop a statistic that measures the concentration of an industry: it should measure the ability of firms to
raise prices above the competitive level. But such a statistic can't fully assess the competitiveness of a particular
industry because it doesn't measure potential competition.
-Definition of the market- To measure concentration, first need to define the limits of the market. Which products
and sellers should be included?
-Economists say the ideal market definition must take into account substitution possibilities in both
consumption and production.
-Also a problem of where the market stops and potential entry begins.
-We want a measure of how concentrated the industry is, a measure with some worthwhile economic content.
Concentration ratio- how concentrated is the industry? The most widely used measure of concentration is the
concentration ratio. The higher an industry scores on this concentration measure, the higher its price-cost margin
should be, and the more collusion we'd expect to see.
-First: need to define the market- is it cellophane, or all wrapping materials?
-Stigler's model of the market is the one we will use. A single industry is all products with a strong cross elasticity
of supply or demand. (Long-run substitution).
-Cross elasticity of demand is [the % change in demand for good i] / [% change in price of good j]
-Need to decide whether the measure of concentration should or shouldn't include potential entrants. Do we
include potential entrants in the market?
-Scherer said only existing capacity in the short-run is counted as substitutes.
-Second: Define concentration measure.
-The m-firm concentration ratio: the share of total industry sales accounted for by the m largest firms.
Firm    Industry X    Industry Y
1       20            60
2       20            10
3       20            5
4       20            5
5       20            5
-The 1-firm conc. ratio for X is 20, for Y is 60.
-The 2-firm conc. ratio for X is 40, for Y is 70.
-The 4-firm conc. ratio for X is 80, for Y is 80. (note that the ranking flip-flops as m changes- which industry
seems more concentrated depends on which m you use)
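A quick sketch reproducing the ratios above (shares from the table, in percent):

```python
# m-firm concentration ratios for the two industries in the table
# (market shares in percent, largest firms first).

industry_x = [20, 20, 20, 20, 20]
industry_y = [60, 10, 5, 5, 5]

def conc_ratio(shares, m):
    """Share of industry sales held by the m largest firms."""
    return sum(sorted(shares, reverse=True)[:m])

for m in (1, 2, 4):
    print(m, conc_ratio(industry_x, m), conc_ratio(industry_y, m))
# 1 20 60
# 2 40 70
# 4 80 80  <- the ranking flip-flops as m changes
```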
p. 148 -(Chap 6, #1) It's just a list of each of the m firms in each industry, by how much market share they have.
Can graph this on a concentration curve, which rises from left to right, generally at a diminishing rate. Where the
m firms have the same share, the curve will be a straight line. Curves reach 100% where m = the total number of
firms in the industry.
-If the curve of industry Y is everywhere above the curve of industry X, then Y is more concentrated than
X. When the curves intersect, it's impossible to state which is more concentrated, unless we use a different
definition.
-Basically, industries with steeper sloped curves are more concentrated.
-Note that the Census Bureau issues concentration ratios for various industries but often doesn't include close
substitutes in an industry, and ignores regional markets and foreign competition.
-The HHI is the main measure of concentration, p. 149 (chap 6, #1, 2)- the Herfindahl-Hirschman Index. Used
by the DOJ in its merger guidelines. It incorporates more info than the simple concentration ratio does.
-Si = firm i's share of total industry sales (its market share, expressed as a fraction)
-n = # of firms.
-HHI = (100S1)^2 + (100S2)^2 + . . . + (100Sn)^2
-Example: 10 firm industry, where each firm has 10% share.
-HHI = (10)^2 + (10)^2 + . . . (10)^2 = 10(100) = 1000.
-The HHI is the weighted average slope of the concentration curve. The weight for the slope of each segment of the
curve is the corresponding Si for that segment.
-The HHI declines with increases in the number of firms and increases with rising inequality among a given number
of firms.
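A sketch of the HHI calculation (shares in percent; reproduces the 10-firm example and scores industry X from the earlier table):

```python
# HHI from market shares expressed in percent, so a pure monopoly scores
# 100^2 = 10,000.

def hhi(shares_pct):
    """Sum of squared market shares (shares in percent)."""
    return sum(s ** 2 for s in shares_pct)

print(hhi([10] * 10))   # 1000: ten equal 10% firms
print(hhi([20] * 5))    # 2000: industry X, five equal 20% firms
print(hhi([100]))       # 10000: pure monopoly
```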
p. 151 (chap 5, #3)- DOJ regards HHI of 1000 as critical: if a merger leaves the industry with an HHI of 1000 or
less, the merger probably won't be challenged.
Concentration curve (figure 6.1)- The slope of any point on the curve is equal to that firm's share of the market.
-The more concentrated the industry, the higher its curve relative to another industry's.
-Aside: if it's an oligopoly, constant MC, and Cournot prices are used, then:
HHI/eta = S1[(Pc -MC1)/Pc] + S2[(Pc - MC2)/Pc] + . . . Sn[(Pc-MCn)/Pc]
-Note: (eta = market elasticity of demand)
-What this means is that higher HHI scores are positively associated with an increased price-cost margin.
-Also, for a given HHI, a higher market elasticity of demand means a lower price-cost margin.
-Note that high price-cost margin could be due to collusion (bad) or highly efficient firms (good)
-Note that the DOJ doesn't care, and if you have a high HHI, you're in trouble no matter what the reason is.
-Benefits of HHI: it has foundations in oligopoly theory. Suppose that firms have homogeneous products
and engage in Cournot competition, and have different cost functions. Ci will denote the MC of firm i, where i = 1,
2, . . ., n. One can show that the Cournot solution has a firm's market share being negatively related to its marginal
cost: the lower firm i's marginal cost, the higher is its profit-maximizing output and thus the higher is firm i's share
of the market.
-The important result is that the HHI is directly related to a weighted average of firms' price-cost margins
from the Cournot solution:
S1[(Pc - C1)/Pc] + S2[(Pc - C2)/Pc] + . . . + Sn[(Pc - Cn)/Pc] = HHI/eta (where eta = the market demand
elasticity, Pc = Cournot price, Si = firm i's market share).
-Basically, the higher the HHI, the higher is the industry price-cost margin.
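The identity can be checked numerically for an asymmetric two-firm Cournot market (hypothetical costs c1 = 40, c2 = 50 with P = 100 - Q; these numbers are not from the text):

```python
# Two-firm Cournot with different marginal costs, P = 100 - Q.
# With linear demand, firm i's best reply is qi = (100 - qj - ci)/2,
# which solves to q1 = (100 - 2*c1 + c2)/3 and symmetrically for q2.
# Verifies S1(P-C1)/P + S2(P-C2)/P = HHI/eta (shares as fractions).

c1, c2 = 40, 50                   # hypothetical costs: firm 1 is more efficient
q1 = (100 - 2 * c1 + c2) / 3      # 70/3: the lower-cost firm produces more
q2 = (100 - 2 * c2 + c1) / 3      # 40/3
Q = q1 + q2
P = 100 - Q

s1, s2 = q1 / Q, q2 / Q           # market shares as fractions
hhi = s1 ** 2 + s2 ** 2
eta = P / Q                       # elasticity of Q = 100 - P at this point

weighted_margin = s1 * (P - c1) / P + s2 * (P - c2) / P
print(abs(weighted_margin - hhi / eta) < 1e-12)   # True
```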
-Empirical evidence has shown that a high concentration index for an industry is a signal of a high price-cost
margin. There are two main hypotheses for why concentration and price-cost margin are positively related:
a) Collusion hypothesis: says that the more highly concentrated an industry is, the less competitive are
firms and thus the higher the price-cost margin. This is true under the Cournot discussion. Also, collusion is
easier as number of firms decreases.
-A policy implication of this theory is that highly concentrated industries should be broken up.
b) Differential efficiency hypothesis: (Demsetz) says high concentration doesn't cause high price-cost
margin. Instead, high concentration tends to be observed with high price-cost margins. In some industries, a few
firms will have cost or product advantages over their competitors, and these few firms will come to dominate the
market and thus increase concentration. These firms will be able to price above cost.
-Policy implication is that highly concentrated industries shouldn't be broken up because that would
penalize the superior firms and deter them from doing what we want them to do: provide better products at a lower
cost.
-The empirical evidence supports the differential efficiency hypothesis, that a firm's profit is strongly positively
associated with its market share. There is typically a weak positive association between industry profit and
concentration.
-Scale economies - do firms really need to be bigger (through mergers, generally) to lower their costs? How big
does a plant have to be as a % of the market to achieve economies of scale? It depends on the market.
- Perhaps the most important explanation of why some industries are more concentrated than others is the magnitude
of economies of scale relative to total market demand. In other words, what fraction of the market's output is
needed by a firm to achieve minimum long-run average cost.
-Specialization of labor and equipment that can be achieved as a result of larger size is an important source
of economies of scale (as in the car industry).
-Diseconomies of scale may result as firms get so big that top management loses control over the entire
organization. Can avoid this through decentralized management.
p. 153 (chap 6, #3)- Engineers studied the minimum efficient scale plant and scale firm as percentage of total market
to utilize scale economies. Generally, the actual market shares of leading firms are considerably greater than
necessary to attain efficient size.
-In only a few industries do estimated efficient shares approximate the actual shares of the leading firms.
p. 154 -According to McGee, business problems aren't just engineering problems, and engineers don't always run
businesses. There are also questions of management and control.
b) Entry Conditions- Competitiveness of industry isn't just measured by concentration, but also by ease of entry.
Entry conditions are important because: (1) the number of active firms is partially determined by the cost of entry as
well as other factors like economies of scale, and (2) entry conditions determine the extent of potential competition
(ease of entry induces active firms to compete vigorously).
-Equilibrium under Free Entry- Entry into an industry means acquiring the ability to produce and sell a product,
and there's generally some cost associated with this.
-In an industry where all active and prospective firms have access to the same production technology and input
prices, so that each firm has the same cost function and each firm produces the same product, assume that Pi(n) is
each firm's profit per period when there are n active firms.
-Pi(n) will decrease as the number of active firms increases.
-A free entry equilibrium is defined by a number of entrants n(e), such that entry is profitable for each of the n(e)
entrants and entry would be unprofitable for each of the potential entrants who chose not to enter.
-If K is the cost of entry, the free-entry equilibrium of firms is defined by:
[Pi(n(e))/r] - K > 0 > [Pi(n(e) +1)/r] - K
-The relationship between the cost of entry and the number of active firms at a free-entry equilibrium is quite
straightforward: if the cost of entry rises, fewer entrants find entry profitable.
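A sketch of finding n(e) (all numbers hypothetical: symmetric Cournot profits from the earlier example, plus an assumed interest rate r and entry cost K):

```python
# Free-entry equilibrium: firms enter as long as the present value of
# per-period profit covers the entry cost K. Per-period profit with n
# symmetric Cournot firms, P = 100 - Q and MC = 40, is (60/(n+1))^2.

def pi(n):
    """Per-firm Cournot profit with n symmetric firms."""
    return (60 / (n + 1)) ** 2

r, K = 0.10, 2000   # assumed interest rate and entry cost

n = 0
while pi(n + 1) / r - K >= 0:   # would one more entrant still profit?
    n += 1
print(n)   # the free-entry number of firms n(e); 3 with these numbers
print(pi(n) / r - K > 0 > pi(n + 1) / r - K)   # the defining inequality: True
```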
-Barriers to Entry- Barriers to entry, along with concentration, help explain deviations of price from cost. Some
say barriers are created by the market (high start-up costs, advertising), others say the only barriers to entry are
gov't barriers (like patents or licensing requirements), and others say every start-up cost is a barrier. Entry into
markets- people won't enter if the costs of entering are higher than the expected returns.
-Stigler v. Bain- See notes for graph of the situation they discuss.
-Stigler says new firms could come in, produce slightly lower q at a lower price and take all of the existing
firm's profits.
-Bain says that's unrealistic, because consumers are unwilling to switch brands. The new firm would have
to offer a considerable price discount to lure consumers away, or give away samples.
(**)- think about the issues between Bain and Stigler, think of examples of consumer loyalty as barrier to
entry
-ReaLemon example- ReaLemon had to allow other firms to license its name because it so dominated the market.
-Joe Bain defined a barrier to entry as "the extent to which, in the long run, established firms can elevate their
selling prices above minimal average costs of production and distribution without inducing potential entrants to
enter the industry."
-These include scale economies, capital cost requirements of entry, gov't restrictions, and cost advantages
of existing firms.
-A barrier to entry, as defined by Bain, need not imply that its removal would raise welfare.
-George Stigler said "a barrier to entry may be defined as a cost of producing (at some or every rate of output)
which must be borne by firms which seek to enter an industry but is not borne by firms already in the industry." (like
heavy introductory advertising)
-This definition emphasizes differential costs between existing firms and entrants. Stigler's definition is
narrower than Bain's.
-von Weizsacker says "barriers to entry into a market can be defined to be socially undesirable limitations to entry
of resources which are due to protection of resource owners already in the market."
-Like Bain, defines a barrier to entry by a particular outcome.
-When thinking about barriers to entry, first consider the assumptions underlying the particular argument that
something is a barrier. Determine whether it's true that existing firms can maintain price above cost while deterring
entry. Second, consider whether there is a policy that could remove the barrier and improve social welfare.
3) Contestability and Sunk Costs- (Chap 6, # 5) Baumol et al developed the theory of contestable markets.
Contestable markets- A market is perfectly contestable if three conditions are satisfied:
(1) new firms face no disadvantage vis-à-vis existing firms. New firms basically have access to same
production technology, input prices, products, and info about demand.
(2) there are no sunk costs. That is, all costs associated with entry are fully recoverable: a firm can exit
the industry at no cost.
(3) Entry lag (the time between when a firm's entry into the industry is known by existing firms and when
the new firm is able to supply the market) is less than the price adjustment lag for existing firms.
-The central result is that if a market is perfectly contestable, then an equilibrium must entail a socially efficient
outcome.
-This is similar to Stigler's definition.
-Once these requirements are met, there's an efficient outcome. Don't need lots of firms in an industry to be
efficient. It just needs to be perfectly contestable.
-Example: Bill Gates probably can't argue he's in a perfectly contestable market: new firms can't use same
technology, high sunk costs/investment costs for new firms.
Problem #2, chap 6: Suppose an industry has ten firms with given percentages. Be able to derive 4-firm conc. ratio,
derive the HHI, derive the effect of a merger between the 5th and 6th firms.
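A sketch of the calculations this problem asks for. The ten market shares below are hypothetical, since the notes don't reproduce the actual percentages:

```python
# Concentration measures for problem-#2-style questions. The ten market
# shares are hypothetical (the notes don't reproduce the real ones).
shares = [20, 15, 14, 11, 10, 9, 8, 6, 4, 3]   # percent, sums to 100

def cr4(shares):
    """Four-firm concentration ratio: sum of the four largest shares."""
    return sum(sorted(shares, reverse=True)[:4])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares."""
    return sum(s**2 for s in shares)

def merge(shares, i, j):
    """Combine the shares of firms i and j (0-indexed) into one firm."""
    rest = [s for k, s in enumerate(shares) if k not in (i, j)]
    return rest + [shares[i] + shares[j]]

print(cr4(shares))                 # -> 60
print(hhi(shares))                 # -> 1248
# Merger between the 5th and 6th largest firms (shares 10 and 9):
print(hhi(merge(shares, 4, 5)))    # HHI rises by 2*10*9 = 180, to 1428
```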
Chapter 7: Mergers
1) Introduction- Horizontal mergers: mergers in which rivals in the same market merge (like two steel companies
merging). Not all horiz mergers harm competition, but there's a clear potential to harm competition b/c they reduce
the number of rivals.
-Vertical mergers: mergers between two firms w/ potential or actual buyer-seller relationships (like a steel maker
merging with an iron ore producer).
-Harm competition not by reducing the number of competitors, but by foreclosing rival buyers (other
steel-makers) from the now-merged supplier (the ore producer).
-Conglomerate mergers: neither horizontal nor vertical. Divided into 3 categories by FTC:
(a) product-extension merger: between firms who sell non-competing products but use related marketing
channels or production processes (like Pepsi merging with Pizza Hut)
(b) market-extension merger: between two firms selling the same product but in separate geographic
markets. (Florida supermarket chain merging with Illinois chain).
(c) Pure conglomerate mergers: between firms with no relationship of any kind (Tobacco company merging
with oil company).
-Conglomerate mergers can be harmful b/c of agreements to remove potential competitors. (like Proctor
and Gamble merging with Clorox).
2) Antitrust Laws and Merger Trends- (question #1) 4 main merger waves:
1) Merger for monopoly wave, 1890-1904 (the largest in terms of size relative to GNP). Many
oligopolistic or nearly-competitive industries became monopolies through merger. Main one was steel industry,
where over 200 iron and steel makers merged into 20 firms in 1880s, and JP Morgan merged 12 of those 20 into
U.S. Steel in 1901, had 65% of market. Big price increases.
-Sherman was passed in 1890, but wasn't used against mergers until the Northern Securities case in 1904, which
blocked that merger; Standard Oil and American Tobacco were later broken up (1911).
-Clayton Act was passed in 1914 to block mergers that didn't just create monopolies (which was all
Sherman covered), but also stock mergers that substantially damaged competition. Could still damage competition
through asset mergers (which were still legal)
2) Mergers to oligopoly, 1916-1929. Couldn't merge to monopoly any more, so the industries merged into
oligopoly (like Bethlehem Steel).
3) Celler-Kefauver Act, 1940s-1968. CK Act blocked damaging asset mergers by amending Clayton to
include both asset and stock mergers.
-Tougher laws have reduced horizontal and vertical mergers sharply, but conglomerate mergers increased.
4) Big acquisitions/LBOs, 1980s. Acquisitions increased from $50 bill in 1983 to $200 bill in 1988.
-LBOs work by an investor group putting up 10% of bid price in cash, then borrowing against company's
assets to raise 60% in bank loans and 30% in junk bonds. The investor group then buys all of the company's
outstanding stock, making the company private. The owners sell off parts of company to reduce debt, also cut costs
and fire workers. Eventually, they hope to take the company public again and make huge money.
-Antitrust concerns arise when the parts of the LBO companies are sold off.
3) Reasons for Mergers- Not all motives for mergers are anti-competitive.
a) Increase market power/Monopoly- Firms like mergers leading to higher degrees of market power.
b) Increase distance between cost and price/Economies- Combining two firms may lead to greater profitability
through cost savings. Two types of cost savings:
1) Pecuniary economies: monetary savings from buying goods or services more cheaply, e.g., through
increased bargaining strength relative to suppliers.
2) Real economies: increased specialization or scale economies.
-Real economies are socially desirable, and pecuniary economies merely reflect redistributions of income between
buyers and sellers.
c) Reducing Management Inefficiencies- Takeovers of one firm by another can lead to savings by replacing an
inefficient management with a more efficient one. In modern corps, there's a problem of separation of ownership
and control: a conflict between the objectives of management (power and profits) and s/h's (profits only).
p. 201 -(Question #2) The Principal-Agent Relationship: Principal/owners hire agent/management to run the biz
and earn max. profits for the principal. Owners aren't sure about profit possibilities that result from managerial
decisions. Figure 7.1 shows the "profit possibility frontier" as a relationship between profit (p) and output (q).
Think of q as representing output and also variables managers care about in addition to profit (size of firm, salaries).
-Management has good info about the frontier, owners don't.
-Where management's indifference curve intersects the frontier is the point representing the p and q
management prefers (with a lower p and higher q than owners would prefer).
-There's an agency cost (AC): what the principal must give up because it has to contract with an
agent to manage the firm. The principal's lack of info makes it impossible to simply require the agent to choose the
p the owners would prefer if they had full info.
-Principal tries to use profit-sharing plans and stock options to induce the agent to operate closer to the principal's
preferred p, but the basic conflict in objectives means that AC will never be zero.
-Jensen says this conflict between managers and owners explained the large number of acquisitions in the
1980s, especially in oil industry: management wanted more exploration and investment despite excess capacity,
s/h’s wanted cash flows paid out as dividends which would reduce the size of the firms (against the management's
wishes).
-The management invested in exploration with negative net present values to avoid paying
dividends. This reduced the stock market's valuation of the company, and so lowered the cost of buying the
companies. Hence all the mergers.
d) Other Motives- Firms' owners may wish to sell for many other reasons: financial distress, retirement, estate and
income tax advantages, diversification, or just a desire to build empires.
4) Horizontal Mergers- Again, HMs provide the biggest threat to competition, by reducing the number of
competitors and raising the possibility of creating market power. But they also have the potential to create socially
beneficial cost savings by integrating the firms' productive facilities.
Judging whether horizontal merger makes sense (figure 7.2)- This is basically the Bork-Williamson diagram for
mergers. Same curve we've seen before.
-Only a small cost reduction is needed to offset the DWL (because the cost reduction is a rectangle on the graph,
and the DWL is just a triangle).
-It's pretty tough to actually show there will be a cost reduction though. Are Mercedes and Chrysler really
going to share that many parts or eliminate duplicated research efforts?
p. 203, 204 a) Benefits and Costs- (question #3) Figure 7.2 shows p and q graph of effects of horizontal merger.
HM results in lower AC for firms, but increased market power leads to higher price.
-The merger results in a DWL of consumer surplus, equal to the shaded triangle A1. There's a gain to society because
of cost savings, equal to the shaded rectangle A2. A2 is the cost savings in producing at a lower AC.
-The result of this is that a relatively small percentage cost reduction will offset a relatively large price increase,
thereby making society indifferent to the merger (because benefits is a rectangle and DWL is a triangle).
-For example, if a merger is expected to increase price by 20%, only a 2.4% cost reduction is required to make
areas A1 and A2 equal in figure 7.2 (the exact figure depends on the assumed elasticity of demand).
-It should be noted that the model presented here assumes a merger that creates both market power and economies.
That doesn't always happen in HMs.
-This model requires hard empirical evidence on the extent to which the HM would lower costs, raise prices, reduce
output, etc. This evidence may be hard to gather. Also, the firms would have to provide the evidence, and they'd
have incentive to overstate cost savings.
Question #1- based on figure 7.2, work with a graph.
-- D: q = 100- p, p = 100 - q
--AC(pre-merger) = 50, AC (post-merger) = 44
--P0 = 50, P1 = 70, Q0 = 50, Q1 = 30
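These numbers can be checked with a short sketch of the Williamson comparison (areas A1 and A2 of figure 7.2); the welfare comparison at the end follows the Bork method described in these notes:

```python
# Williamson trade-off with the numbers from the figure-7.2 problem:
# demand p = 100 - q, pre-merger AC = 50 (= price), post-merger AC = 44,
# post-merger price 70.
P0, P1 = 50, 70      # pre- and post-merger price
Q0, Q1 = 50, 30      # quantities from q = 100 - p
AC0, AC1 = 50, 44    # average cost before and after the merger

dwl = 0.5 * (P1 - P0) * (Q0 - Q1)   # deadweight-loss triangle A1
cost_savings = (AC0 - AC1) * Q1     # cost-savings rectangle A2
transfer = (P1 - P0) * Q1           # surplus transferred from consumers to producers

print(dwl)            # -> 200.0
print(cost_savings)   # -> 180
print(transfer)       # -> 600
# Bork method: cost savings (180) < DWL (200), so this merger reduces net welfare.
```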
b) Preexisting Market Power- If price exceeds average cost in the pre-merger period, rather than equaling cost as
assumed above, the analysis must be modified. In that case, larger economies are needed to offset the welfare
losses of a post-merger price increase.
c) Timing- It may be the case that the economies can be realized through internal expansion if not by merger. So,
preventing the merger may only delay the achievement of economies rather than eliminating them forever. This
will occur most easily if the market in question is growing, thereby enabling the firms to grow to efficient sizes w/o
necessarily increasing their market shares at the expense of rivals.
p. 206 -In other words, view A1 and A2 as changing over time. This means that if we judge the merger with an
eye to the future, DWL may be larger and cost reduction may be smaller than we might think based on just the
present situation.
-This means internal growth may be a good alternative to HMs.
d) Industry-wide effects- The market power effects of an HM may lead to price increases by other firms as well.
So, the DWL of an HM may be understated if we don't consider the impact of the HM on prices of other firms.
-Rivals are likely to oppose an HM that leads to efficiencies and real economies, and support an HM that
leads only to market power.
e) Technological Progress- Also need to consider the effect of an HM on technological progress.
f) Income Distribution- The monopoly profit that's created comes at the expense of consumers' surplus and is
therefore a transfer from one group to another.
5) HM Cases- Courts don't necessarily evaluate mergers using the cost-benefit analysis above.
-Brown Shoe (1962)- Proposed merger between Kinney and Brown. - what was the relevant market? Women's
shoes in Omaha, low cost shoes nationwide, etc.?
-DOJ defined the relevant market as a product group in a geographic market where a firm would increase price if it
were the only producer.
-This isn't a hard and fast rule (unlike HHI)
-HHI rule doesn't lead us to challenge all mergers, just those that raise industry HHI above 1000.
Court defined the product market as men's, women's, and children's shoes. Defined geographic market as every city
with over 10,000 people where Brown and Kinney sold shoes at their retail stores. In some markets, the HM would
grant B/K more than a 5% market share.
-Court focuses on relatively small market shares that would be unlikely to lead to monopolistic price
increases. But Court wanted to prevent a trend toward concentration.
-Efficiencies that would result from the merger weren't as important as maintaining a decentralized
industry.
-Court later said in 1967 that "Possible economies cannot be used as a defense to illegality".
-Possible interpretations: 1) Too hard to measure cost savings for them to serve as a defense.
2) Cost savings are harmful because they lead to the failure of small, inefficient retailers.
3) Antitrust has multiple objectives, and economic efficiency is only one. Balance the presence of many
small retailers against the higher costs to consumers.
-Continental Can and Alcoa (1964) showed that the Court is willing to define markets in a way that resolves
inherent doubts on the side of preventing mergers with possible anti-competitive effects.
-Von's (1966)- two supermarket chains merged and together had 7.5% of the market, became the second largest
firm. Court found it illegal, mainly because of the decline of independent grocery stores and the rise of chains.
-Under current law, market shares at the time of the merger are a basis for determining illegality. In 1960s, a
combined share of 10% was too high. Now, a combined share of 25% may be O.K.
6) 1992 Merger Guidelines- A key point is how the relevant market will be defined. Geographic market is basically
the largest area in which customers would shop in response to a price increase by their current supplier. Product
market includes the products consumers would buy if the price of the product in question increased.
-The higher the price increase, the broader the potential geographic and product market.
-DOJ uses HHI to determine which mergers are safe from challenges because they're unlikely to have adverse
competitive effects.
-Those that don't fall within HHI safe harbors are analyzed further with respect to entry conditions,
efficiency considerations, etc. to determine if they'll be challenged.
Table 7.5- Viscusi said this was really important.
                          Increase in HHI
Post-merger HHI:       0-50      50-100     100-up
1800 and above:        Safe      Unsafe     Unsafe
1000-1800:             Safe      Safe       Unsafe
0-1000:                Safe      Safe       Safe
(*)-need to be able to use this table for answering questions on test.
(Question #4) -There are three categories of market concentration:
p. 214 a) Unconcentrated (HHI < 1000)
b) Moderately Concentrated (1000< HHI < 1800)
c) Highly Concentrated (HHI > 1800)
-All mergers with post-merger HHI values of 1000 or less are safe.
-Mergers that produce an HHI increase of less than 100 in Moderately Concentrated markets are safe.
-Mergers that produce an HHI increase of less than 50 in a Highly Concentrated market are safe.
-Mergers that aren't in the safe harbor would be challenged or not challenged depending on the other factors.
-The cost savings emphasized by Williamson’s model may be one of the other factors considered, but
they're tough to prove. Also, if the savings could come about through internal expansion, the cost savings from the
merger won't do much for the pro-merger argument.
7) Conglomerate Mergers- Again, CMs involve firms that aren't sellers in the same market, and aren't in a
buyer-seller relationship. Pure CMs are less likely to be challenged than product extension CMs and market
extension CMs, because the latter two are more likely to reduce potential competition.
a) Potential Benefits- In 1960, ITT made mainly telecomm equipment, then it diversified into insurance, hotels, etc.
Top management of such conglomerates may be very hands on in the different divisions, or just concerned with
profits overall.
-One benefit of CMs that lead to big conglomerates is that they serve as "miniature capital markets," meaning that
central management can assign cash flows to high yield uses, subjecting cash flows to internal competition.
-Another benefit of CMs is that management is constantly pressured to perform efficiently by the threat of a
takeover by another firm.
b) Anti-competitive effects and cases- One problem is that they create opportunities for reciprocal dealing, the
practice of buying from a supplier only on the condition that the supplier buys from you.
-Another problem is predatory pricing, deliberately pricing below cost to drive out rivals, and raising the price to
the monopoly level after their exit.
-Another problem is eliminating potential competition. DOJ Guidelines: "An actual competitor is one that has
existing productive and distributive facilities that could easily be used to produce and sell the relevant product
within one year in response to a small but significant price increase."
-Potential competitors are those that "must construct significant new facilities in order to produce and sell
the relevant product."
p. 218 -(Question #4, cont'd)- P & G was biggest soap and detergent maker, bought Clorox (with only $40 mil in
sales) that was biggest bleach producer. Court found the merger illegal b/c it eliminated P & G as a potential
competitor of Clorox. P & G could have entered the bleach market on its own, and its presence as a potential
competitor kept bleach prices low.
-Merger Guidelines provide criteria for challenging a potential competitor merger:
a) The HHI must exceed 1800.
b) Entry must be difficult.
c) The eliminated potential competitor must have been one of only three or fewer firms having comparable
advantages in entering the market.
d) The acquired firm's market share must be at least 5%.
Clinton merger policies- Are Clinton's merger policies tougher than previous merger policies? How would we
judge this?
-Check the number of mergers that don't go through, and what industries are most impacted.
- Check the HHIs of the ones that were approved and the ones that were rejected.
-Check the trends on this data over time.
-Divestitures or other actions that are needed before approval.
Merger trends
-We're seeing lots of mergers now because the nature of the economy is changing. Many industries are becoming
deregulated (electricity, utilities) so mergers will be more possible. Or people will merge to consolidate in tough
times.
Staples and Office Depot- The two wanted to merge, FTC got injunction against it in 1997. Look at arguments of
each side:
Issue 1: The market
-FTC- the two pre-merger entities had large market shares in the relevant market (office supply super
stores), and FTC said the merged entity would have a monopoly of office superstores. The only other big guy out
there is Office Max.
-Staples/Office Depot- everybody sells office supplies (like Walmart), and it's wrong to define the relevant
market as office superstores. 80% of all office supplies come from non-office superstore sources.
Issue 2: Efficiencies
-Staples/Office Depot- there will be cost savings because of fewer competing, overlapping stores, save as
much as 6%.
-FTC- not impressed with this argument, because much of the cost savings is due to inefficient
management in the existing structure; 43% of the cost savings are available before the merger. Also, not all of the
savings will go to consumers; much will go to the company.
Issue 3: Increased prices due to decreased competition
-FTC- predicts a 7.3% price increase due to monopoly. There's no way cost savings will offset that price
increase.
-Staples/OD- Excluding the cost savings, they thought prices would go up 1.3%.
Issue 4: Bottom Line
-FTC- All that matters to the FTC is the consumer surplus. Efficiency gains reaped by the firm are
irrelevant. What matters is whether or not consumers get shafted.
-Staples/OD- everybody benefits: consumers get lower prices, companies make more money.
Conclusion: FTC won, regardless of efficiencies and firms' arguments.
Problem #1, chapter 7- Williamson problem, based on figure 7.2. Before merger, company set price equal to
average cost (this is very nice, there was no market power before merger). After merger, price is raised to 70 (now
they have market power, so can afford to set price above cost)
-Need to calculate DWL, cost savings from merger, transfer from consumer to producers.
-Judge whether this is a good merger by: Bork method- compare DWL and cost savings.
-Clinton method- look at transfers
Problem #5, chapter 7- Pre-merger HHI was 1538, merging firms' market shares were 22.7 and 15. Post-merger,
the merged entity had 37.7 market share.
-Would this merger be blocked? Post-merger HHI is above 1000, so it's not automatically safe; need to check the
post-merger HHI and the increase in HHI.
-Pre-merger, the HHI of the individual firms was 515.3 + 225 = 740.3
-Post-merger, the HHI of the merged entity is 1421.
--The increase in the HHI after the merger = (S1 + S2)^2 - (S1)^2 - (S2)^2 = 2(S1)(S2).
-Could also figure out the increase by subtracting the pre-merger HHIs from the post-merger HHIs.
-In this case, the increase in HHI is 681, the post-merger HHI is 2219.
-If the increase in HHI is over 50, you're in trouble, so this merger is clearly not in the safe harbor.
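The arithmetic above, combined with the Table 7.5 safe harbors, can be sketched as follows (thresholds as summarized in these notes; function names are just for illustration):

```python
# Table 7.5 safe-harbor screen, applied to problem #5's numbers.
def hhi_increase(s1, s2):
    """Increase in HHI when firms with shares s1, s2 (percent) merge: 2*s1*s2."""
    return 2 * s1 * s2

def safe_harbor(post_hhi, delta):
    """True if the merger falls inside a Table 7.5 safe harbor."""
    if post_hhi <= 1000:
        return True              # unconcentrated market: always safe
    if post_hhi <= 1800:
        return delta < 100       # moderately concentrated
    return delta < 50            # highly concentrated

pre_hhi = 1538
delta = hhi_increase(22.7, 15)   # = 681
post_hhi = pre_hhi + delta       # = 2219

print(delta, post_hhi)           # -> 681.0 2219.0
print(safe_harbor(post_hhi, delta))   # -> False: highly concentrated, increase > 50
```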
President's Report on Regulation:
-Pros and cons of regulating on the job injuries (from NY Times)- says it's good for employers b/c they'll lose less in
sick days and workers compensation. The regulation in question aims at making the work place safer. UPS
lobbied against it.
-Does it really pay for itself?
-Pro- increased productivity of workers by reducing injuries; decreased workers comp costs; lower wages
because workers won't get hurt as much and won't be able to demand extra pay for extra risks.
-Con- Hurts productivity b/c the work place and processes will have to be redesigned to accommodate the
new regulation; for some industries, it's an externality if self-insured; if it was going to increase profits for the firms,
they would have done it already.
-Viscusi says it's silly to try to prove that the regulation's benefits outweigh its costs to the firms, that's a hard
argument. It's much easier to prove that there's a net benefit to society in general.
Chapter 8: Vertical Mergers and Restrictions
1) Introduction- Here we analyze how firms can harm competition by inflicting injury on their rivals. Focus on
vertical arrangements between buyers and sellers. For example, by buying or merging with some of its
customer-firms, a competitor can exclude its rivals from selling to those customer firms (this is called
"foreclosure").
-Another type of vertical restriction is "tying". Tying one product to another is viewed harshly by courts, like using
market power in one market to lead to market power in a second market.
2) Vertical Restrictions- Consider various types of vertical restraints.
a) Exclusive Dealing- a contract between supplier and dealer stating that the dealer will buy all of its supplies from
that supplier. It's basically an alternative way of accomplishing vertical integration. May lead to foreclosure of
rivals.
-Benefits may include lower selling expenses by the supplier and lower search costs by the dealer; Supplier may
invest more in helping the dealer's sales skills; Supplier may find it worthwhile to promote the products nationally if
he knows that the dealers won't substitute a lower-priced non-advertised brand when consumers flock to their stores.
-In Standard Fashion (1922), Court found an exclusive dealing arrangement between a manufacturer of dress
patterns and a dry-goods store to be illegal, because rival pattern manufacturers were foreclosed from the market.
-In Nashville Coal (1961), Sup Ct refused to strike down an exclusive dealing arrangement between a coal company
and an electric company. It only involved .77% of total coal production, so didn't qualify as a substantial lessening
of competition.
-In Standard Oil (1949), an exclusive dealing arrangement involving a 6.7% market share was struck down.
-So, whether exclusive dealing is likely to be illegal seems to depend on the market shares involved.
b) Tying- is the practice of a seller conditioning the sale of one product on the sale of another.
-Typically, the customer buys or leases a "machine" and then must purchase the inputs that are used with the
machine from the same supplier.
-Viscusi gives the example of a loan agency requiring you to buy credit life insurance from them in order to get a
loan from them. Is credit life insurance a good plan? We'd want to know how much we pay for it (premiums), the
market price of such policies, whether people would want to buy the insurance if they weren't required to do so,
profit margins for the company on the insurance policies (in reality it was 80 cents on the dollar).
-This raised the FTC's eyebrows, because the credit life insurance was basically theft.
-Courts view tying as a device for extending monopoly over one product to the tied product. This is known as the
"leverage theory" of tying.
c) Leveraging or Extension-of-Monopoly- Firms don't tie products in order to get a monopoly, according to
Burstein, in part because we see many firms with small market shares engaging in tying. He says price
discrimination is the true explanation of many of these practices.
-Also, extension of monopoly is weak because the tying firm could maximize its income not just by tying,
but also by raising the price on the tying product/the product he has monopoly power over.
-Other possible explanations for tying are efficiency, quality control, evasion of price controls, and to prevent
substitution away from a monopolized input.
d) Price Discrimination- (Question #2) Figure 8.8 (page 250) depicts the usual profit-maximizing monopolist
equilibrium where the monopolist is permitted to select a single price. It's in the monopolist's interest to try to
extract a larger profit by price discrimination.
-Think of tying as a price scheme designed to extract more of the consumers' surplus.
-Recall that block-booking is where a movie distributor requires a theater owner to take movie B if he wants movie
A. Here's an example:
-Two theater owners have different maximum values for 2 different movies:
-Fox Theater values Movie A at 100, Movie B at 70.
-York Theater values Movie A at 60, Movie B at 80.
-To obtain the max. revenue, the movie distributor has several options:
-(1) Perfect price discrimination - here the monopolist would charge each consumer at each individual
consumer's maximum willingness to pay (this is called their reservation price). There would be no consumer
surplus here, all profits would be sucked up by monopolist.
-Is perfect price discrimination efficient? Yes- charge every customer what they're willing to pay, no DWL, but no
consumer surplus. Maybe not great from an equity standpoint, but it is efficient.
(-Consider the Internet here- won't that allow companies to achieve nearly perfect price discrimination?
What are the antitrust ramifications of this?)
Here, perfect price discrimination would mean charging separately the maximum value for each movie to each
individual: 100 + 70 + 60 + 80 = 310.
-(2) Uniform/Normal Pricing - If the distributor can charge only one price per movie, the best he could do
would be to charge 60 for Movie A and 70 for Movie B: (60 + 70) from each theater = 260.
-(3) Block Booking - Offer a bundle of movies for one price. The best he could do would be to charge
140 to both theaters, because that's the maximum York would pay for the bundle. 140 + 140 = 280.
-(**) The point is that block booking yields higher revenue than normal pricing. This doesn't always work
though. If Fox would pay more for both movies, block booking gives results identical to normal pricing.
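The three pricing schemes can be verified with a short sketch using the valuations above:

```python
# Block-booking example from the notes: two theaters, two movies.
values = {"Fox":  {"A": 100, "B": 70},
          "York": {"A": 60,  "B": 80}}

# (1) Perfect price discrimination: charge each theater its full valuation.
perfect = sum(v for theater in values.values() for v in theater.values())

# (2) Uniform pricing: one price per movie; the best single price is the
# revenue-maximizing price over the theaters' valuations of that movie.
def best_uniform(vals):
    return max(p * sum(1 for v in vals if v >= p) for p in vals)

uniform = sum(best_uniform([t[m] for t in values.values()]) for m in ("A", "B"))

# (3) Block booking: one price for the A+B bundle.
bundles = [sum(t.values()) for t in values.values()]   # Fox: 170, York: 140
block = max(p * sum(1 for b in bundles if b >= p) for p in bundles)

print(perfect)   # -> 310
print(uniform)   # -> 260
print(block)     # -> 280
```

Block booking beats uniform pricing here because the theaters' rankings of the two movies are reversed, which is exactly the case the notes describe.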
-Courts frown on block booking (Loew's, 1962). Court is concerned that tying can foreclose rivals from the tied
market and this is harmful to competition.
-Baseball season ticket packages are like block booking, but Sup Ct hasn’t done anything about it.
-(**) Viscusi likes the block booking example, and it will probably be on the test.
-Underlying assumption is that tying is form of price discrimination. Price discrimination can have both
positive and negative effects.
p. 252-Copying machine example (question #3)- This example illustrates variable-proportions tying. A
monopolist of copying machines has two potential customers with different preferences for copying services. The
difference in preferences is an essential part of the rationale for tying. The general idea is that tying gives the
monopolist the ability to tailor his prices to fit his customers better than if he could charge only a single price to
everyone.
-The profit maximizing scheme would be a tied sale of paper to the machines.
-Could just sell the machine by itself, and give away the paper.
-The monopolist has constant costs of producing copying machines of 1000 per unit. MC of paper = 0.
-The customers get no utility from the machines, but only from the copying services they produce in
combination with paper.
-The number of packages of paper measures the quantity of services consumed by the customers.
-Figure 8.9 shows the demand curve of copying services for the two consumers.
-Consumer 1's demand: q1 = 100 - p1
-Consumer 2's demand: q2 = 200- 2p2
-The areas under the demand curves and above the horizontal axes represent the consumer
surpluses. 1's consumer surplus is 5000 (willing to pay that), 2's consumer surplus is 10,000 (willing to pay that).
-So, monopolist could sell both consumers a machine for 5000, or sell a machine only to consumer
2 for 10,000 (assuming he can't separate the prices).
-Profits of selling 2 machines @ 5000 = 2(5000-1000) = 8000.
-Profits of selling 1 machine @ 10,000 = 10,000 -1000 = 9000.
-Monopolist would do better selling 1 machine at 10,000 & forcing consumer 1 out of the market.
-Now the monopolist can decide to tie paper to the machines.
-Figure 8.10 illustrates the profit-maximizing solution that the monopolist would sell the machine
for 2812.50, and paper for 25. The first consumer will buy 75 packages of paper at that price, and the CS would be
2812.50, which is extracted as the price of the machine.
-The second consumer would buy 150 packs and would also pay 2812.50 for the machine.
-In this case, profits under tying = 2(2812.50 - 1000) + 25(75 + 150) = 9250.
-The point is that tying permits the monopolist to extract a higher overall profit. Tying here permits the
monopolist to be flexible, in that he can lower the machine price to attract customer 1, and make up for lowering the
machine price by making profits on paper sales. The monopolist is also no longer limited to obtaining equal
revenues from both customers- the customer with higher demand will buy more paper.
-Now judge effect of tying on social welfare.
-In non-tying situation, monopolist would choose to charge 10,000, at a cost of 1000. Total
surplus is 9000. The consumer surplus was 10,000, captured entirely by the monopolist.
- In the tying situation, total surplus equals the two consumer surplus triangles (2812.50 for customer
1 and 5625 for customer 2), plus the two areas representing payments for paper (1875 for customer 1 and 3750 for
customer 2), minus the costs of the two machines (2000). Total surplus is 12,062.50.
-So, in this particular example, tying leads to higher total surplus. However, the results can vary
depending on the situation, and surplus could be lower under tying.
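The arithmetic of this example can be checked with a short sketch (the demand curves, costs, and the 25/2812.50 prices are all from the text; the code itself is just a check, not a general model):

```python
# Sketch of the copying-machine arithmetic (numbers from the text).

def cs_linear(choke_price, slope, p):
    """Consumer surplus and quantity for linear demand q = slope*(choke_price - p)."""
    q = slope * (choke_price - p)
    return 0.5 * q * (choke_price - p), q

MACHINE_COST = 1000  # constant cost per copying machine; paper MC = 0

# No tying: charge a single machine price.
profit_both = 2 * (5000 - MACHINE_COST)      # price at consumer 1's surplus -> 8000
profit_only_2 = 10000 - MACHINE_COST         # price at consumer 2's surplus -> 9000

# Tying: paper at 25, machine priced at consumer 1's remaining surplus.
paper_price = 25.0
cs1, q1 = cs_linear(100, 1, paper_price)     # consumer 1: q1 = 100 - p
cs2, q2 = cs_linear(100, 2, paper_price)     # consumer 2: q2 = 200 - 2p
machine_price = cs1                          # 2812.50, extracted from consumer 1
profit_tying = 2 * (machine_price - MACHINE_COST) + paper_price * (q1 + q2)

print(profit_both, profit_only_2, profit_tying)  # 8000 9000 9250.0
```

The 9250 under tying beats the 9000 from selling only to consumer 2, matching the text.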
e) Efficiency- In many cases, it's just more efficient for the product to be sold as a tie-in. The classic case is an
automobile (selling the battery and the engine included in the car).
f) Quality Control- A defense often given in tying cases is that the tied good is necessary for the satisfactory
performance of the tying good (like saying inferior salt would ruin the salt machine). Damages caused by inferior
goods would cause the company a loss of good will from its customers.
-In Chicken Delight (1971), the Circuit Court said the firm could have achieved the necessary quality control by
specification of the appropriate cooking equipment and supplies. It was therefore unnecessary in the court's view for
Chicken Delight to require purchases of these items from itself.
-There are problems with the view that simply stating the required specifications is a perfect substitute for the tie-in:
a) It may be costly to convince buyers of the need for the stated specifications when cheaper alternatives
exist.
b) There may be a free rider problem- reputation of the franchisor could be damaged if only a few of the
franchisees decided to use inferior inputs and equipment.
-A successful quality control argument was made in Jerrold Electronics (1960). Jerrold sold antennas in
packages: installation, equipment, and maintenance all together. The plan was legal only because the industry was
in its infancy.
g) Evasion of Price controls- Tying can be a way to avoid price controls. If gas price is regulated, tie the sale of
gas to buying a can of motor oil.
h) Prevent Substitution- Tying could be used to prevent substitution away from a monopolized input. Monopolist
could tie the sale of input K (monopolized) to the purchase of input Q (non-monopolized).
i) Current Law toward Tying- In International Salt (1947), the defendant said tying purchases of its high-grade salt
to its salt machines was necessary to maintain good will. The Court didn't agree, b/c high-grade salt could be
provided by other firms.
-In Northern Pacific (1958), Court said tying is per se illegal when a party has sufficient economic power with
respect to the tying product to appreciably restrain free competition in the market for the tied product, and a "not
insubstantial" amount of interstate commerce is affected.
-In Hyde (1984), and in the DOJ guidelines, per se illegality of tying was weakened. The DOJ wouldn't pursue charges
against a firm with a market share of less than 30% in the tying product market.
-If the share were over 30%, the DOJ would follow rule of reason analysis. Those guidelines were dropped in 1993.
-In Kodak (1992) Court maintained the per se approach. The rule is that tying is illegal when the seller possesses
sufficient market power in the market of the tying product and the amount of commerce involved is substantial. It
is possible for the tying to be reasonable in the sense of the Jerrold case.
Chapter 9: Monopolization and Price Discrimination
1) Intro.- Dominant firms are a more realistic concern than monopolies. Few real monopolists exist. Note that the
law forbids the act of monopolizing but not monopoly itself. It's O.K. to achieve dominance through efficiency or
superior quality, but not through predatory tactics. Monopolies are rule of reason cases: look to inherent effect
(possession of monopoly power) and intent. When does a firm have monopolistic power when there's more than one
firm?
-We care about two things: Possession of monopoly power and whether firms exercise monopoly power. We'd like
a numerical test for whether or not a firm has monopoly power.
Diagram of a monopoly - When you're producing a good there's two types of costs:
a) Fixed costs- plant and equipment.
b) Variable costs- use of power, etc.
-Average cost = (fixed cost + variable cost)/total output
-Marginal cost = the additional cost of producing the last unit of output
-See notes for graph. Note that the MC curve goes through the AC curve at the minimum point on the AC curve.
-To the left of minimum AC, MC<AC, pulling down AC.
-To the right of minimum AC, MC>AC, driving up AC.
-The output for the monopoly will be where MR = MC; the price for the monopoly will be the price on the demand
curve corresponding to that quantity.
-The per unit profit = (P - AC). Total profit = (P-AC)xQ
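The MR = MC rule above can be sketched numerically for a linear demand curve (the demand intercept, slope, and cost numbers below are illustrative, not from the text):

```python
# Hedged sketch of the MR = MC rule for linear demand P = a - b*Q with constant
# marginal cost c (a, b, c below are made-up illustrative numbers).

def monopoly_outcome(a, b, c):
    # MR = a - 2*b*Q, so setting MR = MC gives Q* = (a - c) / (2*b)
    q = (a - c) / (2 * b)
    p = a - b * q          # read the price off the demand curve
    return q, p

q, p = monopoly_outcome(a=100, b=1, c=20)
total_profit = (p - 20) * q   # (P - AC) x Q, with AC = MC here (constant costs)
print(q, p, total_profit)     # 40.0 60.0 1600.0
```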
Measures of a monopoly- Look to the price-cost margin; could use spreads (P - MC) or ratios (P/MC).
-Also look to how much profit there is.
--P - AC spread, where AC here means average total cost.
--Average variable cost = total variable cost/total output.
p. 266
2) Possession of Monopoly Power- (** Questions 1 and 2) Monopolist chooses profit-maximizing output Q*
(where MR = MC). Two strong assumptions here are: (1) that the product is homogeneous and thus the market is
given and (2) entry is ignored or blockaded.
-Monopoly won't be an absolute yes or no. It's a matter of degree.
-Again, for monopolies the relevant market should include all firms and products that a hypothetical cartel would
need to control in order to raise the existing price in a permanent way. (see chapter 7)
a) Lerner Index- A simple definition of monopoly power is the ability to set price above marginal cost. One
index (the Lerner Index, "L") to measure this ability divides the price-marginal cost difference by price.
-- L = (P-MC)/P = 1/e, where P=price, MC= marginal cost, and e = elasticity of demand. All
values are measured at the firm's profit-maximizing output.
-If e = 1, L =1
-If e =3, L = 1/3
-If e =.5, L= 2
-Note that the Lerner Index equals the reciprocal of the elasticity of demand. If L=.5, e = 2. This implies
that very large elasticities imply very little monopoly power. Competitive firms have large elasticities.
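The index is a one-liner; a minimal sketch (numbers in the checks are made up):

```python
# Lerner index sketch: L = (P - MC)/P; for a profit maximizer, L = 1/e.
def lerner(p, mc):
    return (p - mc) / p

assert lerner(10, 5) == 0.5              # L = .5  ->  e = 2
assert abs(lerner(3, 2) - 1/3) < 1e-12   # e = 3: strong substitutes, little power
# Caveat: a profit-maximizing firm prices where e >= 1, so L never exceeds 1;
# the "e = .5, L = 2" line is arithmetic only, not an attainable equilibrium.
```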
b) Bain Index- Essentially a profit index. looks at profits to see if you have a monopoly. (** Question 3)
-- B = R - C - D -iV, where R= total annual sales revenue, C = currently incurred costs for materials, wages
and salaries, D = depreciation of capital investment, i = interest rate for capital funds, and V = owners' investment.
-So, the Bain Index measures economic profits, in that it subtracts all costs from revenues, including the
opportunity cost of the owners' investment (iV). Excess profits suggest the existence of monopoly.
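The Bain formula can be sketched the same way (the revenue, cost, and investment figures below are made up for illustration):

```python
# Bain index sketch: B = R - C - D - i*V (economic profit). Numbers are made up.
def bain_index(R, C, D, i, V):
    # Revenue minus current costs, depreciation, and the opportunity cost of the
    # owners' investment (i*V); persistently positive B suggests monopoly power.
    return R - C - D - i * V

B = bain_index(R=1_000_000, C=600_000, D=100_000, i=0.08, V=2_000_000)
print(B)  # 140000.0 in excess (economic) profit
```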
-Schmalensee- how you define the market determines whether or not you have a monopoly.
c) Demand-side substitution- Three types of substitution: existing firms increase output, outside firms convert to
production of relevant products, and entry of new competition.
3) Intent to Monopolize- Hard to distinguish predatory pricing from hard price competition. A monopoly that arises
from efficiency isn't a violation of Sherman, but a monopoly arising from predatory tactics is.
4) Cases- 3 main eras of Sherman sec. 2 interpretation.
a) 1890-1940: Standard Oil and US Steel- In 1911 SO was found guilty of monopolization and was dissolved into
33 companies. SO was built by acquiring 120 rivals and by predatory pricing, foreclosure, and business espionage.
-Sup Ct said monopoly requires two elements: The firm must have monopoly position, and there must be evidence
of intent to acquire the monopoly position (which can be inferred from predatory tactics).
-p. 272 (** Question 4)- Predatory pricing is pricing at a level calculated to exclude from the market an equally or
more efficient competitor. Competition is supposed to exclude less efficient competitors. Pricing to exclude more
efficient competitors is irrational because it implies a loss to the predator, unless the predator's intent was to gain in
the long run by forcing the rivals out.
-Three regions:
P > AC (O.K.)
AC > P > AVC (possibly predatory; not breaking even, but covering all variable costs and contributing
something toward fixed costs)
AVC > P (most predatory: you're not even covering your variable costs, so you're clearly
just doing it to undercut the other guy, trading off short-run losses for long-run gains)
-In figure 9.2, a price below where MC = average total cost (ATC) would drive a firm from the market in
the long run. That is, if a firm believed that price would never cover its ATC, it should exit the industry.
-To drive a rival out even more quickly, set price below where MC = average variable costs (AVC).
-Need to make the rival think the predatory pricing is a long-term strategy and not just a short-term bluff.
-Figure 9.3 illustrates that once the rival leaves, the monopolist will jack up prices and more than recoup its
losses from the predatory pricing. Factors make predatory pricing seem unwise: (1) The predator needs deep
pockets for this to work; (2) If the rival doesn't match the low prices, the losses will get worse b/c the predator will
have to supply for the whole market (its own customers plus those of the rival); (3) A dollar lost to predatory tactics
today is worth more than a dollar gained from monopoly tomorrow; (4) Can't sustain a monopoly unless there are
high barriers to entry.
-Elzinga has shown that an investment in predatory pricing (even with an eventual monopoly that would
last forever) would fail to break even.
-Using predatory pricing to drive out rivals and eventually buy them out rather than bankrupting them makes more
sense. This prevents rivals from reentry, and predatory pricing could convince rivals to sell out cheaply.
-Now, Clayton sec 7 prevents monopoly through acquisition.
-US Steel was challenged in 1911 for monopolization. Controlled 65% of steel market. US Steel didn't use
predatory pricing. Rather, it served as the price leader for the market and set high prices across the market. US Steel's
own market share shrank to 52% b/c of this tactic, and it was found not guilty.
-(**)Main point from these cases is that dominant firms would violate Sherman section 2 only if they engaged in
predatory or aggressive acts toward rivals.
b) 1940-1970: Alcoa and United Shoe- Alcoa was sole producer of aluminum prior to 1940, and in 1945 was found
guilty of monopolization even though it hadn't acted aggressively or predatorily. Alcoa had many aluminum
patents in late 1800s, and used the patent protection and tariffs to develop a monopoly in the US. Made entry less
attractive by limit pricing. The court defined the aluminum market from among 3 alternatives so that Alcoa's share
was 90%.
-Alcoa case signaled change in legal definition of monopolization. Predatory and aggressive acts were no longer
necessary. Simply building capacity ahead of demand could be sufficient to indicate intent to monopolize by a
dominant firm.
-United Shoe came in 1953; its leasing practices were found to be exclusionary, and thus were evidence of illegal
monopolization. Had 75-85% of shoe machinery market. Wouldn't sell, would only lease machines. Restricted
entry by providing free repairs for its machines.
c) 1970- Present: Kodak, cereals, IBM and others- Plenty of cases were brought by DOJ and privately, but none
made it to the Supreme Court.
-Kodak case went to Circuit Court in 1979. A photo-finisher (Berkey) charged monopolization against Kodak,
which had 60-90% share of most segments of photo industry. Kodak intro'd a mini-camera that only took special
film, and Berkey said Kodak should have shared this innovation with its competitors. Court said Kodak had no
duty to pre-disclose info about the innovation. Don't want to discourage innovation.
-The FTC's "joint-monopoly" claim against the highly concentrated cereal oligopoly of Kellogg, General
Mills, and General Foods was dismissed in 1982. Together they had 81% of the market. Kellogg itself had 45%. FTC said they
saturated the market with "brand proliferation" to leave no room for potential rivals to move in. The judge said
introducing new brands was a legitimate form of competition, and there was no evidence of conspiracy.
-IBM case began in 1969, was dropped by gov't in 1982. Gov't said IBM had 70% of market for general purpose
computer systems. IBM said the market was much broader, included sales of components, where IBM had only
50% of market.
-Gov't also said IBM engaged in aggressive practices such as bundling, leasing, tying, etc. IBM used
reconfigurations ("fighting machines") to screw the competing component makers. Many rivals filed private
antitrust suits, but IBM won every one.
5)Predatory Pricing: Proposed Legal Definitions- (Question #5)
Could think of predatory pricing as pricing below average cost (where average cost = (fixed costs + variable
costs)/quantity). AC = (FC + VC)/Q
-Or could think of predatory pricing as pricing below average variable cost (where AVC = Variable
cost/quantity). AVC = VC/Q.
-Areeda and Turner say where P < AVC, it's presumed illegal; where P >= AVC, it's presumed legal.
a) Areeda and Turner proposed a definition of predatory pricing that has been accepted by some judges, though
others call it too permissive. They say price greater than average variable costs is presumed lawful, and price lower
than average variable costs is presumed unlawful.
AVC>P (presumed unlawful)
AC>P>AVC (presumed lawful)
P>AC (presumed lawful)
Definition of predatory pricing is important because it can help lead to charge of monopoly. If the definition is too
permissive, monopoly may be allowed.
-Figure 9.5 demonstrates the definition. Areeda and Turner say that any price below MC will cause the monopolist
to lose money on some units of output, which is consistent with the predatory pricing strategy.
-Also, pricing below short-run marginal cost is well known to be economically inefficient.
-For these reasons, they would classify such price as predatory and therefore illegal.
-However, on the graph, outputs to the right of minimum AC, AC is less than MC. Because prices above
average cost (but below marginal cost) wouldn't exclude equally efficient rivals, Areeda and Turner would allow
such prices.
-Finally, since MC is hard to measure in practice, Areeda and Turner propose to use AVC in place of MC.
-(**) Their conclusion is:
a) A price at or above reasonably anticipated AVC should be conclusively presumed lawful.
b) A price below reasonably anticipated AVC should be conclusively presumed unlawful.
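The Areeda-Turner screen above reduces to a one-line test (AVC standing in for MC; the cost and price numbers in the checks are made up):

```python
# One-line sketch of the Areeda-Turner screen (AVC as the proxy for MC).
def areeda_turner(price, total_variable_cost, quantity):
    avc = total_variable_cost / quantity
    return "presumed lawful" if price >= avc else "presumed unlawful"

assert areeda_turner(price=8, total_variable_cost=60, quantity=10) == "presumed lawful"
assert areeda_turner(price=5, total_variable_cost=60, quantity=10) == "presumed unlawful"
```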
b) ATC Rule- An alternative to Areeda-Turner, proposed by Greer. Under it, pricing can be predatory both when P < AVC and when AVC < P < ATC.
-(**) Greer says illegal pricing would be shown by :
a) Pricing below ATC, plus
b) substantial evidence of predatory intent.
-Evidence of intent could include pricing below AVC and ATC and documents revealing intent
for predatory pricing.
-Bork: says there's never any predatory pricing, so all the above stuff is nonsense.
Microsoft- (**)Be able to write an essay about Microsoft.
Round 1 (1994): DOJ complaint that Microsoft monopolized the market for PC operating systems (90% share).
-Also complained that Microsoft had anti-competitive agreements with PC makers. Microsoft required makers to
pay a Windows royalty for every computer they sold, even if they didn't put Windows on the computer.
-This practice was ended by a consent decree.
Round 2 (1995): DOJ said Microsoft violated the consent decree by bundling Explorer with Windows. MS was
using its operating systems monopoly to build a monopoly in browsers.
-MS won here: it's O.K. to bundle browsers.
Round 3 (1997): Windows has monopoly in operating systems, and now tries to build a monopoly in browsers.
-Network effects: the benefit of using the thing increases as the number of users increases (it's good to use MS Word
because everybody else does too).
-There are externalities here of the use.
-Tipping- The analogy comes from models of racial tipping in housing: once a critical mass shifts, everyone follows. Once
one software package starts to take off (like it gains 52% of the market), everybody switches over to it.
-Software production: there are big fixed costs here, and low marginal costs.
-Average costs decrease as the market increases.
-It becomes more attractive for software makers to write programs for the most popular operating system.
Windows- it has over 90% of the operating system market. It got to be a monopoly (claims Microsoft) because
they're just so good. They're a natural monopoly, just like an electric power company; we wouldn't want to break
up a natural monopoly just for being efficient like Microsoft, would we?
-DOJ doesn't disagree that there are naturally high entry barriers, but they say that Microsoft has used these
naturally high entry barriers to create artificial barriers (e.g. monopolize the browser market to forestall
competition).
Hardball tactics by Microsoft: DOJ says Microsoft has committed numerous anti-competitive actions:
-Told Compaq if they removed Explorer from their computers, they would lose their Windows license.
-Prescriptions: What should Microsoft stop doing? Need to think about what we're trying to accomplish.
Chapter 10: Introduction to Economic Regulation
1)What is Economic Regulation?- In its role as regulator, gov't restricts the choices of agents. Regulation is "a state
imposed limitation on the discretion that may be exercised by individuals or organizations, which is supported by
the threat of sanction." Gov't can't regulate every decision because it's physically impossible for a gov't to perfectly
monitor firms and consumers, so market forces will play a large role regardless of the degree of gov't intervention.
2) Instruments of Regulation- Three main variables are controlled by regs:
a) Control of Price- Price regs may set a price that firms must charge, or restrict firms to setting prices within a
range. Can set maximum price in order to prevent monopoly pricing by monopolist. Can also set minimum price
to avoid predatory pricing.
b) Control of quantity- Can use restrictions on quantity of a product sold either with or without price regs.
Generally set maximum production limits.
c) Control of entry and exit- Price and number of firms are the main targets of regulation. Gov't regulates number
of firms through restrictions on entry and exit. Gov't may control entry by new firms, and also entry by existing
regulated firms into a new industry or new geographic market. Gov't regulates exit from an industry if it wants
services provided to a broader number of customers than would occur in a free market.
d) Control of other variables- Quality of product may be regulated, but it's expensive to do so. Hard to measure
and define quality, unlike price and quantity. Firm investment may also be regulated.
3) Brief History of Economic Regulation- First started in the 1870s, when the state of IL began regulating grain elevator rates.
-Railroad price wars of 1800s gave rise to ICC.
-Only be generally familiar with the regulatory trends and main regulatory agencies, different areas that are
regulated.
-Independent Regulatory Commissions- they don't have to go through OMB review like executive-branch agencies do.
The President appoints the members.
a) Formative Stages- Econ reg. started in 1870s, arose out of Munn v. Illinois and railroad regs.
-Munn case in 1877, Sup Ct said IL could regulate rates for grain elevators. If there's public interest in the
property, the property is subject to regs.
-Interstate Commerce Act of 1887 addressed price discrimination and price instability of railroads. ICC was
formed to regulate rail rates.
-Nebbia v. NY (1934), Sup Ct said a state can adopt any appropriate economic policy to promote the public welfare.
b) Trends in Regulation- Early regs focused on RRs and public utilities. Expanded during 1910s and 1930s.
Deregulation came in 1970s. Vietor says the changes in regs are due to changes in people's perception of how gov't
interacts w/ the economy.
-1930s: Wave of Regulation- Because of Nebbia decision and Depression, regs expanded during 30s. ICC
expanded to cover all interstate transport, not just RRs. FCC was created in 1934 to regulate broadcasting and
long-distance communications. Fed began to regulate electricity and natural gas. FDIC was created to regulate
banks. SEC monitored securities industry.
-1940-1970: Continued Growth of Regulation- Continued steady path of expansion of regulation. Energy and
communications were particularly affected. FCC began regulating cable TV in 1968. Fed Power Commission
took over regulating natural gas, and the price of oil.
-1971-1989: Wave of Deregulation- Airlines were deregulated, and so were RRs, trucking, and passenger buses.
Entry regs for long-distance phones were undone, and cable TV was deregulated at the fed level.
-Current Regulatory Policy- mix of re-regulation and further dereg. Cable regs are required now, but
telecommunications is becoming more and more deregulated. Some want re-reg. of RRs and airplanes.
4) The Regulatory Process
a) Overview of Regulatory Process:
-Stage 1: Legislation- There are three stages in the regulation of an industry. The first stage is when a gov't body
(state, local, or fed) passes a piece of legislation that establishes regulatory powers over a particular industry.
Lobbyists for firms, consumers and workers influence the decisions of legislators.
-Stage 2: Implementation- Next the legislation is implemented. The regulatory agency has the main
responsibility.
-Stage 3: Deregulation- This can be achieved by the legislature, the judiciary, or the agency.
b) Regulatory Legislation
-Selecting the regulatory agency: Legislation decides which agency will have jurisdiction, and may even create an
agency if one's needed.
-Powers of the regulatory agency: Legislation outlines the powers of the agency. Two key powers are control of
price and entry/exit.
-General policy objectives: Legislation will outline policy objectives for the agency to follow.
c) Independent Regulatory Commissions- An IRC at the fed level has appointed members for a fixed term. They
have independence from the executive branch, can only be removed for cause.
-Members of a Regulatory Agency- 3 kinds of members: Careerists want a long-term career in the agency, and want
the agency to grow (they frown on dereg); Politicians see the agency as a stepping stone to a political career;
Professionals identify more with their skills than with the agency, and seek to maintain professional esteem.
-Motivations of the regulatory agency depend on what type of employees they are.
d) Regulatory Procedures- Agencies are generally left with plenty of discretion about how to regulate the industry,
although sometimes the legislative mandate for the agency is very specific.
-Rule-making Process- Two basic approaches for regulatory agencies: (1) case by case approach, individually
considering each proposal; and (2) substantive rule-making, formulating general rule through hearings.
-Delay and strategic manipulation of Regulatory Proceedings- Regulatory procedures are biased toward maintaining
the status quo. Changes can only come through due process, and litigation will often slow down the process.
Agents may also limit the flow of info to the regulators.
e) Important agencies- ICC (1887), FCC (1934), SEC (1934), Fed Power Commission (1935), Fed Energy
Regulatory Commission (1977), Civil Aeronautics Board (1938).
5) The Theory of Regulation- Theories of reg. seek answer to questions "why is there reg.?" and "why limit choices
of agents in a free market economy?". Seek to predict benefits of regs, which industries would be regulated, and
what form the regs would take. There have been three stages in the evolution of regulatory theory.
a) Public Interest Theory/Normative Analysis as a Positive Theory (NPT)
Good gov't theory- independent reg. commissions seek to maximize the public interest, make the public as well off
as possible.
-Normative analysis: what we'd like them to do (maximize welfare, i.e., benefits net of costs).
-Positive theory: how they actually behave.
-In reality, they don't always maximize welfare, they may sacrifice welfare to some extent in following
their legislative mandate. So no one believes good gov't theory.
-Normative Rationale for Regulation: Regulation occurs in industries plagued with market failures. Unrestrained
competition doesn't work well in industries that are natural monopolies or that are plagued by externalities.
-A market is a natural monopoly if, at the socially optimal quantity, industry cost is minimized by having only one
firm produce. They're likely to exist when there's a large fixed-cost component to cost (like public utilities, local
telephone). Fixed costs are large relative to marginal costs, so average cost declines for a wide range of outputs.
-In a natural monopoly, there's a fundamental conflict between productive efficiency (which requires that
only one firm produce, to avoid duplicating fixed costs) and allocative efficiency (which needs enough firms so that
competition drives prices down to marginal cost).
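A minimal numeric sketch of why declining average cost favors a single producer (the fixed cost and MC values here are made up for illustration):

```python
# Why declining AC favors one producer: a large fixed cost spread over output.
def avg_cost(q, fixed_cost=1000.0, mc=2.0):
    return fixed_cost / q + mc

assert avg_cost(10) > avg_cost(100) > avg_cost(1000)   # AC falls with scale

Q = 500
one_firm = avg_cost(Q) * Q                   # total cost with one producer
two_firms = 2 * avg_cost(Q / 2) * (Q / 2)    # same Q split in half duplicates FC
print(one_firm, two_firms)                   # 2000.0 3000.0
```

Splitting the same output between two firms raises total cost here only because of the duplicated fixed cost, which is the productive-efficiency side of the conflict described above.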
-An externality exists when the actions of one agent (Agent A) affects the utility or production function of another
agent (Agent B), and Agent A doesn’t care about Agent B's welfare. Perfect competition won't result in an optimal
allocation of resources.
-They hurt welfare because I'll engage in an activity if it's profitable to me, even if the activity causes a
harm to someone else that's greater than my profit. That's a transaction that shouldn't take place, but in a free
market it will take place, and there will be a net reduction in social welfare.
-Externalities include negative externalities (noise and water pollution), common pool problems (agents
don't take into account how their activity reduces the resource and thus raises the cost of production for other
agents), and positive externalities (these are good, because the activity I engage in raises social welfare by more than
the size of my profit).
-In the case of a natural monopoly, price and entry regs may allow both allocative and productive efficiency. In the
case of externalities, imposing a tax/subsidy on an activity that produces negative/positive externalities can result in
a socially preferred allocation.
-Description of Theory: Understanding when a regulation should occur is normative analysis. Explaining when
regulation does occur is a positive theory. NPT uses normative analysis to generate positive theory by saying that
regulation is supplied in response to the public's demand for the correction of a market failure or for the correction
of highly inequitable practices (generally in natural monopolies, or industries plagued by externalities).
-Public will demand regs to generate welfare gains.
-Critique of NPT: First, it's incomplete, because it doesn't discuss the mechanism through which the public brings
about the regs that increase social welfare. How does the public use net social welfare gains to induce legislators to
pass regulatory schemes and regulators to pursue proper regs?
-Second, there's a large amount of evidence that refutes NPT. Many industries are regulated that aren't natural
monopolies or plagued by externalities. Also, firms often lobby for regulations, which doesn't sit well with NPT
because they wouldn't lobby for something that would help social welfare at the expense of profits. Also, regs in a
natural monopoly may not constrain firm pricing behavior and profits very much.
-Reformulation of NPT: The reformulation says that regs are originally put in place to correct a market failure but
are then mismanaged by the regulatory agency. But this is unsatisfactory because like the original version it only
states a hypothesis rather than developing a hypothesis as a conclusion from a model (it doesn't explain why the reg.
is mismanaged). Also, it still doesn't fit with evidence that regulated industries often aren't natural monopolies or
plagued by externalities.
b) Capture Theory
Stigler said regs aren't out to serve public interest, but rather to serve the interest of the industry that's being
regulated. Industry manipulates the regulation to boost its profits.
-Could use regs to squeeze out competition.
-Genesis of Capture Theory (CT): History shows that regs aren't strongly correlated with the existence of market
failures. Regs actually tend to raise industry profit (by preventing entry or setting too-high max. prices).
-CT states that either regulation is supplied in response to the industry's demand for regulation (legislators are
captured by the industry) or the reg. agency comes to be controlled by the industry over time (regulators are
captured by the industry).
-Critique of CT: CT is better than NPT, but still subject to same criticisms. First, it doesn't explain how regulation
comes to be controlled by the industry (why shouldn't it be controlled by consumer and labor groups as well?).
Second, not all regs add to industry profit, and many regs were passed with the support of small firms over the
objections of large firms. Third, there's plenty of regs that were opposed by the industry and that resulted in lower
industry profits.
c) Economic Theory of Regulation (ET)
(Stigler, Peltzman, Becker)- The world isn't as bad as capture theory says, and not as good as public interest theory
says. I think the regs are made to help individuals in the industry, but firms are better able to use them to their
advantage.
-Uses example of peanut quotas, shows how firms are better able to make use of them. The reg. would
restrict the output that could be sold, in order to prop up prices. The loser would be the consumers (there's a DWL,
and also a transfer from consumers to producers).
-So, there's a concentrated, small # of beneficiaries.
-Evidence shows that regulation isn't strongly associated with the existence of market failure (as in NPT) and isn't
exclusively pro-producer (as in CT). Depending on the industry, regs help different groups.
-Stiglerian Approach: Stigler put forth a set of assumptions and generated predictions about which industries
would be regulated and what form regulation would take as logical implications of these assumptions (unlike NPT
and CT).
-Initial premise is that the basic resource of the state is the power to coerce. An interest group that can convince the
state to use its power of coercion on that group's behalf can improve its well-being.
-Next premise is that agents are rational in the sense of choosing actions that are utility maximizing.
-These two assumptions result in the hypothesis that regulation is supplied in response to demands of
interest groups acting to maximize their income.
-Stigler/Peltzman Model: They laid out factors that determined which groups could control regulation. 3
elements:
1) Regulatory legislation redistributes wealth.
2) The behavior of legislators is driven by their desire to remain in office, so legislation is designed to
maximize political support.
3) Interest groups compete by offering political support in exchange for favorable legislation.
-The result is that regs are biased in favor of groups that are better organized (so they can deliver more
political support) and that benefit more from favorable legislation (so they'll invest more to acquire political
support).
-Regs will benefit small interest groups with strong interests at the expense of large interest groups with
weakly felt interests. The higher the per capita benefit of the reg. for the group's members, the more likely they'll
succeed. Big groups are at a disadvantage in terms of delivering political support b/c of the free rider effect.
-US Peanut Program: An example of small groups benefiting from regs at the cost of large groups is the
peanut-quota system. Gov't regs limit number of domestic peanut farmers, and limit imports. Gov't price supports
ensure that peanut farmers can cover production costs. The result is huge consumer to producer transfer, and DWL.
Average peanut farmer gains $11,100 from the program, and average consumer only loses $1.23.
-Predicting the Type of Industry to be Regulated: Legislators consider which consumers will be helped, which
consumers are hurt, and how much firms are benefited.
-Peltzman model- figure out political support function for legislators ("M").
M(P, Pi) (where P = price and Pi = industry profits).
Increased P = decreased political support (assuming you don't increase Pi).
Increased Pi = increased political support (assuming you don't increase P).
-What you get from this is neat graphs. A graph of political support for different combos of P and Pi.
-Indifference curves for levels of political support, i.e. the set of all (price, profit) combinations
leading to some political support.
p. 332 (** Question 2)- Figure 10.2. A legislator chooses price so as to maximize political support. M(P,Pi) is
assumed to be decreasing in price because consumers increase their political opposition when price is higher while it
is increasing in industry profit because firms respond with greater support.
-Profit depends on price where Pi(P) denotes the profit function. Pi(P) increases in P for all prices less
than the monopoly price, and is decreasing in P for all prices above the monopoly price.
-For P< the monopoly price, if a legislator raises price, he raises consumer opposition, but also raises
industry support. (see figure 10.2)
-The political support indifference curves increase in a Northwest direction. The optimal price for the
legislator, P*, is that which achieves the highest level of political support subject to the constraint that profit equals
Pi(P). P* lies between the competitive price and the monopoly price. So, a legislator won't set a price to
maximize industry profit.
-(**) This suggests which industries are likely to be regulated. If the equilibrium price an industry would
achieve in the absence of regulation is close to the price that would exist under regulation, P*, then regulation is
unlikely. The interest group that would benefit from regulation wouldn't benefit very much because price wouldn't
change much, so it won't invest much to get the industry regulated.
-This also suggests that the industries most likely to be regulated are either relatively competitive or
relatively monopolistic. In both cases, some interest group will gain considerably from regulation.
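The Peltzman logic above (P* lies strictly between the competitive and monopoly prices) can be checked with a small numeric sketch. All functional forms and parameter values here are illustrative assumptions, not from the text: linear demand Q = a - bP, constant unit cost c, and a support function that rises with industry profit and falls with price.

```python
# Hypothetical Peltzman-style sketch (assumed forms and numbers).
# Demand: Q = a - b*P; unit cost c; profit Pi(P) = (P - c)*(a - b*P).
# Political support: M(P, Pi) = beta*Pi - alpha*P (rises with profit,
# falls with the price consumers pay).
a, b, c = 100.0, 1.0, 20.0     # demand intercept/slope, unit cost
alpha, beta = 40.0, 1.0        # weights on consumer opposition / firm support

def profit(P):
    return (P - c) * (a - b * P)

def support(P):
    return beta * profit(P) - alpha * P

# Grid-search the support-maximizing regulated price P*.
prices = [c + i * 0.01 for i in range(int((a / b - c) / 0.01))]
P_star = max(prices, key=support)

P_monopoly = (a + b * c) / (2 * b)  # profit-maximizing price
print(f"competitive price = {c}, P* = {P_star:.2f}, monopoly price = {P_monopoly}")
```

With these assumed numbers P* lands at 40, between the competitive price (20) and the monopoly price (60): the legislator trades off some industry profit for lower consumer opposition, exactly as figure 10.2 suggests.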
-Things to note from the graph (see notes):
(1) We prefer lower prices and higher profits, which means we prefer curves that are
higher in the upper left corner of the graph. Basically, we prefer curves that give us same prices and more profits,
or same profits and lower prices.
(2) This assumes that there will be a curvature.
(3) Tradeoff is not constant, e.g. as you increase prices more and more, you need greater
additional profits to offset price increases. It gets more expensive to gouge consumers the more you gouge them.
-The regulatory price will be set so that the Pi(P) curve (profit as a function of price) touches an
indifference curve that is as far to the upper left as possible.
-Becker Model: Becker focuses on competition between interest groups. He agrees that regulation is used to
increase the welfare of more influential interest groups. Interest groups can raise their welfare by influencing
regulatory policy. The wealth transfer that Group 1 gets depends on both the pressure it exerts on legislators and
regulators, and the pressure exerted by Group 2. The amount of pressure is determined by the number of members
in the group and the amount of resources used.
-An assumption is that aggregate influence is fixed. What's important for measuring the amount of
regulatory activity (measured by the wealth transfer) is the influence of one group relative to the influence of
another.
-A political equilibrium occurs if, given that group 2 applies a certain pressure P2, P1 is the amount of
pressure that maximizes group 1's welfare, and vice versa. It's found at the intersection of the two best response
functions (see figure 10.3, page 335).
-Note that the political equilibrium isn't Pareto optimal: both groups could invest fewer resources and
achieve the same level of relative influence.
-Taxation by Regulation: Cross-subsidization is the use of revenue from the sale of one product to subsidize the
sale of another product. Specifically, the price of one product is set to exceed its average cost while the price of a
second product is set below its average cost. This seems to be in conflict with both profit maximization and welfare
maximization.
-Posner explains that one of the functions of reg. is to assist the gov't in its role of redistributing resources.
-Cross-subsidization is interpreted as a means for redistributing wealth from one group of consumers to a
second group of consumers. Consumers in less densely populated areas tend to be subsidized at the cost of
consumers in more densely populated areas.
-Summary of Results: 4 main predictions based on Stiglerian approach:
1) There's a tendency for regulation to be designed to benefit relatively small groups with strong
preferences over regulation at the cost of relatively large groups with weak preferences over regulation. The
implication is that regulation tends to be pro-producer (firms do better under price regulations).
2) Even if regulation is pro-producer, policy will not be set so as to maximize industry profit.
3) Reg. is most likely in relatively competitive or relatively monopolistic industries because in those
industries reg. will have the biggest impact on some group's well-being.
4) The presence of a market failure makes regulation more likely because the gain to some groups is large
relative to the loss to other groups.
-Critique of ET: Modeling the Regulatory Process: An important assumption of ET is that interest groups directly
influence regulatory policies. In reality, there are numerous actors that influence regulatory policies. ET may
ignore some important aspects of the regulatory process by assuming interest groups adequately control legislators
and legislators adequately control regulators.
-Legislators want to get re-elected, but they also have ideologies that influence their view of regulation.
-Regulators aren't puppets of legislators. Regulators are tough to control b/c doing so requires info and
drafting new legislation.
-ET also ignores the role of the judiciary
-Does the Empirical Evidence Support the Economic Theory of Regulation?: NPT says dereg would occur
when there are changes in cost or demand conditions such that a market failure is either eliminated or sufficiently
reduced so as to make dereg socially optimal. ET would predict dereg when the relative influence of interest
groups that are benefited by regulation is reduced.
-The dereg of the RRs in the late 1970s seems to agree with ET. Regs hurt industry profits, so the industry
used influence to push for dereg.
-Dereg of trucking industry seems inconsistent with ET. Regs boosted industry profits at the time of
dereg.
-ET is valuable, but much empirical evidence is inconsistent with it.
d) Taxicab Regulation (** Question 3)
-Regulatory History: In 1920s, price regs for taxis started. During Depression, also began to restrict entry b/c lots
of people began to drive cabs. This is a good example of ET, because each cab company would gain a lot from the
entry regs, and each consumer would only be harmed a little. Also, easy to organize the relatively small number of
cab companies. Regs still are in place.
-Entry Restrictions: Basically, cities have kept the same quota limit on the number of cabs for 40 years.
-The Value of a Medallion: (**)Perhaps the best method for assessing the value of entry restrictions is to determine
how much a firm is willing to pay in order to operate in the industry. If the price for medallions is positive, then the
number of competitors must be less than the number that would exist under free entry.
-A medallion's price tells us exactly what the most informed agents believe to be the discounted stream of
above-normal profits from economic regulation.
-The price of a medallion in a regulated market must equal the additional profits that can be earned by
operating in a regulated market as opposed to an unregulated market. It's equal to the discounted sum of future
excess profits that are earned by a taxi operating in a regulated market.
-The total market value of taxicab medallions available in a regulated market equals the total
above-normal profits achieved through fare and entry regulation. This helps explain why cabs are still regulated: if they were deregulated, the medallion owners would lose their big medallion investment to free entry.
-In comparison, the value of dereg to each consumer of taxicab services is much lower. Also, the
ownership of medallions is concentrated in a few large cab companies, making for an easily-organized interest
group.
Taxis: there's a limit on quantity b/c they limit the number of taxi medallions. Medallions cost $32,000 in Boston,
and $210,000 in NY. Buyers pay so much for the medallions b/c a medallion gives you a share of an economic rent due to
restricted entry.
Equation: -The value of $1 each year forever at interest rate r = 1/r.
-So if r = 5%, PV = 1/.05 = 20
-Problem #10, 11, 12.
10) Taxi cab medallion had value of 210,000, interest rate of 6%.
-210,000 is the PV of the future stream of above-normal profits.
-- 210,000 = annual above-normal profits/.06
-- 210,000 x .06 = 12,600
11) -If you gained 12,600 a year, would you gain above-normal profit? No- after paying 210,000 for the medallion, the 12,600 is just a normal 6% return on that investment.
12) 10,000 annual above-normal profits at 5%. Medallion price = 10,000/.05 = 200,000.
-If gov't says they'll allow free entry in two years, how will the medallion price change, assuming you're
paying the 10,000 at the end of each year? It's like an annuity for 2 years, so the 2 year value would be:
(10,000/1.05) + (10,000/1.05^2) = 2 year value
change in value = 200,000 - 2 year value.
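The medallion arithmetic in problems 10 and 12 can be worked through directly; this just applies the perpetuity and two-year annuity formulas from the notes above.

```python
# Present-value arithmetic behind problems 10 and 12.
# Problem 10: medallion worth 210,000 at r = 6% implies annual
# above-normal profits of 210,000 * 0.06.
r = 0.06
medallion = 210_000
annual_rent = medallion * r                          # 12,600 per year

# Problem 12: 10,000/yr of excess profit at 5%; free entry allowed in
# two years, so only two year-end payments of rent remain.
r2, rent2 = 0.05, 10_000
perpetuity_value = rent2 / r2                        # 200,000
two_year_value = rent2/(1 + r2) + rent2/(1 + r2)**2  # ~18,594.10
drop_in_value = perpetuity_value - two_year_value    # ~181,405.90
print(annual_rent, round(two_year_value, 2), round(drop_in_value, 2))
```

So announcing free entry in two years wipes out roughly 91% of the medallion's value, which is why medallion owners lobby so hard against deregulation.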
Chapter 11: Theory of Natural Monopoly
1)Introduction- The most important and widely accepted argument for economic regulation is natural monopoly.
This chapter focuses on the natural monopoly market failure argument: Single firm is cost minimizing in a natural
monopoly. It's better to have one firm engage in the industry than to try to divide it up among several firms.
2) The Natural Monopoly Problem- An industry is a natural monopoly if the production of a particular good or
service by a single firm minimizes cost. Typical example is production of a single product where long-run average
cost (LRAC) declines for all outputs. Since LRAC is declining, long-run marginal cost (LRMC) always lies below
it. Long run AC and long run MC decline with quantity of the product produced.
a) Permanent and Temporary Natural Monopoly: There's a distinction between permanent and temporary natural
monopoly.
-For permanent NM, LRAC falls continuously as output increases. No matter how large market demand
is, a single firm can produce it at least cost.
-For temporary natural monopoly, LRAC declines up to a certain quantity, and then becomes constant. So,
as demand grows, a temporary natural monopoly may give way to a competitive market. An example is inter-city
telephone service, where scale economies are eventually exhausted as volume grows.
b) Sub-additivity and Multi-product Monopoly: In reality, a single-commodity producer is rare (e.g. electric
utilities supplying both high and low voltage). Multiple product natural monopoly is more realistic, but more
complicated.
-(** Question #1) The definition of natural monopoly is that the cost function is sub-additive. Look at figure 11.3
on page 354. On the curve AC, average cost declines until quantity reaches Q*, and it increases after that. So, an
economy of scale exists at all outputs less than Q*, and a diseconomy of scale exists at all outputs greater than Q*
- Sub-additivity refers to whether it is cheaper to have one firm produce total industry output, or whether
additional firms would yield lower total cost. For outputs less than Q*, one firm is the least-cost solution, and
therefore cost is sub-additive for that range of outputs
-To determine the least cost solution for outputs greater than Q*, add a minimum average cost function for
two firms, AC2 (figure 11.4, page 355). For any given point on the AC curve, double the output rate to find a
corresponding point on the AC2 curve. The minimum point on AC2 will equal 2Q*.
-The intersection of AC and AC2 defines the boundary of sub-additivity. For quantities less than the quantity at the
intersection point, one firm yields the least-cost solution, so cost is sub-additive over that range.
-When we turn to multiple-product NMs, the NM is still defined by sub-additivity. In the multiple-output case,
economies of scale are neither necessary nor sufficient for costs to be sub-additive. There would be an economy of
scale if the total cost of producing an X% greater quantity of each commodity increased by an amount less than X%.
-Economies of scale aren't necessary or sufficient for sub-additivity in this case because the
interdependence among outputs becomes important. Economies of scope are a good way to measure these
interdependencies.
-Economy of scale- the bigger you are, the more efficient you are at producing something.
-Cost function is "sub-additive".
-Cheaper to have one firm produce the industry output:
-- c(Q1) + c(Q2) > c(Q2 + Q1). In other words, it's more expensive for two firms to produce the
industry output than for one firm to do it.
-Economies of scope- means it's cheaper to produce two products within a single firm than it is for specialty firms
to produce the required outputs. cheaper to produce X cars and Y trucks at one plant than at two plants.
-Note that AT&T did most of the funding for the studies that found economies of scope.
-The problem for regulators is that even though firms with economies of scale and scope are efficient, they're still monopolies.
-Sharkey's example of a cost function that possesses economies of scale for all outputs, but which is
nowhere sub-additive, is: C(Q1, Q2) = Q1 + Q2 + (Q1*Q2)^(1/3)
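Sharkey's cost function can be checked numerically: along any ray, average cost falls as scale grows (economies of scale), yet two single-product firms are always cheaper than one joint producer (no sub-additivity). The output levels chosen below are arbitrary test values.

```python
# Numeric check of Sharkey's example: C(Q1, Q2) = Q1 + Q2 + (Q1*Q2)^(1/3).
def C(q1, q2):
    return q1 + q2 + (q1 * q2) ** (1 / 3)

q1, q2 = 4.0, 9.0
joint = C(q1, q2)            # one firm producing both goods
split = C(q1, 0) + C(0, q2)  # two single-product "specialty" firms
print(joint > split)         # True: cost is NOT sub-additive here

# Ray economies of scale: average cost C(t*q1, t*q2)/t falls as t rises.
ac = [C(t * q1, t * q2) / t for t in (1, 2, 4)]
print(ac[0] > ac[1] > ac[2])  # True: scale economies along the ray
```

This is exactly the point in the summary above: the joint term (Q1*Q2)^(1/3) is a diseconomy of scope that outweighs the scale economies, so scale economies alone don't imply natural monopoly with multiple products.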
-In summary, the definition of NM in a multiple-output case is that the cost function must be sub-additive (meaning
that the production of all combinations of outputs is accomplished at least cost by a single firm.) This depends on
both economies of scale and economies of scope. Economies of scale alone, however, can be outweighed by
diseconomies of scope. So, although economies of scale in the single-product case imply natural monopoly, this
doesn't hold true for the multiple-product case.
3) Alternative Policy Solutions- There are various possible solutions to the problem of NMs. Doing nothing,
bidding for monopoly rights, regulation, public enterprise.
-Ideal Pricing- the firm is to be operated in the public interest, and the only issue is what prices produce economic
efficiency. One possible choice for the efficient price is marginal cost.
p. 359 (** Question #2) A monopolist that charges MC for each product practices linear (or uniform) marginal
cost pricing. In other words, a customer's expenditure for a product is a linear function of the quantity purchased,
PQ.
-On the other hand, if a firm charges a fixed fee, F, regardless of amount bought, and also a per-unit charge
P, nonlinear (or non-uniform) pricing would be in effect. Then the customer's expenditure would be a nonlinear
function, F + PQ.
Examples- Figure 11.6. What would monopoly do on its own?
-There are two steps: 1) Monopolist sets monopoly quantity where MR = MC; 2) Monopolist sets monopoly price at
the point where demand curve corresponds to the monopoly quantity.
-Ideally, we want the monopolist to set quantity where price/demand = MC.
-How do we do this? Look at options below.
a) Linear Marginal Cost Pricing- (** Question 2). Figure 11.6 shows a single-product natural monopolist with
decreasing average costs over the relevant output range. Regulators set price where MC = demand. However, that
means the firm will lose money, because MC will be below AC, so he can't cover all his costs.
-There will be a loss here, equal to the region between the MC and AC prices on the graph.
-Could try to cover the monopoly's losses with a subsidy. Is this good? Not really, and this proposal isn't really
used. Could set price and quantity where demand intersects MC. However, the firm wouldn't be able to cover
average costs because price would be less than average cost, and so the firm would need a subsidy to operate.
-The gov't would have to provide the subsidy. Best to use a lump-sum tax that wouldn't distort other
decisions throughout the economy, but those taxes aren't used very much. Income taxes and sales taxes aren't good
because they create inefficiencies and distortions.
-Arguments against using a lump-sum tax to pay for the subsidy:
a) If total costs aren't covered by consumer expenditures, it's possible that total consumer benefits (given by
the area under the demand curve) are less than total costs, which means the good shouldn't be produced at all.
b) Because management knows losses will be subsidized, the incentive and capacity to control costs is
weakened.
c) On distributional grounds, it can be argued that non-buyers of the natural monopoly good should not be
required to subsidize the marginal cost buyers.
Issues: 1) Bad incentives if we always reimburse the monopoly's subsidy.
2) Why should all taxpayers pay for monopoly's product even if they don't use it?
3) Subsidy could potentially exceed consumer surplus. So, we're not really better off using the subsidy
-The overall point is that enterprises should price so that their revenues cover costs. It's also unrealistic to expect
gov't to subsidize private firms.
b) Non-linear Pricing - Charging people a fee + price per unit. Like charging an entry fee, and then charging for
use.
-If the firm is losing k dollars, could charge n customers k/n dollars as an entry fee.
Issues: 1) The entry fee may screen some people out.
2) When charging a fee per customer, people could join together to pay the entry fee as "group customers".
(** Question 3, page 362) A two part-tariff is nonlinear and consists of a fixed amount or fee, regardless of
consumption, plus a price per unit. If the price per unit equals marginal cost, then it's possible to have efficient
pricing and have total revenues of the firm equal to its total costs.
-For example, if the loss under linear marginal cost pricing is estimated to be K, the fixed fee of the two
part tariff could be set so that the sum over all customers equals K. There are various ways for this to be true. The
simplest is to set the fixed fee equal to K/N, where N equals the number of consumers.
- There are problems with the non-discriminatory two-part tariff. Because consumers will vary in terms of their
demand for the good, it's possible that some customers will be driven from the market if K/N exceeds their
consumer surpluses at price equal to marginal cost.
-Efficiency losses will occur if these excluded consumers would have been willing to pay the marginal cost.
-This is more a problem for luxuries than for necessities.
-Another possible problem is that in some markets it may not be feasible to enforce a fixed fee for the right to buy at
a price per unit. Consumers would have an incentive to buy in a group, to only pay one fee.
-The way to avoid excluding consumers is to charge different fixed fees to different consumers, or classes of
consumers. Basically, discriminatory two-part tariffs could tailor the fixed fees to the consumers'
willingness-to-pay where the sum of the fixed fees should add up to K. This solution might be best from an
efficiency perspective, but it might be illegal to discriminate.
-If all consumers must legally be charged the same fixed fee, it will still be more efficient to use a two-part tariff than to use linear pricing. This is true because by using a fixed fee to make a contribution to revenues,
the price per unit can be lowered toward marginal cost, thereby reducing DWL.
-The optimal two-part tariff will generally involve a price per unit that exceeds marginal cost and a fixed fee that
excludes some consumers from the market.
-Like with phone company charging per month fee, and 10 cents for 0 - 100 calls, 5 cents for 100-200 calls,
and free calls above 200 a month.
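The exclusion problem with a uniform K/N fee can be sketched with a toy example. The loss K and the individual consumer surpluses below are invented numbers, purely for illustration.

```python
# Illustrative two-part-tariff sketch (all numbers are assumptions).
# At P = MC, consumer i enjoys surplus s_i; a uniform fixed fee of K/N
# drives out anyone whose surplus is below the fee.
K = 1200.0                                  # loss under MC pricing
surpluses = [50, 80, 120, 200, 350, 400]    # hypothetical consumer surpluses
N = len(surpluses)
fee = K / N                                 # 200 per customer
stay = [s for s in surpluses if s >= fee]
print(fee, len(stay))                       # 200.0, 3 consumers remain
```

Note the spiral this creates: once the three low-surplus consumers exit, the fee must rise to K/3 = 400 to still cover the loss, which excludes even more consumers. Discriminatory fixed fees tailored to willingness-to-pay avoid this, which is why they're the efficient (if possibly illegal) solution mentioned above.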
c) Ramsey Pricing- Viscusi likes this. Linear prices that minimize the deadweight social losses subject to the
constraint that total revenue = total cost. Helps determine which products we should tax and to what extent.
Again, we want to minimize deadweight social loss. It seems that if we're choosing between raising price
on two goods, we want to raise the price on the one that would lead to the smaller deadweight loss triangle.
-We do this by raising the price on the product with the inelastic (steeper) demand curve.
Ramsey came up with a pricing and taxing method applicable to a multi-product natural monopolist that would
generate losses if linear marginal cost were used as price.
-Ramsey prices are those linear prices that satisfy the total-revenues-equal-total-cost constraint and minimize the
DWL. Ramsey prices are linear prices (one for each product) so that rules out multi-part tariffs.
-(**) Ramsey Pricing Rule: rule that gives the prices that minimize the DWL is to raise prices in inverse proportion
to demand elasticities:
-- (Pi - MCi)/Pi = L/ei
-Where Pi = price of good i, MCi = MC of good i, L(lambda) = some constant, ei = elasticity of
demand for good i.
-Sample problem: 2 products Q1 and Q2 with the same elasticity of demand = -1, but different MC. MC1 = 10, MC2 = 20. What is the ratio of the optimal Ramsey prices?
-Q1 might be electricity for consumers, and Q2 might be electricity for businesses. You want to hit the
party with higher prices that has the more inelastic demand.
-- (P1 - 10)/P1 = L/(-1), and (P2 - 20)/P2 = L/(-1), so (P1 - 10)/P1 = (P2 - 20)/P2,
P1P2 - 10P2 = P1P2 - 20P1, 10P2 = 20P1, P2 = 2P1, P1/P2 = 1/2
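The sample problem can also be verified numerically. Since both goods have the same elasticity, the Ramsey rule forces equal Lerner indexes, so the price ratio equals the MC ratio regardless of what lambda turns out to be. The particular Lerner index value below (0.5) is an assumption, as is the use of |e| so markups come out positive.

```python
# Ramsey sample problem: equal elasticities (|e| = 1), MC1 = 10, MC2 = 20.
# Rule: (P - MC)/P = L/|e|, a common Lerner index for both goods.
MC1, MC2 = 10.0, 20.0
lerner = 0.5                 # assumed common value of L/|e|
# Solve (P - MC)/P = lerner  =>  P = MC / (1 - lerner)
P1 = MC1 / (1 - lerner)      # 20.0
P2 = MC2 / (1 - lerner)      # 40.0
print(P1, P2, P1 / P2)       # ratio = 0.5, matching P1/P2 = 1/2 above
```

Trying any other value of `lerner` between 0 and 1 gives the same ratio of 1/2, which is the point of the algebra above: the revenue constraint pins down lambda, but the relative prices come from elasticities and marginal costs alone.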
-(**)We could re-express the Ramsey Rule another way: Cut all goods' output by the same proportion until total
revenues = total costs.
-Need a sense of what marginal costs are in order to do the Ramsey equation.
(** Question #4)= see figure 11.10 on page 366.
d) Loeb-Magat Proposal - The above pricing schemes could be required by regulators if they had perfect info about
the monopolist's costs and demands. But that's not the case.
-LM proposal assumes that the monopolist knows cost and demand info perfectly, but the regulator only knows demand.
It concerns an asymmetry of information. The monopolist's objective is to maximize profit, so the LM scheme is built
around that objective and the asymmetry of info (the monopolist would otherwise have an incentive to overstate costs).
Give the monopolist the consumer surplus its price generates. That gives it an incentive to maximize total surplus
(profit plus consumer surplus). Normally it would act to maximize profit alone, setting quantity where MR = MC.
-Monopolist captures all the consumer surplus it generates.
-The society will have great efficiency gains (equal to consumer surplus + profits)
-This system works great. Why don't we do this? It's very expensive, taxpayers would basically pay the entire
consumer surplus to the monopolist. TPs break even, monopoly gets a really sweet deal.
-Ways around the problem: Franchise out the deal. We'll pay the monopolist the consumer surplus forever, but we
have to auction off the right to have the monopoly.
p. 368 (** Question 5) -See figure 11.11. Monopolist has average cost AC and demand curve AR. Total cost
function is K + vX. So, MC is constant and equal to v.
-The LM proposal is to allow the monopolist to choose its own price- this differs from the usual practice of
the regulatory agency setting the price. However, they propose to have the agency subsidize the firm by an amount
equal to consumer surplus at the selected price.
-Basically, the monopolist gets to keep the consumer surplus its prices create. So, the monopolist has
incentive to maximize surplus. To deal with the possible equity problem here, you could use a franchise bidding
scheme (or a tax scheme) to recover some of the subsidy for the general treasury. But the total subsidy isn't
recovered. There will be a net subsidy remaining equal to fixed cost (K).
-Info problems about the demand curve and the existence of subsidy make it an unlikely substitute for the present
regulatory process. But it is a useful way to think about incentives for monopolists.
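The Loeb-Magat incentive can be demonstrated with a numeric sketch. The linear demand curve and all parameter values are assumptions chosen for illustration; the mechanism itself (firm keeps profit plus a subsidy equal to consumer surplus at its chosen price) is the one described above.

```python
# Loeb-Magat sketch (assumed linear demand Q = a - b*P, cost K + v*X).
# The firm's payoff is profit PLUS consumer surplus at its chosen price,
# so it maximizes total surplus and voluntarily picks P = MC = v.
a, b, v, K = 100.0, 1.0, 20.0, 500.0

def payoff(P):
    Q = max(a - b * P, 0.0)
    profit = (P - v) * Q - K
    consumer_surplus = 0.5 * Q * (a / b - P)  # triangle under linear demand
    return profit + consumer_surplus

prices = [i * 0.01 for i in range(int(a / b * 100))]
P_choice = max(prices, key=payoff)
print(round(P_choice, 2))   # ~20.0: the firm prices at marginal cost
```

Without the subsidy, the same firm would maximize profit alone and choose the monopoly price (60 here), so the sketch shows why the scheme delivers efficiency even though the regulator never learns the firm's costs. It also shows the equity problem: the taxpayer-funded subsidy at P = 20 is the entire consumer surplus.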
Chapter 12: Natural Monopoly Regulation
1) Intro.- This chapter looks at regulation, esp. electric power. Need to keep in mind the costs and benefits of regs.
The benefit is reducing the DWL inefficiency that would exist under unregulated monopoly. The costs include the
direct costs of regulatory agencies, and the unintended side effects of reg. (esp. higher costs because of changed
incentive structures of regulated firms).
-Early on, common form of reg. was granting exclusive franchise by communities. Then state regulatory
commissions took over in 1907. There are also fed reg. agencies.
-Reg. commission usually appointed by Gov. or Pres. 3-12 members with large staffs. Commissions usually focus
on prices charged. Rates are set in civil rate cases.
2) The Rate Case (** Question 1, page 378)- Company initiates rate case by seeking increase in allowed prices.
--Expenses: fuel, wages and salaries, taxes, depreciation.
-Regulator monitors poorly, no idea if the firm is efficient.
-- Rate base = (the amount paid for plant and equipment) minus (depreciation).
-Think about: If we're regulating firms based on their rate of return, won't that screw up their incentives?
Select a "test period" to study the company's rate of return.
Typical rate case focuses on the sum of p(i) x q(i) for i = 1 to n.
(**) -This accounting equation describes the process (equation 12.1):
-- SUM(i=1 to n) p(i)q(i) = p(1)q(1) + p(2)q(2) + . . . + p(n)q(n) = Expenses + s(RB), where p(i) = price of the i'th
service, q(i) = quantity of the i'th service, n = number of services, s = allowed or "fair" rate of return, and RB = the rate
base, a measure of the value of the regulated firm's investment.
- The main idea is that the company's revenues must just equal its costs, so that economic profit is zero.
Note that economically efficient prices aren't required by the equation, only prices that cover total costs.
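The revenue requirement in equation 12.1 is simple arithmetic; here is a toy rate case with invented numbers to make the mechanics concrete.

```python
# Toy rate-case arithmetic for equation 12.1 (all figures hypothetical).
# Required revenue = Expenses + s * RB; the commission then adjusts the
# p(i) so that SUM p(i)*q(i) hits the revenue requirement.
expenses = 80_000_000.0    # fuel, wages and salaries, taxes, depreciation
rate_base = 50_000_000.0   # amount paid for plant and equipment, less depreciation
s = 0.09                   # allowed "fair" rate of return
required_revenue = expenses + s * rate_base
print(required_revenue)    # 84,500,000
```

Any set of prices covering 84.5 million satisfies the equation, which is why the rate-structure problem (how to split the burden across the p(i)) is a separate question from the rate-level problem.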
-Natural monopoly is a two-part problem:
a) The rate-level problem- concerned primarily with finding "s" so that the company will have the
appropriate level of earnings on its investment (or rate base).
b) The rate-structure problem- deals with issues of price discrim among customer classes and products,
that is, the p(i) on the left side of the accounting equation.
-In a rate case, company will try to show that at the current prices its rate of return on its rate base for the test period
is too low. It will argue that its true cost of capital is such that it needs a higher return in order "to continue to
attract capital." (Basically, it argues that the prices are too low).
-Assuming the commission chooses an "s" value that is higher than the current "s", the price on the left
hand side of the equation will be adjusted to yield the new rate of return.
Regulatory Lag: Once prices are fixed, the company has an incentive to be cost efficient, because it can earn a higher
rate of return than is formally allowed if it reduces costs before the commission readjusts prices. If the commission
were able to continuously adjust prices to keep the company's rate of return always equal to "s", there would be no
lag and thus no incentive for cost efficiency.
-Note that the rate of return equals net operating income divided by rate base.
3) The Rate Level- Rate-level problem is concerned with right side of equation 12.1, what are legit expenses of firm,
including required return on investment. Expenses (fuel costs, wages, salaries, taxes, depreciation) usually account
for 80% of firm's total costs, and the remainder is return on investment.
a)Rate Base Valuation- Recently, some commissions have been tough on whether to allow certain investments into
rate base. This and regulatory lag lead firms to be cost-efficient.
-In rate cases, the main concern is what the proper return to investment should be (basically, what should be the values
of s and RB in equation 12.1).
- "k" is the cost of equity capital/the cost of common stock, and equals the percent cost of the common stock. "s" is
the weighted total cost of bonds, preferred stocks, and common stocks (see page 383).
b) Cost of Equity Capital- (** Question 2, page 383)- The cost of equity capital = k. The best way to measure it is
the discounted cash-flow method. This is represented in equation 12.2:
-Cost of equity capital to the firm using discounted cash flows.
-Suppose firm pays dividends D(i) at end of each year i.
-Price of Stock = [D(1)/(1+k)] + [D(2)/(1+k)^2] + . . . + [D(n)/(1+k)^n]
-where k is the discount rate.
-Problem: Suppose the price of the stock is 1000 and dividends D(i) = 100 are paid out at the end of each year, forever. What is k?
-- 1000 = 100/(1+k) + 100/(1+k)^2 + . . .
-- 1000 = 100[1/(1+k) + 1/(1+k)^2 + . . .] (the bracketed sum equals 1/k, according to the Taxi Medallion
Formula.)
-- 1000 = 100(1/k), 10 = 1/k, k = 1/10 = .1 = 10%
-Basically, k is the discount rate used by investors, the rate of return investors can obtain on their next best
opportunity at the same degree of riskiness.
-If investors expect dividends to grow at some constant rate "g", where g is less than k, equation 12.2 can be solved
for the unknown k:
k = [D(1)/P] + g
-This is tough, though, because how do we measure D, and how do we choose g?
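The DCF arithmetic above is just two formulas; the growth rate g used below is an assumed value, since (as the notes say) choosing g is the hard part in practice.

```python
# Discounted cash-flow arithmetic for the cost of equity k (equation 12.2).
# No-growth case: price = D/k, so a $1000 stock paying $100 forever
# implies k = 0.10, as in the problem above.
P, D = 1000.0, 100.0
k_no_growth = D / P          # 0.10

# Constant-growth (Gordon) case: k = D(1)/P + g, with g an assumed value.
g = 0.03
k_with_growth = D / P + g    # 0.13
print(k_no_growth, k_with_growth)
```

Notice how sensitive k is to g: every percentage point of assumed dividend growth adds a full point to the estimated cost of equity, which is exactly why commissions and companies fight over g in rate cases.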
-The value of k will depend on the riskiness of the firm as perceived by investors. To the extent that uncertainty of
return for the firm depends on the behavior of the regulatory agency (like setting rates), the cost of equity will be
higher than necessary because investors will require a risk premium.
-Commissions can set s below the cost of capital in order to penalize firms for inefficient behavior. Firm could still
raise new capital by borrowing, but that would lower the stock price and damage the owners. This would
encourage efficient behavior.
c) The Sliding Scale Problem- Joskow and Schmalensee came up with the sliding scale plan for incentive
regulation of electric utilities. We want to set the S for the regulated natural monopoly in order to reward
efficiency.
-The theory here is not very good, but it's the JS theory. They propose a sliding scale that encourages the firm
to innovate, but doesn't let the firm keep all of the savings from innovation.
-The theory says to adjust prices so that the actual rate of return, r(a), at the new prices would be:
r(a) = r(t) + h(r* - r(t)).
-- where r(a) is the actual rate of return at the new prices, r(t) equals the actual rate of return at the prices
that prevail initially in year t, h is a constant between 0 and 1 governing how much of any deviation from the target
is passed through to consumers (the firm keeps the fraction 1 - h), and r* equals the target rate of return.
-Special cases:
-If h = 1, regulation is cost-plus, and prices are always adjusted to guarantee the firm a return of r*. The
firm wouldn't benefit from being efficient, and wouldn't be hurt by being inefficient.
-If h = 0, regulation is fixed-price, and all gains from efficiency accrue to the firm, and all cost increases
beyond management's control also affect the firm alone. Probably best to have h somewhere in the middle.
-- h between 0 and 1, sharing of gains.
-How should we set h? The theory doesn't say.
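-The special cases can be sketched directly from the formula (the returns r(t) and r* below are hypothetical):

```python
# Sketch of the sliding-scale adjustment r(a) = r(t) + h*(r* - r(t)).
# The rates of return used here are hypothetical illustration values.

def sliding_scale(r_t, r_star, h):
    """Actual rate of return after the price adjustment; h is between 0 and 1."""
    return r_t + h * (r_star - r_t)

r_t, r_star = 0.14, 0.10  # firm earned 14%; target return is 10%

print(round(sliding_scale(r_t, r_star, 1.0), 4))  # 0.1  -> cost-plus: firm pushed back to r*
print(round(sliding_scale(r_t, r_star, 0.0), 4))  # 0.14 -> fixed-price: firm keeps all gains
print(round(sliding_scale(r_t, r_star, 0.5), 4))  # 0.12 -> gains shared
```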
d) Price caps and performance standards- FCC uses price caps sometimes instead of rate of return regs. Sets price
cap so that the phone company can raise its prices at the rate of inflation minus some amount selected to reflect
expected productivity. Provides incentive for regulated firms to be cost efficient. Builds regulatory lag into the
process. (** this might be question 3 too)
e) -The Averch-Johnson Effect:
-Firms subject to rate of return regulation will choose too much capital and will operate at inefficiently
high cost.
Averch Johnson effect- (** Questions 3, 4, page 387) Rate of return regs can create perverse incentives. Averch
and Johnson found that firms under rate of return regs choose too much capital relative to other inputs. As a result,
the output would be produced at an inefficiently high cost. The key idea is that because allowed profit varies
directly with the rate base (capital), the firm will tend to substitute too much capital for other inputs. The firm's
problem is to choose the quantities of labor and capital that maximize profit (revenue minus the cost of the inputs,
labor and capital), subject to the allowed rate-of-return constraint.
-Need to maximize Pi = R(K,L) - wL - rK, subject to
[R(K,L) - wL]/K = s, where Pi = profit, R = revenue function, K = quantity of capital, L = quantity of labor,
w = wage rate, r = cost of capital, s = allowed rate of return.
-Rearranging the constraint: Revenue = sK + wL, where s = allowed rate of return, K = capital stock, w = wage
rate, and L = labor.
-- sK = Revenue - wL,
-- s = (Revenue - wL)/K
-The regulated firm will use too much capital and too little labor as compared to the least-cost solution.
The excess cost can be measured in units of labor by the distance between the two demand curves where they
intersect the vertical (labor) axis.
-In other words, the regulated firm perceives that its cost of capital (r - delta) is less than the true cost (r).
If s = 10% and r = 8%, the regulated firm can earn a bonus of 2% on each dollar of new capital (costing 8%)
because it's allowed to earn 10%. The bonus of 2% can be interpreted roughly as a 2% discount, making the
perceived cost of capital (r - delta) only 6%. Take a look at figure 12.1 on page 389 for a graph.
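-The bonus arithmetic above can be sketched as:

```python
# Sketch of the Averch-Johnson "bonus": when the allowed return s exceeds the
# true cost of capital r, each dollar of rate base earns s - r extra, so the
# firm's perceived cost of capital is roughly r - (s - r). Figures from the text.

s, r = 0.10, 0.08            # allowed rate of return and true cost of capital
bonus = s - r                # 2% earned on each dollar of new capital
perceived_cost = r - bonus   # the 2% bonus acts like a 2% discount: 8% - 2% = 6%

print(round(bonus, 4))           # 0.02
print(round(perceived_cost, 4))  # 0.06
```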
4) Rate Structure- Rate structure has to do with how prices vary across customer classes and products.
a) Fully Distributed Cost pricing- It says don't bother with Ramsey pricing, just go by units of the product sold.
Viscusi says that's bad b/c we'd like to vary prices based on the elasticity of demand of the product in question.
-FDC prices are meant to cover each customer class's variable costs plus an allocated share of the common fixed
costs.
- (** Question 5, page 392)- Begins by allocating all of the utility's costs to various customer classes and services,
because the utility provides a variety of services to different customer groups.
-Consider a two-product natural monopolist that sells electricity to two classes of customers. Electricity sold to
residential buyers is denoted by X, and electricity sold to industrial customers is denoted by Y.
-Producing X alone: C(X) = 700 + 20X
-Producing Y alone: C(Y) = 600 +20Y
-Producing both: C(XY) = 1050 + 20X +20Y
-Note that the joint production of X and Y is sub-additive (it costs less because of lower fixed costs when
produced together). Need to divide the fixed costs between consumers.
-Assume it's reasonable to allocate 75% of common fixed costs to the X consumers, and 25% to the Y
consumers. FDC average costs would be: AC(X) = 787.5/X + 20 and AC(Y) = 262.5/Y + 20.
-Assuming demand is: P(X) = 100 - X and P(Y) = 60 -.5Y, set P(X) = AC(X) and P(Y) = AC(Y). The
result is P(X) = AC(X) = $31.5, X = 68.5, and P(Y) = AC(Y) = $23.6, Y = 72.8.
-Thus, the FDC prices satisfy the requirement that total revenues equal total costs. But these prices may
not be economically efficient and may lead to DWLs. The efficient prices here will be the Ramsey prices (the
prices that minimize DWL, see Chap 11).
-Also, no clear way to set what the reasonable allocation of fixed costs is.
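-The FDC prices in the example can be checked numerically by solving P = AC for each customer class (using the cost and demand functions given above):

```python
# Sketch: solve P(X) = AC(X) and P(Y) = AC(Y) for the FDC example in the text.
# Demands: P(X) = 100 - X, P(Y) = 60 - 0.5Y.
# FDC average costs: AC(X) = 787.5/X + 20, AC(Y) = 262.5/Y + 20.
import math

def break_even_quantity(a, b, fixed, mc=20.0):
    """Largest Q solving a - b*Q = fixed/Q + mc, i.e. b*Q^2 - (a - mc)*Q + fixed = 0."""
    disc = (a - mc) ** 2 - 4 * b * fixed
    return ((a - mc) + math.sqrt(disc)) / (2 * b)

X = break_even_quantity(100, 1.0, 787.5)  # residential class
Y = break_even_quantity(60, 0.5, 262.5)   # industrial class

print(round(X, 1), round(100 - X, 1))       # 68.5 31.5
print(round(Y, 1), round(60 - 0.5 * Y, 1))  # 72.8 23.6
```

-At these prices each class just breaks even, so total revenue equals total cost; but, as the notes say, these are generally not the DWL-minimizing Ramsey prices.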
Judging if rates are fair: Criteria for judging whether rates are unfair:
a) Are you charged more than your stand-alone average cost? (This means charged above all fixed costs plus
your variable cost.)
-If you're the consumer, and you're charged more than it would cost to serve you alone (i.e., more than
you'd pay if the other customer classes didn't exist), then you're obviously getting screwed.
b) Are you charged more than your average incremental costs of output?
5) Peak Load Pricing- Like pricing for electricity in the Summer when demand is high. We want people to adjust
their habits when demand is high, so that there won't be too much of a burden on the production system.
- (** Question 5)- Used in electric utilities. Basically vary prices by time of use. More electricity is demanded
during the day, so the marginal cost of electricity is higher in the day. Keep in mind that electricity can't be stored,
so sufficient capacity must be on hand to supply the demand at all times. See figure 12.4 on page 398 for graph of
electricity demand that starts flat and then shoots upward.
-There will be two demand curves, one for the price during peak cost periods, and one for off-peak periods. (See
figure 12.7 on page 402)
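-One way to sketch the peak-load idea (a simple two-period model with hypothetical numbers, not the book's exact setup): off-peak users pay only the marginal operating cost, while peak users also pay the marginal cost of capacity, since peak demand determines how much capacity must be built.

```python
# Sketch: two-period peak-load pricing with linear demands (hypothetical numbers).
# b = marginal operating cost per kWh; beta = marginal cost of capacity per kWh.
# Off-peak price = b; peak price = b + beta (peak users bear the capacity cost).

b, beta = 0.05, 0.03  # assumed $/kWh operating and capacity costs

def demand_peak(p):     return 1000 - 4000 * p  # hypothetical peak demand
def demand_offpeak(p):  return 400 - 4000 * p   # hypothetical off-peak demand

p_off, p_peak = b, b + beta
capacity = demand_peak(p_peak)  # capacity is sized to serve peak demand

print(p_off, round(p_peak, 2))                        # 0.05 0.08
print(round(demand_offpeak(p_off)), round(capacity))  # 200 680
```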
Chapter 13- Franchise Bidding and Cable Television (p. 429 ff.)
1) Cable Television- First cable was used in the 1940s, only for local broadcasting. Today it's both a substitute for
and a complement to local TV broadcasting. Each cable system has a local monopoly over cable service. Cable
must compete with local TV and satellite dishes.
a) History- FCC was born in 1934, given reg. control over TV and radio broadcasting. At first it didn't regulate
cable. As cable began to take customers from local stations (by bringing in non-local stations), the local TV
stations pressured FCC to regulate it (as predicted by the economic theory of reg.). FCC began full reg. in 1966.
Still, the number of subscribers and cable channels has grown. In 1970s, only 1/3 of houses had access to cable,
now it's 96%. Ads on cable have increased from $60 mil in 1980 to $4 bill today.
b) Cable TV as natural monopoly- Technological Background- 3 components to cable system: the head-end (big
antenna that gets signal), the distribution plant (sends it out to homes via cable), and the subscriber interface
(connects subscriber to distribution plant). Fiber optics allows for 100s of channels.
-Major source of cost is laying cable. Marginal cost is low for subscribers already in the distribution plant's area,
cost of distribution plant and head-end are fixed, so a cable system has a declining average cost per subscriber.
-Estimates of scale economies- Is it cheapest for cable systems to not overlap in a particular geographic area? If
yes, there are economies of density. If there are economies of density, cost efficiency requires that there be no
duplication of cable services. This doesn't mean that a geographic market can't be subdivided so that a different
firm supplies each sub-market.
-(** Question 1, page 434)- Figure 13.8 shows that average cost per subscriber declines as the market penetration
increases. This indicates that cable TV experiences economies of density. If two cable systems fully wired a
geographic area and each served half of the market, the average cost per subscriber would be higher than if there
were only one supplier. Cost efficiency thus requires that the distribution plant of cable systems not overlap.
-A second, and related issue is whether it's more efficient to have one cable system for an entire geographical area or
instead to subdivide the area with each cable system serving a sub-market (and have no overlap). There would be
weak economies of scale for the one supplier situation.
-Thus, it appears that there are large economies to having cable systems not overlap, but there are only
slight cost savings from having a single cable system serve a geographic area, as opposed to subdividing it and
having different cable systems serve these different sub-markets.
c) Performance after the Initial Award of a franchise- (** Question 2, page 442)- It's common for franchise owners
to renege on their proposal once they win the franchise- they end up offering less for more money than originally
promised. It doesn't seem that this is opportunistic behavior in most cases, though. The franchise owners
generally wait a long time in-between requesting rate increases (like a couple years). Also, franchise owners that
seek franchises in multiple markets have to worry about their reputations and how it might hurt future chances for
franchises. So, multiple system operators are less likely than single system operators to have construction delays,
and more likely to provide voluntary improvements in the cable system. Reputational effects deter opportunistic
holdup.
-Also, in 3500 re-franchising decisions, only 7 resulted in the local gov't removing the current franchise
owner. This could be due, however, to either good performance, or a lack of competition at the renewal stage.
-Renewal contracts tend to be better for the cable operators than the original contract. Channel capacity is
lower, fewer community channels, monthly basic price per subscriber increases slightly. However, monthly pay
channel price is lower, and franchise fee is lower.
Thinking on Deregulation
1) Deregulation intro.- Start with the example of radio. The question is: should we allow deregulation of
advertising by radio stations, thereby allowing them to broadcast as many ads as they want?
Pros:
-Increase consumer choice (because programming will be matched to consumer preferences).
- Increase ad revenue for radio stations.
-Ads allow more stations to get in business, increase competition.
Cons: -Radio stations would only play ads (that's obviously wrong, though, because no one would listen to an
all-ad station, so no one would advertise there to begin with).
-Also, should radio stations be required to broadcast a minimum amount of public service broadcasting?
Pros:
-Public service broadcasting is in the public interest.
Cons: -If people liked it, wouldn't it come on anyway?
2) Deregulating Cable Television- Rising cost of cable TV has outpaced inflation since deregulation. The current
plan is to regulate basic cable rates, and deregulate the rest of the cable channels.
-We do have good innovations now. Many more stations are available. Fiber optic cable lines allow for
much better reception now.
- There's been a rise of satellite-dish systems that compete with cable now. However, many people (like
students, or Cambridge citizens) can't use the dishes. Cable TV is still pretty strong, not too much competition.
-First of all, we have to define exactly what "basic cable" is, because that's all that's regulated now.
-Cable TV is a low marginal cost, declining average cost market, so it is a natural monopoly.
-We don't have multiple cable providers in each market (even though it is now feasible, because cable wires can
carry more than one signal) because when the industry started out, each company would have to lay its own
wire, and that would cause much duplication of costs.
-Overlapping is costly, and uncommon.
-There are economies of density, in that the more people in the market who get cable (i.e. the more cable
TV penetrates the market) the more average costs decline. The economies are lost to some limited extent when the
market is subdivided among different cable TV providers.
-The local gov't will auction off the cable rights within a market to the highest bidder. They'll probably attach
requirements that the highest bidder must follow (like having to show the St. Patty's Day parade in South Boston).
-That's O.K. if the local gov't is acting on our behalf. However, there's a danger of the local gov't engaging
in rent-seeking behavior.
-This does hurt price competition, because the highest bidder will be totally dominant.
-Table 13.4- there are some increases in cable price during deregulation. But there have been quality
improvements.
More on Cable TV.- Don't want too much overlapping, because of economies of density. Hard to regulate cable
TV. Recall that regulators basically regulate basic rates, but not premium channel rates. If you do have an
overlapping system, it does lead to competition, lower prices, more choice. Maybe from a consumer standpoint,
overlapping is good (but probably not from an overall economic standpoint because of high fixed costs).
Chapter 17- Economic Regulation of Transportation: Surface Freight and Airlines (only p. 574 ff.)
Airline regulation and de-regulation- A success story of deregulation. They were regulated to begin with because
they'd have destructive competition without regs. They'd underbid to win routes, and when they couldn't meet
costs, they'd go bankrupt.
-Is it good to protect firms from this sort of undercutting? Bankrupt firms might cause disruptions. Also, firms
only bid like that because they think they'll make money (they won't bid in a way that they know will bankrupt
them).
-In 1970s, Ted Kennedy got behind the dereg movement, pushed it through. His idea was that not only would firms
make more money, but also consumers would be better off.
1) Airline Regulatory History- (** Question 1) First commercial airline use was postal service in 1920s. Passenger
service began in 1930s. ICC took authority over mail rates in Airmail Act of 1934. ICC set up bidding so that the
airline offering the lowest cost per mile got the route franchise. Airlines underbid each other, and many faced
bankruptcy for not being able to cover costs.
a) Civil Aeronautics Act of 1938- Airline industry came under fed reg. Civil Aeronautics Board regulated
everything (new routes, # of firms exiting from routes, max. & min. prices on routes, price cutting). CAB basically
regulated entry and exit, and decided how airlines could enter or exit routes. It also regulated airline safety, until
the FAA took that over in 1958. CAB made broad use of its powers.
b) Path to Deregulation- By mid 70s, there were calls for dereg. Academics said regs hurt competition and caused
welfare losses. Ted Kennedy pushed for dereg in Senate, and CAB supported dereg. CAB chairman Robson
relaxed entry restrictions. Kahn took over as CAB chair and increased dereg, esp. over fares.
-Alfred Kahn was the big deregulator in the 70s; got rid of entry restrictions, fare restrictions, pricing
restrictions. All sorts of new airlines were able to pop up (Southwest, ValuJet). Also, many airlines merged.
-Profits did go up, and prices went down. Everybody was happy.
c) Airline Dereg Act of 1978 (ADA)- CAB reforms led to lower fares and higher industry profits. Congress passed
ADA which deregulated airlines in phases. CAB lost authority over routes in 1981, over fares in 1983, and was
dismantled totally in 1985.
2) Description of Regulatory Practices- CAB's main objectives were to keep the airline industry financially sound
and promote air service.
a)Price Regulation- CAB's fare-setting was characterized by four properties. 1) fares were set to allow airlines a
reasonable rate of return (between 10.5 and 12 %). 2) Prices were generally set independent of cost. Fares were set
above cost for routes more than 400 miles, and below cost for routes below 400 miles. This subsidy promoted air
service to less dense routes. 3) Fare changes were generally across the board rather than selective. 4) CAB
discouraged price competition.
-It used to be that for short routes, cost was greater than price. For long routes, price was greater than cost.
b) Entry and Exit Regulation- In 1938, the 16 existing trunk carriers became certified carriers, and no other
carriers could enter until 1978. 79 applications for entry were denied. At time of dereg, only 10 remained, 6 had
left through merger. Long, expensive process for a major carrier to enter a route. Local carriers could enter freely.
Since 1978, there's more price competition, less quality competition.
c) Comparison to Motor-Carrier Regulation- Airline reg. is similar to motor carrier reg. Prices were set to allow
reasonable profits and cross-subsidization. Entry was controlled. Effects of entry were different, as we'll see.
3) Effects of Regulation- One common effect of reg. is it reduces productivity growth (both for RRs and airlines).
Prevention of entry maintains inefficient firms in an industry, and dereg led to bankruptcies and new entrants.
-Airline regs illustrate two classic regulatory effects: 1) if gov't takes away price as a competitive instrument, then
firms will compete in other ways; 2) it's tough to predict the effects of regulation, because it's tough to predict the
new and innovative means of providing a better product at a lower cost that competition would have brought about
(like the hub and spoke system after dereg).
a) Price and Quality of Service- Look at effect of CAB regs on airfares by comparing air fares in regulated and
unregulated markets over the same period. Compare fares on intrastate and interstate routes that are of similar
length and density. Fares in the interstate markets were 50 to 100% larger than the fares for similar length routes
in the unregulated intrastate markets. How do we know airline fares were too high under regulation?- Within the
states under old regime, there was competition (not regulated by CAB, because not inter-state).
--Interstate (regulated) had a higher cost per mile than intrastate (not regulated).
-After dereg, prices increased on short flights, decreased on long flights. Also, fares have increased for lower
density routes and decreased for higher density routes.
-This shows that cross-subsidization is being dismantled.
-From a cost perspective, it's optimal for airlines to use large aircraft for longer distances. Because the cost of a
seat rises with distance, the optimal load factor (% of filled seats) increases with distance. But reg. induced
non-price competition, so we'd expect load factors to decrease as distance increased. Evidence shows load factors
fell with distance under reg. and increased with distance after dereg. Consumers would have preferred lower fares
and higher load factors than resulted under regs.
-Dereg also led to airlines spending less on meals, and smaller flight crews, because non-price competition
isn't so important any more.
-(** Question 2, page 582) Table 17.8 shows customers' willingness to pay extra for non-price factors such
as on-time flights. They'd pay the most for safer airlines, and much less for shorter travel times, transfer times, and
increase in % of flights on time. Of course, those results are kind of arbitrary, and whether or not I'd pay more
depends on my particular situation.
-Table 17.8: How much would we be willing to pay for a 10 minute reduction in travel time?
-To answer this, we need to know how long the trip is, are we connecting, the time of day.
-What about a 10 minute reduction in transfer time? (people would pay more for this)?
-We'd want to know if we'd just sit longer in the terminal, how close a call the transfer was to begin with.
-What about a 10% increase in the percent of flights that are on time?
-We'd want to know what we were going to miss if it's late, what % of the flights to begin with are on time,
what do we mean by "on time".
-What about what we'd pay for the carrier to have no fatal accident in the previous 6 months?
-Does the lack of previous accidents make my flight in the present any safer, how often do we fly?
b) Development of the Hub and Spoke System- Reg. led to high price, high quality product. Dereg leads to lower
price, lower quality. Regs did lead to lower quality in the realm of flight frequency, and after dereg consumer had a
wider array of departure times to choose from due to the hub and spoke system. Concentrating traffic in hubs
allows for larger planes (which are more economical) and more flights (because of greater traffic at hubs).
-Hub system is now wide-spread. CAB regs prevented the sort of route restructuring necessary for high-traffic
hubs. Hub system trades longer flights for more frequent departure times. Hub system also leads to more
departures from low density areas than under reg. Very efficient system.
c) Welfare estimates from changes in price and quality- After dereg, avg. coach fare fell by 5%, and first class
fares shrank from 150% to 120% of coach fares. Load factors increased from 55 to 61%. Estimated gain to
consumers was about 10% of a pre-deregulation fare. Undesirable changes in travel time and delay were more than
compensated for by the reduction in fares (but this doesn't account for lower welfare due to fewer on-board services
and higher load factors).
-Consumer gain $12.4 bill each year from dereg's lower fares, and $10.3 bill for more frequency. Net
gains are $15 bill (counting losses from more travel restrictions, travel time, load factors, connecting flights).
-Reg. also hurt industry profits.
d) Dynamic Productive Inefficiency- Airline regs reduced productivity growth, compared to foreign airlines.
e) Airline safety- (** Question 5, page 589)- Non-price competition may have raised safety beyond the levels
required by law. Dereg might have led airlines to cut the former excess costs of safety (like using fewer pilots), and
more flights could lead to more accidents. But dereg also led to more planes, which decreased the average age of
fleets. Competition could also lead to more efficiency and technological progress.
-Could measure safety by looking at fatalities, accidents, near-misses. Look at them per-passenger,
per-flight, per mile.
-The result seems to be that passenger safety has increased, or at least not fallen, under dereg.
Under dereg, the planes are more full on long routes. So the planes are being used more efficiently.
Unfortunately, the quality of food has declined.
-The use of air travel has increased, and prices have gone down.
-What about safety? Essentially, de-reg. hasn't impacted safety (positively or negatively).
- What about market structure? Using the HHI measure of effective competition: if n competing firms each have
1/n of the market, then HHI = n(1/n)^2 = 1/n.
-- Effective competition = 1/HHI = 1/(1/n) = n = the effective number of firms.
- If we have n firms of different sizes, the effective number of firms will be less than n. See Figure 17.9.
After de-reg., we see many new firms entering, but also mergers and bankruptcies. The mergers and bankruptcies
led to a net decrease in competitors, which makes us start to worry (because of the hub and spoke system).
- May not want to go into somebody else's hub, because they could undercut you with predatory pricing.
The jury's still out on hubs.
4) Competition and antitrust policy after Dereg- When there's price & entry regs, antitrust policy isn't needed. With
dereg, antitrust becomes important. Worry about firms colluding to raise prices, deterring entry, predatory pricing.
a) Re-concentration of the Airline Industry- Measure industry competition using "effective competitors," the
inverse of the sum of each firm's market share squared. Shows how market share is distributed, the number of
equal-sized firms that would give the same level of HHI.
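-The measure can be sketched as (the market shares below are made up):

```python
# Sketch: effective number of competitors = 1 / HHI, where HHI is the sum of
# squared market shares. Equal-sized firms give back n; unequal shares give less.

def effective_competitors(shares):
    """1 / sum of squared shares; shares should sum to 1."""
    return 1.0 / sum(s * s for s in shares)

print(effective_competitors([0.25, 0.25, 0.25, 0.25]))            # 4.0
print(round(effective_competitors([0.70, 0.10, 0.10, 0.10]), 2))  # 1.92 -> fewer than 4
```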
-Immediately after dereg, entry and expansion led to increase in number of effective competitors
(concentration went down). In mid 80s, consolidations and bankruptcies led to fewer effective competitors.
-DOT and DOJ have been lax, allowed too many mergers.
b) Deterrents to Entry- Several factors limit entry: 1) threat of predatory retaliation from industry leaders; 2)
Airline reservation systems used by travel agents, because they're provided by the airlines themselves; 3) difficulty
in gaining airport access (both runways and gates); 4) frequent flyer programs (flyers stick with programs that offer
broadest flight range).
c) Concentration and Air Fares- The number of potential competitors has fallen. But number of active
competitors has increased under dereg on average. However, hub system leads to domination of individual routes
by one carrier.
-Passengers whose origin or destination is a hub with a single dominant firm may be forced to pay higher prices.
Increasing concentration leads to higher fares.
-To continue consumer gains under dereg, need more vigilant antitrust policy.
Chapter 18- Economic Regulation of Energy: Crude Oil and Natural Gas (p. 605-610)
1) Intro.- Energy is critical to the economy, and its price and availability have huge impacts on the economy.
Recessions tend to follow increases in oil prices. Energy market is international. US has subjected energy to
varying degrees of regulation. Oil and natural gas had different regulatory regimes, but both regimes led to price
ceilings.
-History of Oil industry price regs- In the 70s, the market price for oil shot up, and production regs were used.
Carter got rid of supply regs, but used price regs to reduce the oil industry's profits from price markups. US energy
companies had long term contracts with oil suppliers at the old, lower prices, so they took the cheap oil, routed it
through Amsterdam where they set a new price for it based on the higher world oil prices, not the contract prices. It
comes into US at the higher price, where the price ceilings were in effect, and energy companies still got a huge
profit because of the transfer through Amsterdam.
-Basically, it's hard to avoid the world price through regulation.
-Now, energy prices are largely deregulated. Nothing we do to tinker with the price will affect the world market
price anyway, because the energy market is truly international.
2) Theory of Price Ceilings
Main thing is diagram of effect of price ceiling, seen in oil industry, ticket scalping, auto insurance, gas prices in
70s. Leads to rationing of quantity, waiting in lines.
- (** Question 1, page 606)- Consider the market for a product that's competitive in the absence of gov't regs.
(See figure 18.3.) Price and quantity will occur where demand intersects supply.
-If price ceiling is at or above competitive equilibrium price, it won't matter because no one will be bound by it.
-If the ceiling is below the competitive equilibrium price, it binds right away. Likewise, if market demand
shifted out or market supply shifted in, the competitive price would rise, in which case a previously slack price
ceiling would become binding.
-In that case, market demand will exceed supply, and output will be reduced. Consumers will gain
because of the lower price, but will lose part of original consumer surplus because of lower quantity sold. Figure
out the net gain to consumers by comparing triangles.
-Firms will clearly lose out due to the price ceiling. Producers lose part of their producer surplus due to
efficiency loss, and part transferred to consumers. There will be a net welfare loss, because firms' losses will
outweigh consumers' gains.
-Net efficiency effect = a total deadweight loss that equals the lost consumer surplus, and the producer surplus lost
due to efficiency loss. Consumers might like rationing, because their gains outweigh their losses. But on the whole
there's a deadweight loss due to rationing.
Assumptions and complications of rationing- Rationing is costly in ways that don't show up on the graph: (1) it leads
to lines, lost time, annoyance; (2) the people who don't buy the product under rationing are the ones who care the
least, who value it the least (people who value their time but who still value the product a lot will have to put up
with the hassle of getting it); (3) Could do random rationing instead of rationing by who's willing to spend time
getting it, but this solution will exclude some people who value the product the most, and give the product to some
people who value it very little; (4) we assume there's no secondary resale market in the random rationing example
(because then the low value lottery winners could sell to the high value lottery losers); (5) if we leave it to sellers to
decide who gets the product, it will lead to bribery.
-Economists generally don't like price constraints, unless there's a really good reason (like raising cigarette
prices because of the negative externalities of second-hand smoke).
-Because supply is lower, how that supply is distributed among consumers is important. Consumers will be willing
to pay different top prices for the good (their reservation price). With a price ceiling, the consumers who value the
good the most won't have total access to it, because they'll have to share with all the consumers whose reservation
price equals the ceiling price. This results in a welfare loss.
-For the good to be properly allocated, it should go to the consumers who are willing to pay the higher price
that corresponds to the lower quantity on the demand curve (see figure 18.4).
-This analysis assumes that consumers can't resell the good to the consumers with the higher reservation
prices. That would help to avoid the welfare loss.
-Otherwise, the high reservation price consumers might try to bribe suppliers to get the good, or they might
just wait in line. However, in such a situation if the consumers use real resources to secure the good, that's a waste
of those resources. If they only use pecuniary resources (money), that's just a transfer and there's no welfare loss.
-Two basic points: (1) the imposition of a binding price ceiling reduces social welfare by decreasing the amount
exchanged in the market; (2) in light of there being excess demand, how the good is allocated to consumers can
create additional welfare losses (because the consumers who value the good the most may not end up with it).
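-The deadweight-loss triangle can be sketched with linear demand and supply (the curves below are hypothetical; only the logic of comparing triangles comes from the text):

```python
# Sketch: deadweight loss from a binding price ceiling with linear curves.
# Demand: P = 100 - Q.  Supply: P = 20 + Q.  Equilibrium: Q* = 40, P* = 60.
# (Hypothetical curves chosen for illustration.)

def dwl_from_ceiling(ceiling):
    q_star = 40.0
    q_supplied = ceiling - 20.0        # quantity supplied at the ceiling price
    if q_supplied >= q_star:
        return 0.0                     # ceiling at/above P*: not binding
    demand_price = 100.0 - q_supplied  # consumers' value of the marginal unit
    # Triangle between demand and supply over the units no longer traded:
    return 0.5 * (q_star - q_supplied) * (demand_price - ceiling)

print(dwl_from_ceiling(70.0))  # 0.0   -> ceiling above equilibrium, no effect
print(dwl_from_ceiling(50.0))  # 100.0 -> binding ceiling creates a DWL
```

-This triangle assumes the rationed units go to the highest-value consumers; as point (2) above notes, misallocation under rationing can add further welfare losses on top of it.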
Chapter 19- Introduction: The Emergence of Health, Safety, and Environmental Regulation
1) Intro.- Regs are important because in many cases, there is no market in operation (no market for breathable air, no
market-based compensation for pollution victims)
- People tend to overestimate risks associated with low-risk causes of death, and underestimate risks associated with
high-risk causes of death. The deviation from actual results is bigger for jurors than for judges.
Judges' estimates of mortality risks are a little more accurate than jurors' estimates. However, they both tend to
follow the pattern of overestimating small risks and underestimating large risks.
-Publicity of a risk results in increased overestimation (like with tornadoes). This is contrary to what economists
would think, because the more info that people have about something, the more accurately we'd expect people to
understand the thing.
-However, media focuses on the people who die, rather than on the actual probability of dying.
2) Risk in Perspective- Accidents only account for 5% of total deaths, so regs will play small role in reducing
overall mortality rate. Even very effective regs couldn't eliminate all accidents because of individual choice.
Other causes of death are also due to personal choice in many respects (diet, exercise).
3) The Unfeasibility of a No-Risk Society- (** Question 1)We need regs that foster better risk-avoiding decisions.
We won't have a risk-free environment no matter what regulations we pursue. Given the role of individual choice,
it's unreasonable to expect regs to produce a risk free environment. Society needs regs with a net benefit (risk
reduced minus cost of reg.). The key is that there must be some market failure to warrant gov't intervention.
a) Wealth and Risk- (** Question 2, page 660) Demand for more regs has come from society's increased affluence.
With more wealth, we place greater emphasis on physical well-being. There's been a general decline in the accident
rate over the 20th century. The main exception is motor vehicle accidents: the motor vehicle death rate has stayed
about the same, with only a slight decline over the past 60 years. There have been big improvements in motor
vehicle safety, and the risks per mile have declined, but the total death rate per population hasn't changed much
because of the increasing frequency of driving.
-Even w/o regs, accident death rate would probably decline. Judge effectiveness of regs on whether they
lowered risk below what it would have been anyway. Much of the decline can be attributed to better technology.
Figure 19.1, risk trends- Accident risks have declined over time. Reasons:
(1) technology improvements that reduce the cost of providing safe products
(2) regulation helps reduce accident risks (although before regulatory agencies were introduced there was
already a downward trend in accidents).
(3) people have gotten richer, more wealth to spend, so people demand greater safety. The richer you are,
the more you value your health.
b) Irrationality and Biases in Risk Perception- (** Question 3, page 662) Risky situations tend to lead to
irrational decisions. Individuals may sometimes not be aware of the risks at all. Individuals tend to overestimate the
risks associated with lower-probability events (like tornadoes and floods). They tend to underestimate the risk
associated with higher-risk events (cancer, heart attack). This suggests that market decisions will seldom be
optimal, but additional regs may not be needed (because if risk perceptions are excessive, the safety provided by the
market will be responding to exaggerated risk perceptions).
-Overestimating low-probability events impacts gov't policy, in that society may end up devoting too many
resources to small risks that are not of great consequence if there's an alarmist reaction to small risks.
-How should gov't respond to public mis-perceptions? Should it follow society's wishes, or do what's best
in light of actual risks?
Assessing Risk- What did we learn from our risk assessment exercise?
(1) people underestimate high-probability events. These causes of death are so common that they're under-reported,
and people underestimate them because they get less coverage. After all, the newspaper doesn't cover everybody who
dies of heart disease.
(2) people overestimate low probability events. This is largely because of press coverage of freakish
deaths like lightning. So, you overestimate based on the coverage.
-You have to at least know about the existence of risk in order to overestimate it. If you don't
know about it, you can't overestimate it.
(3) Small risk vs. no risk. If you reduce a small risk of an event to no risk at all, people will overestimate
the risk reduction. They'll think the reduction was greater than it was, because they perceived the original risk as
greater than it actually was.
(4) Perceived risks flatten out on the graph, so the perceived change in risk from a safety improvement is
smaller than the actual change. People underestimate the effect of safety improvements.
Asides:
(1) Strict liability for product liability is usually based on belief that people underestimate risks. However,
that's not true for low probability events (they actually overestimate those risks).
(2) Extent of info is important.
(3) Highly publicized risks are overestimated.
4) Policy Evaluation- Balancing of costs and benefits is inevitable in regs.
-Scare words: Producers don't like to see words like "cancer" and "birth defects" on their products.
-Info overload: too many warnings will lead people to ignore warnings altogether. People can generally process 4
to 5 pieces of info. More than 4 or 5 warnings won't be remembered.
-Certainty premium: if you reduce the risk of a product to zero, people will pay a lot more for that than the
reduction is really worth.
-Basically, life will never be risk free.
a) Regulatory Standards- Executive branch imposes strict cost-benefit analysis requirements on regs.
-Rationale of Benefit-Cost Approach- (** Question 4, page 665) At the very least, society shouldn't pursue
policies that do not advance our interests. We want to maximize the benefit-minus-cost difference of regs.
-Figure 19.3 illustrates that the cost of providing environmental quality rises, and it does so at an increasing rate
because improvements in environmental quality become increasingly costly to achieve.
-Also, the initial gains from improved environmental quality are the greatest. The incremental benefits of
environmental quality improvements eventually diminish. We want to achieve the largest spread between the
total benefit and total cost curves. The largest gap gives the maximum net benefit (benefits less costs) that is
achievable with environmental quality regs.
-Figure 19.4 shows the marginal cost and marginal benefit curves. Marginal costs rise because of the decreasing
productivity of additional environmental-enhancing efforts, and the marginal benefits decline because the greatest
incremental benefits from such improvements come when environmental quality is very bad.
-The optimal policy level is at the environmental quality level where the marginal benefits and marginal
cost curves intersect.
-The optimal quality choice can be characterized by equation 19.1: marginal benefits = marginal costs.
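The MB = MC condition in equation 19.1 can be illustrated with a minimal numeric sketch; the linear marginal curves below are made-up numbers for illustration, not values from the text:

```python
# Hypothetical marginal curves for environmental quality q (illustrative only).
def marginal_benefit(q):
    return 100 - 2 * q  # incremental benefits decline as quality improves

def marginal_cost(q):
    return 10 + 4 * q   # incremental costs rise at higher stringency

def net_benefit(q):
    # Total net benefit = sum of (MB - MC) over each unit of quality provided.
    return sum(marginal_benefit(x) - marginal_cost(x) for x in range(q))

# Net benefit is maximized where MB = MC: 100 - 2q = 10 + 4q, so q* = 15.
q_star = max(range(51), key=net_benefit)
print(q_star)  # 15
```

Pushing quality past q* = 15 adds more cost than benefit, which is the "no-risk society is unfeasible" point in numeric form.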
Arsenic regulation- shows cost per life saved. What we care about is the marginal cost per life saved. (Table 19.3)
Basically what the table says is that there are 3 levels of stringency for the reg.: loose, medium, and tight. Marginal
cost per life explodes by a factor of 60 from the loose level to the tight level.
-Average cost per life = (total costs up to that level of stringency)/(lives saved up to that level of
stringency). Average costs can partially hide the explosive cost increases that come with higher levels of
stringency.
-Marginal cost per life = (incremental costs from previous level of stringency)/(incremental lives saved).
E.g., how much extra are we spending to increase stringency from medium to tight.
-Need to tighten stringency standards until the marginal cost per life saved = our valuation of human life.
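The average-versus-marginal distinction can be sketched with made-up numbers in the spirit of Table 19.3 (the actual table values aren't reproduced in these notes; the figures below are chosen so the marginal cost explodes 60-fold):

```python
# Cumulative cost ($ millions) and cumulative lives saved at each stringency
# level -- illustrative numbers only.
levels = [("loose", 10.0, 10), ("medium", 40.0, 15), ("tight", 100.0, 16)]

prev_cost, prev_lives = 0.0, 0
for name, cost, lives in levels:
    average = cost / lives                                # hides the explosion
    marginal = (cost - prev_cost) / (lives - prev_lives)  # reveals it
    print(f"{name}: avg ${average:.2f}M/life, marginal ${marginal:.2f}M/life")
    prev_cost, prev_lives = cost, lives
# loose: avg $1.00M/life, marginal $1.00M/life
# medium: avg $2.67M/life, marginal $6.00M/life
# tight: avg $6.25M/life, marginal $60.00M/life
```

The average figure for the tight standard ($6.25M) looks modest, but the marginal figure ($60M per extra life) shows what the last increment of stringency actually buys.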
b) Role of Heterogeneity of Standards - some standards should be heterogeneous. If the marginal costs of
regulating an industry are high, we'd accept less safety (chemical industry). If the MC for regulating the industry
are low, we'd require more safety (manufacturing industry).
-Efficiency gains from heterogeneous standards:
(1) cost heterogeneity- we want standards that are as tight as possible given the costs of the standards. We
want tight standards for an industry that is cheaper to make safe.
(2) benefit heterogeneity- there may be differences in risk, and differences in willingness to bear risk.
EPA tends to make decisions based on high benefit people, which means a higher degree of safety will be required
despite higher costs. OSHA and other agencies use lower benefit people in their evaluations of regulations.
- Restrictions on newer firms are usually tighter than for older firms, because it's cheaper for new firms to use safer
technology. Use differentiated standards instead of uniform standards wherever possible.
c) Discounting Deferred Effects- Consider the costs and benefits over the long term, make sure that costs won't
outrun benefits in the future. One will generally discount in a way that reduces the present value of future impacts.
5) Uncertainty and Conservatism- In judging risks, there's often an element of uncertainty regarding the extent of the
risk to humans. EPA uses the upper end of the 95% confidence interval around a particular risk. Gov't agencies err
on the side of conservatism, which may distort risks (they may over-regulate a low risk because less is known about
it).
-Need to consider the population that could be affected by a risk when deciding how much to spend regulating
it/fixing it. (like toxic waste in a swamp no people live near- clean it, but don't spend tons of money on
over-cleaning it when no one would benefit from the extra cost).
-The net result of the conservatism biases is to generate a risk assessment that may bear little relationship to the
actual risks posed, making it difficult for policy makers to determine which sites pose risks and which don't.
a) The Role of Risk Ambiguity - Ellsberg paradox:
-Choice 1- 50 white balls and 50 red balls.
-Choice 2- unspecified mixture of red and white balls.
-Generally, people prefer choice 1, the known probability. They might even accept a lower probability than 50-50
for having a known probability.
-Bottom line is that soft probabilities (the unknown mixture) should be equivalent to the same hard probability
values (50% of something happening for sure should be the same as 50% chance of an uncertain result). However,
people prefer the hard probability choice.
- (** Question 5, page 675)- Uncertainty shouldn't be a concern in one-period decisions (Ellsberg Paradox). For
one-shot decisions, the precision of the risk doesn't matter. But in sequential decisions in which learning is possible
and in which you can revise your decisions over time, it's preferable to have a situation of uncertainty rather than to
have a precisely understood risk. In situations of uncertainty we can alter a course of action if the risk turns out to
be different than we had anticipated originally.
-This result implies that the stringency of our regulation may depend in large part on uncertainty, but we won't
necessarily respond in a conservative manner to this uncertainty. If we have to take action now to avoid a
catastrophe, uncertainty is irrelevant and we should act according to the mean risk. But if we can learn about how
serious the problem is and take effective action in the future, it will be better to make less of a regulatory
commitment than one would if this were a one-shot decision.
-Risk analysis shouldn't be confused with risk management. Need to be aware of true risks posed by
different exposures so that we can make comparative judgments across different regulatory alternatives.
6) The Role of Political Factors- Economic interests tend to determine how congressmen vote on environmental
regs, more than the actual social welfare costs and benefits of the reg.
-Economic self-interest (illustrated in the capture theory models of regulation and ideology) influences how reps
vote regarding environmental regs.
- lots of the time, Congress doesn't vote on regs based on what they ought to do to maximize net benefits. Rather,
they'll vote in a way that helps their political interests. Typically, various environmental regs have been used for
political purposes, namely, "non-degradation requirements", meaning you can't make your air any dirtier than it
already is. The nat'l reps from areas that are already industrial would like this type of regulation, because the areas
of the country that aren't very industrialized wouldn't be able to take industry away from the industrial rust belt
because the industries would degrade their air. This keeps industry in places that already have it.
-Crandall is one of the guys who has studied political factors. What matters most in terms of political factors is your
political party, the amount of pristine land in your state (if you've already got a nat'l park in your state, you don't
want more land to be taken up by nat'l parks), income (the wealthier the state, the more pro-environmental it is),
and income growth (states with income growth oppose environmental regs because the regs might hurt the growth).
-Kalt and Zupan studied strip-mining, saw that capture theory played a big role in how reps vote (could be business,
or environmentalists depending on your state), also ideology of rep.
-The Capture Theory assumes your politicians are on the brink of losing an election, so they're always
trying to maximize their votes. Ideology assumes that reps are free to act on their ideology (like Ted Kennedy
who'd get re-elected regardless of how he votes).
-Logrolling- reps are willing to vote to support a plan that helps another region in return for the reps from
that region supporting them on a critical issue.
-Basically, keep in mind that not everything the gov't does is to maximize social benefit.
Chapter 20: Valuing Life and Other Non-monetary Benefits
1) Intro. - To evaluate regs, we need to know the value of the benefits the reg. will produce (like saving a human
life), as well as the costs (scrubbers for factories). We need a systematic basis for establishing tradeoffs between
resources expended and benefits achieved through social reg. efforts, because social regs usually deal with
commodities not explicitly traded in the market.
- we're worried about prevention. How much cash would we require in order to accept a 1 in 10,000 chance of
dying. Everyone would accept some amount of money, even if it's a lot. It's not true that no amount of money
could compensate us for accepting such a risk.
-Two approaches have been used: (1) estimate implicit prices for social risk commodities that may be traded
implicitly in markets; and (2) ask people how much they value a particular health outcome.
Summary of 3 measures of the value of life- (a) Value placed on avoiding a certain death. Here you get really big
numbers, especially when it's my own death. This is not what we are valuing from the economic perspective.
(b) Present value of lost earnings (Human capital approach). This is what they use in wrongful death suits.
(c) Value of a statistical life (Hedonic damages, quality adjusted). Based on risk reduction figures.
-Should these value of life numbers be used in court? US gov't and risk reg. agencies use them. They're official
gov't policy.
-Each of the three measures is intended to be used for different purposes. (c) is a deterrence measure, helps us
figure out how much injurers should pay in order to give them proper safety incentives. (b) is an
insurance/compensation measure, all of the future earnings will be covered.
Altruism- all the numbers used by the regulatory agencies are based on how much the worker values his own life.
What about how much people will miss him if he died? There will be some altruistic measure there (like with
Exxon Valdez).
2) Policy Evaluation Principles- (** Question 1, page 686) Gov't needs to consider how much people would be
willing to pay to reduce the risk in question by a certain amount when deciding how far to push social regs. The
main matter is society's total willingness to pay for eliminating small probabilities of death or health risks. Gov't
shouldn't care about future earnings, or how much we'd pay to avoid certain death. An individual's future earnings
will, however, be relevant to how he thinks about the calculation of how much he'd pay himself.
-(equation 20.1) Value of Life = willingness to pay/size of risk reduction. This gives the amount I'd be
willing to pay per unit of mortality risk.
-The "size of the risk reduction" will equal the reduction in probability of death. If the reduction in
probability is 1/10,000, then:
-value of life = willingness to pay/(1/10,000) = 10,000 x willingness to pay
-If a person would pay everything he has for such a reduction in risk, he places infinite value on his life. If
you'd pay less than everything you could, that means you'd accept a risk-dollar tradeoff.
-When thinking of how much they'd pay to avoid a risk, people tend to think in terms of their immediate
resources rather than their lifetime resources.
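Equation 20.1 is simple enough to sketch directly; the $5,000 figure is the hypothetical willingness to pay used in the worked example later in these notes:

```python
def value_of_statistical_life(willingness_to_pay, risk_reduction):
    # Equation 20.1: value of life = willingness to pay / size of risk reduction.
    return willingness_to_pay / risk_reduction

# Paying $5,000 to eliminate a 1-in-10,000 chance of death implies a
# value of life of $5,000 x 10,000 = $50 million.
print(value_of_statistical_life(5_000, 1 / 10_000))  # 50000000.0
```

Note the restriction mentioned later: this construction only makes sense for small changes in risk, not for certain death.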
3) Willingness to Pay versus Other Approaches- There are alternatives to the willingness to pay approach. Take the
present value of one's lifetime earnings. Or the present value of lifetime earnings net of the consumption of the
deceased. Or look at the taxes the person would pay over the course of his life.
-OSHA took the hazard-communication approach in the 1980s, preparing a regulatory analysis in which the value of the
risk reduction was assessed in terms of the lost earnings of the individuals whose deaths would be prevented. OMB
rejected this proposal, and the value of life was instead assessed with the willingness to pay measure.
-Willingness to pay gives larger benefit estimates. But it doesn't actually measure human life, rather it measures
how much we'd be willing to pay for small risk reductions.
4) Variations in the Value of Life-People attach different costs to bearing risk. The wealthy will require a higher
price to bear any particular risk. Smokers are more willing to bear a variety of risks other than smoking in return
for less compensation than would be required for a nonsmoker. People who wear seat-belts are especially reluctant
to incur a job-related risk.
-It's probably best to disregard irregular risk preferences when making policy decisions.
-Although discounting benefits of regulatory programs may reduce their present value, the fact that wealth will
increase in the future counteracts the impact of the discounting.
-What's the implicit value of a lost workday injury? The order here is neat. People who wear seat-belts valued such
injuries about 50% more than the full sample did. Smokers valued it about 50% less than the full sample did, and
about 67% less than the seat-belt wearers did.
-Smokers will work on riskier jobs than nonsmokers, they'll get paid less per unit risk, they're more likely
to get injured at home, less likely to floss teeth and check their blood pressure. Smokers are just big risk-takers.
-Which group would we save if we had the power to save all of one group: smokers or seat-belt wearers? Viscusi
seems to say we should save the seat-belt wearers because they value their lives more.
-How about a situation like the Titanic, where poor people know going in that they're far more likely to die in a
wreck, while the expensive tickets will allow a person to be saved. No real answer there.
-How about airlines? Average flyer has more money than the average American. Should we spend more per
person on airline safety than on highway safety? Viscusi says the guard rails on the highways are paid for out of
tax money, while airline safety is paid for in the ticket price. Where it's public money spent, it's controversial to
spend money based on different values of life for the rich and poor. However, gov't can just require airlines to
increase safety and the customers will have to pay for it. Viscusi says the value of life the DOT uses is way too
low.
-What about old people vs. young people? Young people have longer to live.
-How about internationally? Equalizing the risk of death in all countries. One proposal is that if it doesn't meet
our safety standards in the US, we can't export it to another country. Others say we shouldn't import anything that
wasn't produced abroad in accord with our safety regs.
-Exports: We should be able to export, let the other countries worry about their own risk regulation. If
it's good enough for them, who cares if it's not good enough for us.
-Imports: You have to be fairly rich to meet these standards, so cut them a little slack. Best thing to do for
poor countries is to raise their incomes. Imports do that.
5) Labor Market Model - Wage Equation theory- On a curve where wage is y-axis and risk is x-axis, if companies
lower risk from the original level, they'll also have to offer lower wages because the firm will have to spend more on
safety. Can draw the curve based on all the various wage-risk tradeoffs.
-Workers indifference curve measures all the wage-risk combos that will provide the same level of utility.
-Overlap the indifference curve on the firm safety curve, and the worker will take the job where the firm
safety curve and the worker's indifference curve overlap.
-That will reflect the wage-risk trade off for both the worker and the firm. Reflects value of safety to
workers and cost of safety to firms.
- Most value of life estimates come from labor market data, estimating the wage-risk tradeoff that workers implicitly
make as part of their jobs and use implications of this tradeoff as an estimate of value of life.
-Figure 20.1 shows that workers' utility is higher with high risk, high wage jobs than with low risk, low wage jobs.
-Figure 20.3 shows that firms' profits may increase with higher wage, higher risk jobs, and for other firms, the
profits may level off after a certain wage and risk level.
(** Question 2, page 696)- Figure 20.3 shows the equilibrium in the market for risky jobs. The expected utility
curves show a worker's utility for jobs offering various risks and wages. From those curves, we can estimate a
linear relationship between wages and risks. The slope of that linear relationship gives the estimated wage-risk
tradeoff. This curve indicates the terms of trade that workers, on average, are willing to accept between risk and
wages.
6) Empirical Estimates of the Value of Life - This is the value of a statistical life. Not talking about identified lives.
If there's a 1 in 10,000 chance of dying, then for 10,000 statistical people one person would die. From the above
example, my value of life would be $5000 x 10,000 (= $50,000,000). For this to work, the increased chance of
death must be small.
-Another way to calculate it would be that the value of life = value/unit risk = $5000/(1/10,000) = $50,000,000.
- (** Question 2, cont'd) Valuing life depends on whether you're talking about annual earnings or hourly wages.
One way to estimate is using equation 20.5:
(p. 698) -- Annual Earnings = d + B(1) x (Annual Death Risk) + SUM[i=1 to n] V(i) x (Personal Characteristic i) +
SUM[i=1 to m] L(i) x (Job Characteristic i) + E
-B(1) indicates how annual earnings will be affected by an increase in the annual death risk. If the annual
death risk were 1.0, then B(1) would give the change in annual earnings required to face one expected death. B(1)
is the value of life estimate. It represents the tradeoff that workers exhibit between earnings and the risk of death.
-The other characteristics in the equation are designed to disentangle the premium for job risks as opposed
to compensation for other attributes of the worker and his job (because generally, people in higher wage jobs will
face fewer on the job risks, mainly because of more skills required in mental jobs than physical, risky jobs).
-Economists look at this based on how wages people accept for risky jobs:
-- Wage= constant + b(education) + c(gender) + d(race) + e(experience) + f(risk).
-By isolating risk away from all the other factors in wages, the economists found that a blue collar worker
gets from $300 to $700 extra a year to accept risky jobs (1/10,000 risk). This puts their value of life from $3
million to $7 million.
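The last inference step, from an estimated risk premium to a value of life, is just a rescaling; a sketch (the $300 and $700 premiums are the figures quoted above):

```python
def implied_value_of_life(annual_risk_premium, annual_death_risk=1 / 10_000):
    # The wage regression's risk coefficient gives dollars per unit of risk;
    # dividing by the risk denominator gives the implied value of life.
    return annual_risk_premium / annual_death_risk

for premium in (300, 700):
    print(f"${premium}/year premium -> ${implied_value_of_life(premium):,.0f}")
# $300/year premium -> $3,000,000
# $700/year premium -> $7,000,000
```

This is why a seemingly small annual premium for a 1-in-10,000 risk translates into multi-million-dollar value of life estimates.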
-Value of life estimates range from less than $1 mil to more than $6 mil. This difference is due to the mix of
workers and their preferences across samples and occupations. The degree to which risk variables measure the true
risk associated with the job may differ substantially across risk measures.
-Historically, the federal gov't valued life based on present value of future earnings.
7) Value of Life for Regulatory Policies - Table 20.4- they use a value of life cutoff of $5 million, and show which
regs pass the benefit cost test and which ones fail. Lots of stuff that passes the benefit cost test comes out of the
DOT. DOT reg. has never failed the benefit cost test. EPA is the agency that's most likely to fail the benefit cost
test.
- (** Question 3, page 699)- Table 20.4 summarizes key aspects of major regs. Cost per life saved is the most
important info. Sometimes it's below $1 mil. If we assume the value of a life is $5 mil, all regs that cost less than
$5 mil per life saved would pass the benefit-cost test. For the most part, rough judgments about the efficacy of a
regulation can tell a lot.
-From the graph, we can see that some regs pass the cost-benefit analysis by a wide margin. Some fail by a huge
margin (Like Formaldehyde reg. that cost $72 bill per life saved).
-Calculating the costs, benefits and appropriate reference values for the value of life often highlights gross policy
distortions such as this.
Voluntary v. Involuntary Risks- Different sources estimate the value of life in the $3 million to $7 million range.
OSHA and DOT, which regulate voluntary risks, put the value at $3 million. EPA regulates involuntary risks, puts
the value at $7 million.
-Note that there is heterogeneity in estimates of the value of life.
OSHA Chemical labeling standard- a very expensive reg. by OSHA's standards. Did the benefits exceed the costs?
This was back in the 80s before the value of life numbers were used. OSHA said life was too sacred to be valued.
Instead they used the cost of death to figure out how much they should spend on saving a life. Cost of death is just
present value of future earnings plus medical bills. This was a very small number, just a few hundred thousand
dollars.
-However, OMB found the benefits were still less than the costs.
-Viscusi looked at the issue, and found that OMB was right. The problem was that OSHA underestimated the value
of life in its assessment. If they had used the value of life numbers, the benefits would have been much larger, so that
the policy actually would have created a net benefit.
8) Survey Approaches to Valuing Policy Effects- When there's no actual data on the various risks in question (like
global warming), one way to measure benefits is to run a survey. The surveys focus on how much we'd pay to
reduce risks. This is called contingent valuation, because they represent values that are contingent on a
hypothetical market.
-The studies should be assessed in terms of how well they replicate market processes in a meaningful manner.
9) Sensitivity Analysis and Cost Effectiveness- Basically, try to put various risk outcomes in similar terms. Like
saying that lost-workday job injuries cause 1/20 the economic impact of cancer.
10) Risk-Risk Analysis- OMB came up with this to measure how gov't regs could actually injure people sometimes.
-There are plenty of such risks: (a) Substitution risks- if we ban a particular product, what will people use in its
place. e.g., what if we required babies to have their own plane ticket? A baby seat might be safer for the baby, but
if we require people to buy the extra plane ticket, some of them might have to drive instead, and that's more risky
than flying. It's also like banning saccharin- without it, more people would get fat, and that brings health risks too.
-Auto recalls: if the DOT tells you to bring your car in for a recall, you could get killed on the drive over.
(b) Production risk- All manufacturing processes injure people, so building smokestack scrubbers will kill some
workers.
(c) Richer is healthier- taking money out of consumers products leaves them with less money to spend on food,
clothes, medical care.
-The regs must produce enough benefits to outweigh the indirect risks they create. There is an opportunity cost to
regulation.
(** Question 4, page 705)- Legislative mandates for regulatory agencies often require risk reduction regardless of
cost. What are the costs associated with this, other than wasting societal resources? There are two:
(1) There's a direct risk-risk tradeoff arising from regulatory efforts. Like having to drive the car to the
auto dealership after an auto recall (there's risks in driving). Risk regs also stimulate economic activity (like
manufacturing pollution control equipment), and all economic activity is dangerous.
(2) There's also indirect risk-risk tradeoffs. The newest form of risk-risk analysis has drawn on the
negative relationship between individual income and mortality. Regulatory expenditures take resources from other
uses, like health care. So, there's a mortality cost associated with these regulatory efforts. Some of the more
expensive regs may actually cause more deaths than they prevent. Somewhere between $10 mil and $50 mil in
gov't regulatory expenditures causes a death.
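The indirect tradeoff can be sketched as a back-of-the-envelope check; the $25 million per induced death below is an assumed figure inside the $10-$50 million range quoted above:

```python
def net_lives_saved(direct_lives_saved, regulatory_cost,
                    cost_per_induced_death=25e6):  # assumed mid-range figure
    # Richer-is-healthier effect: each ~$25M of regulatory cost induces one
    # statistical death, offsetting the lives a reg saves directly.
    return direct_lives_saved - regulatory_cost / cost_per_induced_death

print(net_lives_saved(10, 100e6))  # 6.0 -- still a net life-saver
print(net_lives_saved(10, 500e6))  # -10.0 -- causes more deaths than it prevents
```

An expensive enough reg can flip from life-saving to life-costing even before counting its direct compliance burden.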
Guns- Russian Roulette. How much would you pay to reduce the number of bullets from 6 to 5. If you could
borrow against future earnings, how much would you pay? A lot, like a million dollars. You'd spend less to buy
back the second bullet than you did the first, because if you don't buy the first, you're automatically dead, and you'll
spend tons of money to avoid that. You'll spend less on the second bullet because you don't have as much money
left and there's already a chance that you'll be alive anyway.
Saving identified lives- what if we were dealing with saving identified lives, instead of random lives. So, if we
could save one life for $8 million that would otherwise be gone. There, the probability of death is either 100% or
zero. So, if it were my life, I'd definitely pay more than the economic value of my life.
Chapter 21: Environmental Regulation
- should we focus on saving lives in the present or lives in the future? We'll be richer people in the future, and
richer people value the environment more. From an economic standpoint, lives in the future are worth more than
lives in the present. This is one of the concerns that the EPA addresses in the areas of climate change and
endangered species.
1) The Coase Theorem for Externalities- (** Question 1, page 712) The result of the Coase Theorem in externality
situations is that from an economic efficiency standpoint, the outcome will be the same regardless of the assignment
of property rights, because parties will bargain to the mutually beneficial solution.
-From an equity standpoint the results will be different. The efficient outcome will be the same, but the well-being
of each of the parties and the cash transfers that take place will be quite different under the two regimes (property
rights to Farmer A or to Farmer B).
-The ultimate objective from an efficiency standpoint is to avoid the more serious harm.
-Table 21.1- The Coase Bargaining Game:
-Feasible Bargaining Requirement: Maximum Offer > or = minimum acceptance.
- Bargaining Rent: Bargaining rent = maximum offer - minimum acceptance.
-Settlement with Equal Bargaining Power: Settlement outcome = (maximum offer + minimum
acceptance)/2 = Minimum acceptance + (Bargaining rent)/2.
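The Table 21.1 formulas can be sketched directly (the $100k and $60k figures below are hypothetical):

```python
def coase_bargain(max_offer, min_acceptance):
    # Feasible bargaining requirement: maximum offer >= minimum acceptance.
    if max_offer < min_acceptance:
        return None  # no bargain; the firm picks its cheapest legal option
    rent = max_offer - min_acceptance       # bargaining rent to be shared
    settlement = min_acceptance + rent / 2  # equal bargaining power splits it
    return rent, settlement

# Polluter would pay at most $100k to keep polluting; victim accepts >= $60k.
print(coase_bargain(100_000, 60_000))  # (40000, 80000.0)
print(coase_bargain(50_000, 60_000))   # None -- no feasible bargain
```

The equal-power settlement is just the midpoint of the feasible range; unequal bargaining power would shift the split of the rent, not the efficient outcome.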
a) Coase Theorem as Bargaining Game- (see table 21.1) Coase didn't really explore the bargaining process.
-If the pollution victim gets the entitlement, the polluter will have to pay for the damage or to control the effects.
The polluter's maximum offer will be the minimum of either the control costs (like scrubbers) or the penalty that
will be imposed if the firm inflicts the externality.
-The victim's minimum amount he's willing to accept in return for suffering the pollution will be the amount of
compensation that restores his level of utility to what it would have been in the absence of pollution. This is the
minimum acceptance value.
-There will be a feasible bargaining range if the polluter's max. offer exceeds the victim's minimum acceptance
value. If that's not the case, there will be no bargain. Firms will select the minimum cost alternative of either
installing the control device or paying the legal damages.
-The bargaining rent represents the net potential gains that will be shared by the two parties as a result of being able
to strike a bargain. The objective of each side is to capture as much of the bargaining rent as possible.
b) Long-run efficiency concerns- In addition to short-run equity issues, need to consider long-run efficiency issue.
Ideally, we want incentives for entry of new firms into the industry to be governed by the full resource costs
associated with their activities. If firms are being subsidized for their pollution by citizens who don't have the
entitlement and have to pay for pollution control equipment, there will be too much entry and too much activity in
the polluting industries.
c) Transaction Costs and other problems- Coase said there will be big transaction costs in carrying out the
bargains. Reaching the efficient outcome may be expensive (like if there's a large number of pollution victims
involved). And difficult (like trying to identify the polluter).
-Imperfect info of courts about the situation and costs involved can be a problem too.
-In reality we generally don't turn the market loose and let people contract out of the externalities that are imposed.
-However, Coase theorem is useful because by assessing the outcome that would prevail with an efficient market
given different assignments of the property rights, one can better ascertain the character of the impact of a particular
regulatory program. Coase helps us evaluate gov't regs that aim at correcting market failures and ensuring
efficiency.
d) Smoking Externalities- (** Question 2, page 718) This example assumes nonsmokers would pay money to avoid
being exposed to second-hand smoke, and smokers would be willing to pay to be able to smoke in public places.
The externalities arise in that smoking makes the smoker better off and the non-smoker worse off, while restricting
smoking will make the smoker worse off and the nonsmoker better off.
-When people can patronize different restaurants, the restaurant's smoking policy will influence whether people eat
there. The same is true for people's workplaces.
-Insurance costs are influenced by smoking too. Smoking generates high per-pack costs for health insurance. However, there are offsetting savings arising from smokers' higher mortality rates, mainly lower pension and social security costs from their early departure. Because smokers die sooner, they're also less likely to get long-term diseases like Alzheimer's, which decreases some of their medical expenses later in life.
-On balance smokers save money for society in terms of net externality cost. Need to consider all of the
effects of the externality, not just the obvious ones.
2) Selecting the Optimal Policy: Standards versus fines- Lawyers say we should set standards that outlaw the risky
behavior. Economists say we should try to replicate the market by establishing a pricing method. Both methods
can lead to the efficient solution.
a) Setting the pollution tax- Need a pollution tax that raises the cost of gasoline to the socially optimal level (so that people will use less gas and produce less pollution). A tax scheme works better than a gas output restriction because the gov't gets to keep the tax revenue, while under an output restriction the gas companies keep the increased profits.
-Ideally, we want to equalize the marginal benefits and marginal costs of pollution reduction.
b) The role of heterogeneity- Where it costs more for Firm 1 to reduce pollution than it does for Firm 2, the optimal solution is a differential standard: set a tighter standard for firms whose marginal cost of reducing pollution is lower. It's cheaper for some industries to reduce pollution than others, and also cheaper for newer firms than for older firms. Imposing tighter standards on new sources than on existing ones is known as new source bias.
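-A rough sketch of why heterogeneity matters (numbers assumed): under a common pollution tax, each firm abates until its marginal cost equals the tax, so the low-cost abater does most of the cleanup, at lower total cost than a uniform requirement:

```python
# Illustrative numbers (assumed, not from the text): two firms with linear
# marginal abatement costs MC_i(q) = c_i * q. A uniform tax t leads each
# firm to abate until MC_i(q_i) = t, so q_i = t / c_i -- the low-cost
# firm (Firm 2) does more of the cleanup.
c1, c2 = 4.0, 1.0   # Firm 1 is the high-cost abater
t = 8.0             # per-unit pollution tax

q1, q2 = t / c1, t / c2
print(q1, q2)  # 2.0 8.0 -- same tax, very different abatement levels

# Total cost of abating q1 + q2 units this way (area under the MC curves):
cost_tax = 0.5 * c1 * q1**2 + 0.5 * c2 * q2**2
# Versus a uniform standard forcing each firm to abate the same total amount:
q_uniform = (q1 + q2) / 2
cost_uniform = 0.5 * c1 * q_uniform**2 + 0.5 * c2 * q_uniform**2
print(cost_tax, cost_uniform)   # 40.0 62.5
```

Same total abatement either way, but the uniform rule costs over 50% more.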
c) The role of Uncertainty- It's easy to set optimal standard when compliance costs and benefits arising from
policies are known. In reality, there's much uncertainty about the marginal cost curves of pollution reduction.
There are potential losses from both excessive regulation and inadequate regulation.
-If the uncertainty with respect to cost is greater than with respect to benefits, as is usually the case, then a fee
system is preferable to a standards system.
d) Pollution Taxes- (** Question 3, page 727) Figure 21.6 illustrates using a pollution tax to promote optimal
pollution control. Set fine equal to marginal benefits (a horizontal line) of pollution control. This leads the firm to
install pollution-control equipment needed to achieve the optimal pollution control level.
-Firms will prefer the standards system, because under the standards system, the only costs incurred are compliance
costs. Under the fine system, firm must pay both the compliance costs and also the fine for all of the pollution that
remains above the optimal control point. Two observations on this:
(1) The fines may be desirable from a long-run efficiency standpoint, because we want all economic actors
to pay the full price of their actions. If they don't do this, the incentive to enter polluting industries will be too
great. Under a standards system, society will provide a subsidy to these polluting industries equal to the value of
the remaining pollution.
(2) The imposition of costs on firms can be altered to make its impact more similar to that of a standard by
making the fine asymmetric. Fines are more able to respond to heterogeneity in compliance costs than standards are.
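-A small numerical comparison of the two systems (all figures assumed):

```python
# Assumed numbers: the firm abates to the optimal control level either way.
compliance_cost = 500.0      # cost of the control equipment
remaining_pollution = 100.0  # units still emitted at the optimum
fee = 2.0                    # tax per unit of remaining pollution

cost_under_standard = compliance_cost
cost_under_fee = compliance_cost + fee * remaining_pollution
print(cost_under_standard, cost_under_fee)   # 500.0 700.0
# The extra 200.0 is the implicit subsidy a standards system gives the
# polluting industry -- the value of the uncharged remaining pollution.
```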
3) Current Market Trading Policies- (** Question 5, page 731) 4 different policies are available: Netting, offsets,
bubbles, and banking.
-Under the EPA's bubble concept, a firm doesn't have to meet compliance requirements for every particular
emissions source at a firm. The plant is surrounded by an imaginary bubble. Total emissions that emerge from the
bubble must be limited to a certain level. This gives the firm some flexibility in terms of what sources it will
choose to control at the plant. Firms will choose to reduce pollution from sources where it's cheaper to do so. This
results in compliance cost savings for firms.
-The text doesn't say why environmentalists disliked the bubble policy. A possible reason is that it negates the environmental benefits from lower emissions that would arise if the firm reduced the pollution from one source without simultaneously increasing the pollution from another. There's no net environmental gain from the bubble policy, just a cost savings for firms.
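-A sketch of the bubble logic with made-up sources and costs: control the cheapest sources first, subject only to the plant-wide cap:

```python
# Sketch of the bubble idea with made-up numbers: the plant must keep total
# emissions under a cap, but may choose WHICH sources to control.
sources = {          # source: (uncontrolled emissions, cost to control it)
    "boiler":  (60, 900.0),
    "stack_a": (30, 100.0),
    "stack_b": (30, 150.0),
}
cap = 60

total = sum(e for e, _ in sources.values())   # 120 units uncontrolled
needed = total - cap                          # must remove 60 units

# Control the cheapest sources (per unit removed) until under the cap.
ranked = sorted(sources.items(), key=lambda kv: kv[1][1] / kv[1][0])
spent, removed = 0.0, 0
for name, (emis, cost) in ranked:
    if removed >= needed:
        break
    removed += emis
    spent += cost
print(removed, spent)   # 60 250.0 -- vs 900.0 if forced to control the boiler
```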
4) The Enforcement and Performance of Environmental Regulation
a) Enforcement Options- EPA has to monitor to make sure that regs are complied with. Easier to monitor some
risks than others. Two kinds of financial penalties can be used: (1) administrative penalties that are usually modest
in size and limited in terms of the circumstances in which they can be levied; and (2) Civil or criminal legal
penalties through the DOJ.
b) Enforcement Trends- (** Question 4, page 741) Superfund was established in 1980 to deal with toxic waste
cleanups and disposal. EPA has made more use of litigation to deal with Superfund violators.
-There are signs that environmental policies are having an impact on enterprise pollution decisions. There have
been fairly dramatic declines in various forms of air pollution since 1970, and that's evidence of some payoff to
society from the regulatory and compliance costs that have been incurred.
Chapter 22: Product Safety
1) Emergence of Product Safety Regulations- Social regulation agencies first started out in the 1960s. Courts began to award big damages in product liability cases, and business had to adjust.
- Many factors impact product safety: the producers' safety decisions, consumer complaints and decisions to buy
(influenced by media, gov't, producers), consumer safety decisions. Safety is the outcome of the joint influence of
producers' safety decisions and user actions.
-1970s regs focused on technological safety solutions. 1980s focused on right-to-know policies and user
responsibility (like drunk driving, mandatory safety belt use). 1990s was marked by corporations adjusting to huge
liability problems from product safety.
2) Pre-manufacturing Screening: The Case of Pharmaceuticals- Regulations that hit products at an early stage are those relating to pre-manufacturing product screening (like FDA approval of new drugs). This screening is due in large part to the thalidomide disaster of the 1950s.
-The benefits of stringent screening arise from the decreased risk of approving a drug that might have adverse side
effects. Stringent screening also brings costs:
-a) testing costs and foregone opportunity to market a potentially profitable drug.
-b) society may be deprived of potentially beneficial drugs with life-extending properties.
a) Weighing the significance of side effects- FDA will often approve drugs with negative side effects, but only if
the value of the drug outweighs the side effects.
b) Drug Approval Strategies- (** Question 1, page 757) Table 22.1 reflects the nature of the tradeoff. FDA may
review a beneficial drug, but could reject it because of misleading test results. Or the sponsoring firm could
abandon the drug because of the costs of the lengthy approval process. Situations where the FDA review process
leads to rejection of potentially beneficial drugs are designated Type I errors.
-If the FDA adopted a more lenient drug approval strategy, there'd be the danger of approving dangerous drugs that
shouldn't be in the market. Errors of this type are Type II errors. Ideally the FDA wants to approve all beneficial
drugs and reject all unsafe drugs. But limits on resources and info make that impossible. So, FDA has to strike a
balance between Type I and Type II errors.
Table 22.1:
                       State of the World
FDA Policy Decision    New drug is safe and effective    New drug isn't safe and effective
Accept                 Correct policy decision           Type II error
Reject                 Type I error                      Correct policy decision
-FDA tends to place too great an emphasis on Type II errors, seeking to avoid approving drugs with potentially
adverse consequences. One factor in this is that victims of Type II errors are more readily identifiable than victims
of Type I errors.
-FDA should seek to minimize the total costs of the approval process. Two parts to this: (1) the expected health
costs from use of a drug that's not safe and effective, which decline with increased testing, and (2) the R & D costs
to gain FDA approval and costs of delay in introducing safe and effective drugs, which increase with increased
testing.
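-A stylized sketch of this cost minimization (functional forms and numbers are made up, not from the book): expected health costs fall with testing time T, R&D and delay costs rise with T, and the FDA should pick the T minimizing the sum:

```python
# Assumed cost functions, for illustration only.
def health_cost(T):        # expected costs of unsafe drugs: fall with testing
    return 100.0 / (1.0 + T)

def rd_and_delay_cost(T):  # R&D and delay costs: rise with testing
    return 10.0 * T

# Search a grid of testing durations for the total-cost minimum.
candidates = [t / 10 for t in range(1, 101)]   # T from 0.1 to 10.0
best_T = min(candidates, key=lambda T: health_cost(T) + rd_and_delay_cost(T))
print(best_T)   # 2.2 -- neither zero testing nor maximal testing is optimal
```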
c) Accelerated Drug Approval Process- In 1987, FDA started an accelerated drug-approval process for drugs that
address life threatening diseases like AIDS. This is because such drugs could help a well-defined constituency that
might lobby for faster approval, and also because there's less potential health risk because of the high risk of
mortality from such diseases anyway.
3) The Behavioral Response to Product Safety Regulation- (** Question 2, page 761) Peltzman argued that advances
in auto safety technology would influence the behavior of drivers. They might drive faster and take more risks if
safety technology would protect them better.
-Figure 22.3 illustrates this. Initially the driver is at point A, where the line 0A gives the relationship between
driving intensity and driver's risk of death before regulation. With safety belt use, the risk curve drops to 0BC. If
the driver still takes the same precautions, his death risk will drop to point B. However, because the marginal
benefits to the driver of taking the precaution have been reduced, he'll increase his driving intensity so that his death
rate will end up at C (higher than B), thus muting some of the effect of safety belts.
-He'll increase his intensity when there are safety improvements because the marginal benefit of driving slowly has been reduced. The driver now finds it desirable to drive faster once he's using devices that decrease his risk of injury or property damage from an accident (see fig. 22.4).
a) Consumer's Potential for Muting Safety Device Benefits- Basically, once individuals have better safety
technology, they'll have incentive to drive faster, thus muting and possibly offsetting the beneficial effects of the
safety device. Like driving slow when streets are icy, and then increasing speed when ice melts.
-Results seem to be that: (1) auto safety regs have reduced the risks to drivers and motor vehicle occupants. (2)
Drivers wearing seat belts tend to drive faster. (3) On balance, safety regs have a risk-reducing effect, although
there's a muting of the impact of safety regs by the decrease in the care exercised by drivers.
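-A toy version of the Peltzman argument (the functional forms and numbers here are all assumed): the driver picks speed to maximize a saturating benefit of speed minus expected crash losses, and a device scales the baseline risk down by a factor k < 1:

```python
import math

# Toy Peltzman model. Baseline death risk is 0.001*s**2 at speed s; a
# safety device multiplies that risk by k < 1. The driver maximizes a
# saturating benefit of speed minus the expected loss from a crash.
def chosen_speed(k, loss=100.0):
    speeds = [s / 10 for s in range(1, 1001)]
    utility = lambda s: 50.0 * (1 - math.exp(-0.1 * s)) - k * 0.001 * s**2 * loss
    return max(speeds, key=utility)

s_before, s_after = chosen_speed(k=1.0), chosen_speed(k=0.5)
risk_before = 1.0 * 0.001 * s_before**2
risk_after = 0.5 * 0.001 * s_after**2

print(s_after > s_before)              # True: drives faster with the device
print(risk_after < risk_before)        # True: risk still falls...
print(risk_after > 0.5 * risk_before)  # True: ...but by less than half
```

This matches finding (3) above: the device reduces risk on balance, but behavior mutes part of the engineering gain.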
b) The Lulling Effect- (** Question 2, cont'd) Safety caps have been used to protect kids from aspirin and other drugs.
They reduce the benefits to parents of putting medicines in a place that's tough for kids to reach, and they also lull
parents into a false sense of security. Over-trusting the safety caps may lead to an additional decline in safety
precautions- "the lulling effect".
-Figure 22.5 illustrates the lulling effect, page 765. Basically, safety caps lead to lower expected loss curves. But
they also lead to less safety-related effort. The lower effort may offset the lower expected loss curve and actually
result in higher expected losses.
c) Effects of Consumer's perception of Safety Device Efficacy- (** Question 2, cont'd) If consumers don't
accurately perceive the efficacy of the safety device, if they believe the device reduces the perceived risk more than
it really does, then the safety device is much more likely to produce a counterproductive result.
-Some studies show that safety devices like safety caps have actually resulted in more poisonings. This doesn't
mean all regs are bad. It just means that we should view safety as a product of both engineering controls and
individual behavior.
4) The Costs of Product Safety Regulation: The Automobile Industry Case- Autos were hit with many safety regs beginning in the 1970s. Safety and environmental regs for cars led to higher prices.
5) Trends in Motor Vehicle and Home Accident Deaths- These account for most accident deaths and disabling
injuries.
a) Accident rate influences- With increasing affluence, society has demanded better safety. Should test safety
improvements in terms of accidents or deaths per mile driven, not absolute totals, because of increased driving
frequency today. Auto death rates have declined from 21.65 per 100 mil vehicle miles in 1923 to 2.55 per 100 mil
vehicle miles in 1987, while the overall accident death rate has risen from 16.5 per 100,000 population in 1923 to 20
per 100,000 population in 1987.
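-Quick arithmetic check on these figures; the two trends are consistent because vehicle-miles per person grew enormously over the period:

```python
# The notes' figures: per-mile rates fell sharply while the
# population-based rate rose.
per_mile_1923, per_mile_1987 = 21.65, 2.55   # deaths per 100M vehicle-miles
per_pop_1923, per_pop_1987 = 16.5, 20.0      # deaths per 100k population

print(round(1 - per_mile_1987 / per_mile_1923, 3))  # 0.882: an ~88% decline
print(round(per_pop_1987 / per_pop_1923, 2))        # 1.21: a ~21% increase
```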
b) The decline of accident rates- (** Question 3, page 773) From the above numbers, we see that motor accident
death rates per mile have been declining for the past 60 years or so. The decline started before the gov't regs began
in the 1970s. The same is true for home accident rates. The rate of decline in auto accident deaths has increased
since the 1970s, and much of this decline is due to the impact of safety regs.
6) The Rise of Product Liability- Strict liability is used more than negligence in product liability cases, which leads to much greater liability for firms.
-Under Negligence, a firm is liable if it doesn't meet the due care standard. Under SL, a firm is liable for all
accident costs incurred by the consumer even if the firm took due care.
7) Risk Information and Hazard Warnings- (** Question 4, page 781) One of the rationales for market failure is that
consumers don't have perfect info regarding the safety of the products they purchase. Where consumers know the
average product risk, but not the risk posed by the individual product, there will be a phenomenon like the classic
lemon problem. Table 22.11 represents this in the auto-safety context.
-Suppose there are 3 classes of cars with safety ranging from low to high. If consumers had perfect info, they'd pay up to $30,000 for the high-safety car, and as little as $20,000 for the low-safety car. Since they can't distinguish the different degrees of safety, they'll make judgments based on the average safety of the entire group, which produces an average value to consumers of $23,500.
-The losers from this approach are the producers of high-safety cars, and the winners are the producers of low-safety
cars. This kind of redistribution is a standard property of lemons markets.
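-The pooling arithmetic, with a $20,500 mid-safety value assumed here (the notes only give the high, low, and average figures):

```python
# Lemons-pooling arithmetic from the notes: high-safety cars worth $30,000,
# low-safety $20,000, pooled value $23,500. The $20,500 mid-safety value is
# an assumption (not in the notes) chosen so the three-way average works.
values = {"high": 30_000, "mid": 20_500, "low": 20_000}

pooled_price = sum(values.values()) / len(values)
print(pooled_price)   # 23500.0

# Gains and losses from pooling, by producer type:
for tier, v in values.items():
    print(tier, pooled_price - v)   # high-safety producers lose $6,500
```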
a) Self-certification of Safe Products- Firms can avoid this pooling problem by identifying themselves as
producers of safer products. High-end producers would have the most incentive for this sort of identification.
Could do this by issuing warranties and guarantees.
b) Gov't Determination of Safety- Gov't has tried to meet the informational needs of consumers. Like warning
labels on cigarettes, saccharin, alcohol.
-Info regs are better than command and control regs because: (1) we don't want to ban the possibly risky activity
altogether and (2) consumers can act on the info based on their own risk preferences. Info regs don't interfere with
market processes as much as command and control regs do.
Chapter 23: Regulation of Workplace Health and Safety
1) Intro.- Workplace health and safety levels are governed by 3 influences: the market (less risk = lower wages),
direct reg. of risk levels by OSHA (less risk= lower regulatory penalties for noncompliance), and safety incentives
created through workers' comp (less risk = reduced workers' comp premiums). In the 70s, OSHA took a lot of criticism for silly regs.
-OSHA should aim at providing regs that reduce the most risks per dollar of regulatory cost.
2) How markets can promote safety- Markets can allow workers to make rational judgments about the risk-level of
the jobs they take, but workers must be informed. If workers don't perceive the risks, they won't demand extra pay
to work on a hazardous job. In reality, workers have better info about safety risks than health risks on the job.
-The market level of safety will be the level where the marginal cost of safety curve intersects the marginal value of
safety curve. Note that the market level will be below the no-risk level of safety, because that is prohibitively
costly.
3) OSHA's Regulatory Approach- (** Question 2, page 804) The OSHA Act of 1970 authorizes OSHA to set standards and to do so in a manner that will ensure worker health and safety, but it doesn't specify what form the standards should take or the nature of enforcement.
a) Setting OSHA standard Levels- OSHA's general approach is that of adopting technology-based standards whose
stringency is limited only by their affordability.
-Figure 23.3 illustrates how OSHA's approach differs from a benefit-cost approach. Assume that the marginal
safety-benefit curve is flat, so that there's a constant unit benefit value. The marginal cost of providing safety is
rising, as it becomes increasingly more expensive to promote safety. OSHA looks for the point where added safety
becomes prohibitively expensive, the point after which the marginal costs skyrocket. OSHA doesn't care about
costs too much, except when they get so high that they'd put the firm out of business.
-In cotton-dust standard case of 1981, Sup Ct said the feasibility requirements of the OSHA Act meant
"capable of being done," not "cost-effective".
-OSHA will continue to focus on the level of risk reduction rather than the associated costs until Congress
amends the OSHA Act.
-Economists say OSHA should take a more balanced approach that recognizes the necessity of taking into account
both the costs and risk-reduction benefits in a comprehensive manner. Look to relationship of marginal benefits
and marginal costs, not just where the costs shoot up. Maybe some inefficient firms should go out of business if
they can't function in a safe manner.
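-A sketch of the contrast in Figure 23.3 (the curves and the "prohibitive" cost cutoff are assumed for illustration):

```python
# Assumed curves: marginal benefit of safety is a flat $10 per unit;
# marginal cost rises as MC(q) = q**2 / 10.
mb = 10.0
mc = lambda q: q**2 / 10

grid = [q / 10 for q in range(1, 501)]
# Benefit-cost approach: expand safety until MC reaches MB.
q_bc = max(q for q in grid if mc(q) <= mb)
# OSHA-style approach: expand until MC becomes "prohibitive" (say, $100).
q_osha = max(q for q in grid if mc(q) <= 100.0)

print(q_bc, q_osha)   # 10.0 31.6 -- OSHA mandates far more than MB = MC
```

Every unit of safety between q_bc and q_osha costs more than it's worth, which is the economists' complaint.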
b) The nature of OSHA standards- Ideally, OSHA should let firms achieve any given level of safety in the least
expensive manner possible, consistent with having well-defined regulations that are enforceable (this would be a
performance-oriented approach). Instead, OSHA adopts uniform standards that apply to everyone.
c) The reform of OSHA standards- Shift emphasis from safety to health, let firms find less expensive techniques
for promoting safety, set standards in a more balanced fashion that attempts to recognize the health benefits to
workers and the costs to firms.
4) Changes in OSHA Standards- Carter overhauled safety standards, got rid of the most ill-conceived standards.
a) Chemical Labeling- Main change was OSHA's chemical labeling regulation. Provided workers with info in
order to let market forces promote safety. Main form of info was labels on the chemicals, and training workers in
using chemicals. Focused on health hazards over safety risks.
b) The Economic Role of Hazard Warnings- (** Question 3, page 808) Hazard warnings are attractive from an economic standpoint because they provide info to workers and aid market forces. They also promote safe user behavior.
-The hazard warnings in table 23.2 led workers to increase their risk perceptions of jobs that handled hazardous
materials. If the market operates efficiently, the new risk perceptions should lead to additional wage compensation
for the jobs. If the higher wages weren't given, workers who handled the hazardous materials would quit.
-Studies show that warnings augment market forces by informing workers of the risks they face, and also lead to
increased precautions as individuals become better informed of the risks they face and the precautions needed to
reduce those risks.
-To be successful, hazard warnings must supply workers with new info. Simple browbeating won't change worker
behavior as much.
-Material in chapter 22, page 784 shows that for consumers, warnings and additional risk information increase the frequency with which consumers wear rubber gloves or store dangerous products in child-proof locations. Some consumers still won't take precautions because they find them too onerous.
5) OSHA's Enforcement Strategy- (** Question 4, page 811) Firms will choose to comply with OSHA standards if
OSHA establishes effective financial incentives for doing so. In other words, the firm must find it more attractive
financially to make the safety improvements than to risk an adverse OSHA inspection.
-OSHA has 4 inspection types, in descending priority: (1) inspections of imminent dangers, (2) inspections of
fatalities and catastrophes, (3) investigations of worker complaints and referrals, and (4) programmed inspections.
OSHA tends to focus too much on small firms with few workers, and focuses too little on large firms with many
workers.
-OSHA offers firms the chance to reduce their penalties somewhat (~30%) if they seek to fix the problems.
Penalties average about $50 per violation. Workers' comp and higher wages for hazardous jobs are much greater
influences than OSHA at providing financial incentives for safety. Inspections have little additional deterrence
value.
6) The Impact of OSHA Enforcement on Worker Safety-(** Question 4 cont'd, page 816) In math terms, a firm will
comply with an OSHA regulation if:
Expected cost of compliance < [probability of inspection x Expected # of violations per inspection x
Average penalty per violation].
-A firm has a one in 200 chance of being inspected each year, there's an average of only 2 violations per
visit, and each violation costs an average of $60.
-The OSHA penalties cost an average of 50 cents per worker, while market wage forces and workers' comp
imposes a cost of $800 per worker. OSHA's standards may be doomed to fail: strict regulations with lax
enforcement.
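-Plugging the notes' figures into the compliance condition:

```python
# The compliance condition: a firm complies only if compliance costs less
# than the expected penalty for ignoring the standard. Figures from the notes.
p_inspection = 1 / 200        # annual inspection probability
violations_per_visit = 2
penalty_per_violation = 60.0

expected_penalty = p_inspection * violations_per_visit * penalty_per_violation
print(round(expected_penalty, 2))   # 0.6 -- roughly the per-worker figure cited

# Versus the roughly $800-per-worker incentive from wages and workers' comp:
print(round(800.0 / expected_penalty))   # 1333 -- enforcement is ~1,300x weaker
```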
7) OSHA and other factors affecting injuries- (** Question 5, page 819) Not all workplace injuries are due to factors under OSHA's influence. OSHA hasn't caused accidents to fall by the expected 50% because many accidents are due in large part to worker actions. Estimates of the accident reduction under full compliance with OSHA regs put the reduction between 10% and 50%. Even without gov't reg., there would still probably be a trend toward fewer workplace accidents.
-Figure 23.6 shows death risk for American workers dropped from 15.8 per 100,000 workers in 1928 to 6.8 per
100,000 workers in 1970 (before the birth of OSHA). After OSHA, the death rate shrank to 4.6 deaths per 100,000
in 1987.
-Figure 23.7 demonstrates the effects of OSHA. Since OSHA came into effect, there has been a greater decline in
accident deaths than one would expect based on the pre-OSHA accident decline trend.
-Two approaches can be used to judge the effectiveness of OSHA: (1) estimate an equation to characterize the injury-rate performance during the pre-OSHA era, then project the expected post-OSHA accident rate and compare it to the actual post-OSHA accident rate. (2) Equation 23.2 is estimated on an industry-specific basis, using only
post-regulation data:
Risk(t) = d + B(1)*Risk(t-1) + B(2)*Cyclical Effects(t) + B(3)*Industry Characteristics(t) + B(4)*Worker Characteristics(t) + B(5)*[sum from i=0 to n of OSHA(t-i)] + E(t).
-It's like the pre-regulation simulation, except that it uses only post-regulation data and includes variables that capture the effect of the regulation; the principal variables used in the literature pertain to the rate of OSHA inspections or the expected penalty level.
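-A sketch of approach (1) with made-up injury-rate data: fit the pre-OSHA trend by least squares, project it into the post-OSHA years, and compare with actuals:

```python
# Hypothetical injury rates per 100 workers (not real data).
pre = {1965: 10.0, 1966: 9.7, 1967: 9.4, 1968: 9.1, 1969: 8.8, 1970: 8.5}
post_actual = {1975: 6.8, 1980: 5.2}

# Ordinary least-squares line through the pre-period points.
n = len(pre)
mx = sum(pre) / n                     # mean year
my = sum(pre.values()) / n            # mean injury rate
slope = (sum((x - mx) * (y - my) for x, y in pre.items())
         / sum((x - mx) ** 2 for x in pre))
intercept = my - slope * mx

# If the actual post-OSHA rate falls below the projected trend, that gap is
# (loosely) attributed to the regulation.
for year, actual in post_actual.items():
    projected = intercept + slope * year
    print(year, round(projected, 2), actual)   # actual sits below projection
```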
-General consensus based on data is that OSHA hasn't had a substantial impact. A small impact perhaps. The
market will continue to be the key influence on worker safety. OSHA would be better off focusing on health issues
over safety issues (I guess because workers need more info about health issues, and also to be sure that it's getting as
much health and safety improvement as possible for the costs imposed).