Chapter 2
Geometry of Linear Programming
The intent of this chapter is to provide a geometric interpretation of
linear programming problems. Convexity theory is the key to grasping
the fundamental concepts and the validity of the different algorithms
encountered in optimization. The last section is on the graphical
method of solving linear programming problems.
2.1 Geometric Interpretation
Let Rn denote the n-dimensional (Euclidean) vector space defined over
the field of reals. Suppose X, Y ∈ Rn . For X = (x1 , x2 , . . . , xn )T and
Y = (y1 , y2 , . . . , yn )T we define the distance between X and Y as

|X − Y | = [(x1 − y1 )² + (x2 − y2 )² + · · · + (xn − yn )²]^{1/2} .
Neighbourhood. Let X0 be a point in Rn . Then the δ-neighbourhood
of X0 , denoted by Nδ (X0 ), is defined as the set of points

Nδ (X0 ) = {X ∈ Rn : |X − X0 | < δ, δ > 0} .

The set

Nδ (X0 ) \ {X0 } = {X ∈ Rn : 0 < |X − X0 | < δ}

will be termed the deleted neighbourhood of X0 .
In R2 , Nδ (X0 ) is a circle without its circumference, in R3 , Nδ (X0 )
is a sphere without its boundary, and in R, an open interval on the real
line. For n > 3, such figures cannot be visualized.
Let S ⊂ Rn . We give a few elementary definitions.
Boundary point. A point X0 is called a boundary point of S if
each deleted neighbourhood of X0 intersects both S and its complement S c .
Interior point. A point X0 ∈ S is said to be an interior point of
S if there exists a neighbourhood of X0 which is contained in S.
Open set. A set S is said to be open if for each X ∈ S there exists
a neighbourhood of X which is contained in S.
For example, S = {X ∈ Rn : |X − X0 | < 2} is an open set. The well-known
results: (i) a set is open ⇔ each of its points is an interior point,
and (ii) the union of any number of open sets is an open set, are left
as exercises for the reader.
Closed set. A set S is closed if its complement S c is open.
For example, S = {X ∈ Rn : |X − X0 | ≤ 3} is a closed set. Again, a
useful result arises: the intersection of any number of closed sets is closed.
A set S in Rn is bounded if there exists a constant M > 0 such
that |X| ≤ M for all X in S.
Definition 1. A line joining X1 and X2 in Rn is a set of points given
by the linear combination
L = {X ∈ Rn : X = α1 X1 + α2 X2 , α1 + α2 = 1}.
Obviously,

L+ = {X : X = α1 X1 + α2 X2 , α1 + α2 = 1, α2 ≥ 0}

is a half-line originating from X1 in the direction of X2 , since for α2 = 0,
X = X1 and for α2 = 1, X = X2 .
Similarly,

L− = {X : X = α1 X1 + α2 X2 , α1 + α2 = 1, α1 ≥ 0}

is a half-line emanating from X2 in the direction of X1 , since for α1 = 0,
X = X2 and for α1 = 1, X = X1 .
Definition 2. A point X ∈ Rn is called a convex linear combination
(clc) of two points X1 and X2 , if it can be expressed as
X = α1 X1 + α2 X2 , α1 , α2 ≥ 0, α1 + α2 = 1.
Geometrically speaking, the convex linear combination of any two points X1
and X2 is the line segment joining X1 and X2 .
For example, let X1 = (1, 2) and X2 = (3, 7). Then

X = (1/3)(1, 2) + (2/3)(3, 7) = (1/3 + 2, 2/3 + 14/3) = (7/3, 16/3)

is a point lying on the line segment joining X1 and X2 .
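This arithmetic is easy to check numerically. A minimal sketch in Python/numpy (the points and weights are those of the example above):

```python
import numpy as np

X1 = np.array([1.0, 2.0])
X2 = np.array([3.0, 7.0])
a1, a2 = 1/3, 2/3          # a1, a2 >= 0 and a1 + a2 = 1

X = a1 * X1 + a2 * X2
print(X)                   # [2.33333333 5.33333333] = (7/3, 16/3)

# X lies on the segment joining X1 and X2:
# X - X1 must equal t * (X2 - X1) for some 0 <= t <= 1 (here t = a2).
t = np.linalg.norm(X - X1) / np.linalg.norm(X2 - X1)
print(np.allclose(X, X1 + t * (X2 - X1)), 0 <= t <= 1)   # True True
```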
Convex set. A set S is said to be convex if the clc of any two points
of S belongs to S, i.e.,

X = α1 X1 + α2 X2 ∈ S,  α1 + α2 = 1,  α1 , α2 ≥ 0  ∀ X1 , X2 ∈ S.

Geometrically, the definition says that the line segment joining every pair
of points X1 , X2 of S lies entirely in S. For an illustration, see Fig. 2.1.
Figure 2.1 Examples of three convex sets and one nonconvex set
By convention, the empty set is convex. Every singleton set is convex. A
straight line is a convex set, and a plane in R3 is also a convex set. Convex
sets have many pleasant properties that give a strong mathematical
background to optimization theory.
Theorem 1. The intersection of two convex sets is a convex set.

Proof. Let S1 and S2 be two convex sets. We have to show that
S1 ∩ S2 is a convex set. If this intersection is empty or a singleton, there
is nothing to prove.
Let X1 and X2 be two arbitrary points in S1 ∩S2 . Then X1 , X2 ∈ S1
and X1 , X2 ∈ S2 . Since S1 and S2 are convex, we have
α1 X1 + α2 X2 ∈ S1 and α1 X1 + α2 X2 ∈ S2 , α1 , α2 ≥ 0, α1 + α2 = 1.
Thus, α1 X1 + α2 X2 ∈ S1 ∩ S2 , and hence S1 ∩ S2 is convex.
Remarks. 1. Moreover, it can be shown that the intersection of any number
of convex sets is a convex set; see Problem 3.
2. The union of two or more convex sets may not be convex. For
example, the sets S1 = {(x1 , 0) : x1 ∈ R} and S2 = {(0, x2 ) : x2 ∈ R}
are convex in the x1−x2 plane, but their union

S1 ∪ S2 = {(x1 , 0), (0, x2 ) : x1 , x2 ∈ R}

is not convex, since (2, 0), (0, 2) ∈ S1 ∪ S2 , but their clc

(1/2)(2, 0) + (1/2)(0, 2) = (1, 1) ∉ S1 ∪ S2 .
Hyperplanes and half-spaces. A plane in R3 is termed a hyperplane.
A hyperplane in R3 is the set of points (x1 , x2 , x3 )T satisfying

a1 x1 + a2 x2 + a3 x3 = β.

Extending the above idea, a hyperplane in Rn is the set of points
(x1 , x2 , . . . , xn )T satisfying the linear equation

a1 x1 + a2 x2 + · · · + an xn = β,

or aT X = β, where a = (a1 , a2 , . . . , an )T . Thus, a hyperplane in Rn is
the set

H = {X ∈ Rn : aT X = β}.    (2.1)
A hyperplane separates the whole space into two closed half-spaces
HL = {X ∈ Rn : aT X ≤ β},
HU = {X ∈ Rn : aT X ≥ β}.
Removing H results in two disjoint open half-spaces
HL0 = {X ∈ Rn : aT X < β},
HU0 = {X ∈ Rn : aT X > β}.
From (2.1), it is clear that the defining vector a of the hyperplane H is
orthogonal to H, since for any two vectors X1 , X2 ∈ H,

aT (X1 − X2 ) = aT X1 − aT X2 = β − β = 0.

Moreover, for each vector X ∈ H and W ∈ HL0 ,

aT (W − X) = aT W − aT X < β − β = 0.

This shows that the normal vector a makes an obtuse angle with any
vector that points from the hyperplane toward the interior of HL . In
other words, a is directed toward the exterior of HL . Fig. 2.2 illustrates
the geometry.
Figure 2.2 The hyperplane H = {X ∈ Rn : aT X = β} with normal vector a, separating the half-spaces HL and HU
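These two facts are easy to verify numerically. A small sketch (the hyperplane in R3 and the test points are arbitrary choices of mine):

```python
import numpy as np

a, beta = np.array([1.0, 2.0, 2.0]), 6.0   # H = {X : aT X = 6}

X1 = np.array([6.0, 0.0, 0.0])   # aT X1 = 6, so X1 lies in H
X2 = np.array([0.0, 3.0, 0.0])   # aT X2 = 6, so X2 lies in H
W  = np.array([0.0, 0.0, 0.0])   # aT W = 0 < 6, so W lies in HL0

print(a @ (X1 - X2))   # 0.0  : a is orthogonal to H
print(a @ (W - X1))    # -6.0 : negative, as claimed for W in HL0
```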
Theorem 2. A hyperplane in Rn is a closed convex set.
Proof. The hyperplane in Rn is the set S = {X ∈ Rn : aT X = α}.
We prove that S is a closed convex set. First, we show that S is closed.
To do this we prove that S c is open, where

S c = {X ∈ Rn : aT X < α} ∪ {X ∈ Rn : aT X > α} = S1 ∪ S2 .

Let X0 ∈ S c . Then X0 ∉ S. This implies

aT X0 < α  or  aT X0 > α.

Suppose aT X0 < α, and let aT X0 = β < α. Define

Nδ (X0 ) = {X ∈ Rn : |X − X0 | < δ},  δ = (α − β)/|a| .    (2.2)

If X1 ∈ Nδ (X0 ), then in view of (2.2),

aT X1 − aT X0 ≤ |aT X1 − aT X0 | = |aT (X1 − X0 )| ≤ |a| |X1 − X0 | < |a| δ = α − β.

But aT X0 = β. This implies aT X1 < α and hence X1 ∈ S1 . Since X1
is arbitrary, we conclude that Nδ (X0 ) ⊂ S1 . This implies S1 is open.
Similarly, it can be shown that S2 = {X : aT X > α} is open. Now,
S c = S1 ∪ S2 is open (being the union of open sets), which proves that S is
closed.

Let X1 , X2 ∈ S. Then aT X1 = α and aT X2 = α. Consider

X = β1 X1 + β2 X2 ,  β1 , β2 ≥ 0,  β1 + β2 = 1,
and, operating with aT , note that

aT X = β1 aT X1 + β2 aT X2 = β1 α + β2 α = α(β1 + β2 ) = α.

Thus, X ∈ S and hence S is convex.
Theorem 3. A half-space S = {X ∈ Rn : aT X ≤ α} is a closed convex
set.
Proof. Let S = {X ∈ Rn : aT X ≤ α}. Suppose X0 ∈ S c . Then
aT X0 > α; let aT X0 = β > α. Consider the neighbourhood Nδ (X0 )
defined by

Nδ (X0 ) = {X ∈ Rn : |X − X0 | < δ},  δ = (β − α)/|a| .

Let X1 be an arbitrary point in Nδ (X0 ). Then

aT X0 − aT X1 ≤ |aT X1 − aT X0 | = |aT (X1 − X0 )| ≤ |a| |X1 − X0 | < |a| δ = β − α.

Since aT X0 = β, we have

−aT X1 < −α ⇒ aT X1 > α ⇒ X1 ∈ S c ⇒ Nδ (X0 ) ⊂ S c .

This implies S c is open and hence S is closed.
Take X1 , X2 ∈ S. Hence aT X1 ≤ α, aT X2 ≤ α. For

X = α1 X1 + α2 X2 ,  α1 , α2 ≥ 0,  α1 + α2 = 1,

note that

aT X = aT (α1 X1 + α2 X2 ) = α1 aT X1 + α2 aT X2 ≤ α1 α + α2 α = α(α1 + α2 ) = α.

This implies X ∈ S, and hence S is convex.
Polyhedral set. A set formed by the intersection of a finite number of
closed half-spaces is termed a polyhedron or polyhedral set.
If the intersection is nonempty and bounded, it is called a polytope.
For a linear programme in standard form, we have the m hyperplanes

Hi = {X ∈ Rn : aiT X = bi , X ≥ 0, bi ≥ 0},  i = 1, 2, . . . , m,
where aiT = (ai1 , ai2 , . . . , ain ) is the ith row of the constraint matrix A
and bi is the ith element of the right-hand-side vector b.
Moreover, for a linear programme in standard form, the hyperplanes
H = {X ∈ Rn : C T X = β}, β ∈ R, depict the contours of the linear objective
function, and the cost vector C becomes the normal of these contour
hyperplanes.
Set of feasible solutions. The set of all feasible solutions forms the
feasible region, generally denoted by PF ; it is the intersection of the
hyperplanes Hi , i = 1, 2, . . . , m, and the first orthant of Rn .
Note that each hyperplane is the intersection of the two closed half-spaces
HL and HU , and the first orthant of Rn is the intersection of the n closed
half-spaces {xi ∈ R : xi ≥ 0}. Hence the feasible region is a polyhedral
set, and is given by

PF = {X ∈ Rn : AX ≤ b, X ≥ 0, b ≥ 0} .
When PF is not empty, the linear programme is said to be consistent. For a consistent linear programme with a feasible solution
X ∗ ∈ PF , if C T X ∗ attains the minimum or maximum value of the
objective function C T X over the feasible region PF , then we say X ∗ is
an optimal solution to the linear programme.
Moreover, we say a linear programme has a bounded feasible region
if there exists a positive constant M such that |X| ≤ M for every
X ∈ PF . On the other hand, for a minimization problem, if there
exists a constant K such that C T X ≥ K for all X ∈ PF , then we say
the linear programme is bounded below. Similarly, we can define a bounded
linear programme for a maximization problem.
Remarks. 1. In this context, it is worth mentioning that a linear
programme with a bounded feasible region is bounded, but the converse
may not be true, i.e., a bounded LPP need not have a bounded
feasible region, see Problem 2.

2. In R3 , a polytope has a prism-like shape.
Converting equalities to inequalities. To study the geometric
properties of an LPP we consider the LPP in the form where the
constraints are of the type ≤ or ≥, i.e.,

opt  z = C T X
s.t.  gi (X) ≤ or ≥ 0,  i = 1, 2, . . . , m
      X ≥ 0.
In case there is an equality constraint like x1 + x2 − 2x3 = 5, we can
equivalently write it as x1 + x2 − 2x3 ≤ 5 and x1 + x2 − 2x3 ≥ 5.
This tells us that m equality constraints will give rise to 2m inequality
constraints. However, we can reduce a system with m equality constraints
to an equivalent system which has only m + 1 inequality constraints.
Example 1. Show that the system of m equality constraints

ai1 x1 + ai2 x2 + · · · + ain xn = bi ,  i = 1, 2, . . . , m

is equivalent to a system with m + 1 inequality constraints.
To motivate the idea, note that x = 1 is equivalent to the combination x ≤ 1 and x ≥ 1. As we can check graphically, the equations x = 1
and y = 2 are equivalent to the combinations x ≤ 1, y ≤ 2, x + y ≥ 3.
The other way to write equivalent system is x ≥ 1, y ≥ 2, x + y ≤ 3.
This idea can further be generalized to m equations. Consider the
system of m equations

ai1 x1 + ai2 x2 + · · · + ain xn = bi ,  i = 1, 2, . . . , m.

This system has the equivalent form

ai1 x1 + · · · + ain xn ≤ bi  and  ai1 x1 + · · · + ain xn ≥ bi ,  i = 1, 2, . . . , m,

or, more economically,

ai1 x1 + · · · + ain xn ≤ bi ,  i = 1, 2, . . . , m,
(a11 + · · · + am1 )x1 + · · · + (a1n + · · · + amn )xn ≥ b1 + b2 + · · · + bm .

If we look at the second combination (as in x ≥ 1, y ≥ 2, x + y ≤ 3
above), the system is likewise equivalent to

ai1 x1 + · · · + ain xn ≥ bi ,  i = 1, 2, . . . , m,
(a11 + · · · + am1 )x1 + · · · + (a1n + · · · + amn )xn ≤ b1 + b2 + · · · + bm ,

i.e., m + 1 inequality constraints in either case.
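The construction is easy to mechanize. A minimal Python sketch (the function name and layout are my own) that converts AX = b into the m + 1 inequality form GX ≤ h, using the second combination:

```python
import numpy as np

def equalities_to_inequalities(A, b):
    """Rewrite A x = b (m equations) as G x <= h with m + 1 rows:
    each row a_i . x >= b_i  (stored as -a_i . x <= -b_i),
    plus the aggregate row (sum_i a_i) . x <= sum_i b_i."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    G = np.vstack([-A, A.sum(axis=0)])
    h = np.concatenate([-b, [b.sum()]])
    return G, h

# The two-variable illustration x = 1, y = 2 from the text:
G, h = equalities_to_inequalities([[1, 0], [0, 1]], [1, 2])
print(G)   # rows encode: -x <= -1, -y <= -2, x + y <= 3
print(h)   # [-1. -2.  3.]
```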
Consider the constraints and nonnegative restrictions of an LPP in its
standard form
x1 + x 2 + s 1 = 1
− x1 + 2x2 + s2 = 1
x1 , x2 , s1 , s2 ≥ 0.
Although it has four variables, the feasible region PF can be represented
as a two-dimensional graph. Write the basic variables s1 and s2
in terms of the nonbasic variables (see Problem 24) and use the conditions
s1 ≥ 0, s2 ≥ 0 to obtain

x1 + x2 ≤ 1
− x1 + 2x2 ≤ 1
x1 , x2 ≥ 0,

which is shown in Fig. 2.3.
Figure 2.3 The feasible region PF bounded by the lines x1 + x2 = 1 and −x1 + 2x2 = 1, with vertices (0, 0), (1, 0), (1/3, 2/3) and (0, 1/2); the point E = (2/3, 1/3) lies on its boundary
Remark. If a linear programming problem in its standard form has n variables and n − 2 nonredundant constraints, then the LPP has a two-dimensional representation. Why? See Problem 24.
Theorem 4. The set of all feasible solutions of an LPP (feasible region
PF ) is a closed convex set.
Proof. By definition PF = {X : AX = b, X ≥ 0}. Let X1 and
X2 be two points of PF . This means that AX1 = b, AX2 = b, X1 ≥
0, X2 ≥ 0. Consider
Z = αX1 + (1 − α)X2 , 0 ≤ α ≤ 1.
Clearly, Z ≥ 0 and AZ = αAX1 + (1 − α)AX2 = αb + (1 − α)b = b.
Thus, Z ∈ PF , and hence PF is convex.
Remark. Note that in the above theorem b may be negative, i.e., the LPP
may not be in standard form; all that is needed is equality
constraints. If bi is negative, we multiply the ith constraint by −1.
Alternative proof. Each constraint aiT X = bi , i = 1, 2, . . . , m defines a
closed set (being a hyperplane), and hence the intersection of these m
hyperplanes (AX = b) is closed. Further, each nonnegativity restriction
xi ≥ 0 defines a closed half-space and is therefore closed, and hence their
intersection X ≥ 0 is closed. Again, the intersection PF = {AX = b, X ≥ 0}
of these closed sets is closed. Combined with convexity, this shows that
PF is a closed convex set.
Clc's of two or more points. The convex linear combination of two
points gives a line segment. The study of different regions needs the
clc of more than two points, which motivates extending the concept.
Definition 3. The point X is called a clc of m points X1 , X2 , . . . , Xm
in Rn if there exist scalars αi , i = 1, 2, . . . , m such that

X = α1 X1 + α2 X2 + · · · + αm Xm ,  αi ≥ 0,  α1 + α2 + · · · + αm = 1.
Remark. This definition includes clc of two points also. Henceforth,
whenever we talk about clc, it means clc of two or more points.
Theorem 5. A set S is convex ⇔ every clc of points in S belongs to
S.

Proof. (⇐) If every clc of points in S belongs to S, then in particular
every clc of two points belongs to S. Hence S is convex.
(⇒) Suppose S is convex. We prove the result by induction. S is
convex ⇒ the clc of every two points in S belongs to S. Hence the theorem
is true for a clc of two points. Assume that the theorem is true for a clc of n
points. We must show that it is true for n + 1 points. Consider

X = β1 X1 + β2 X2 + · · · + βn Xn + βn+1 Xn+1

such that

β1 + β2 + · · · + βn+1 = 1,  βi ≥ 0.

If βn+1 = 0, then X, being a clc of n points, belongs to S (by assumption).
If βn+1 = 1, then β1 = β2 = · · · = βn = 0 and X = 1·Xn+1 , so the
theorem is trivially true. Assume βn+1 ≠ 0 or 1, i.e., 0 < βn+1 < 1.
Then β1 + β2 + · · · + βn ≠ 0, and

X = (β1 + β2 + · · · + βn ) [(β1 X1 + β2 X2 + · · · + βn Xn )/(β1 + β2 + · · · + βn )] + βn+1 Xn+1

or

X = (β1 + β2 + · · · + βn )(α1 X1 + α2 X2 + · · · + αn Xn ) + βn+1 Xn+1 ,

where αi = βi /(β1 + β2 + · · · + βn ), i = 1, 2, . . . , n. Clearly, αi ≥ 0 and
α1 + α2 + · · · + αn = 1. Hence, by the induction assumption, α1 X1 + α2 X2 + · · · +
αn Xn = Y (say) belongs to S. Again,

X = (β1 + β2 + · · · + βn )Y + βn+1 Xn+1

with

β1 + β2 + · · · + βn ≥ 0,  βn+1 ≥ 0,  (β1 + β2 + · · · + βn ) + βn+1 = 1.

Thus, X is a clc of two points of S and hence belongs to S.
Convex hull. Let S be a nonempty set. Then the convex hull of S, denoted
by [S], is defined as the set of all clc's of points of S,

[S] = {X ∈ Rn : X is a clc of points in S} .
Remarks. 1. By convention [∅] = {0}.
2. The above discussion reveals that the convex hull of a finite number
of points X1 , X2 , . . . , Xm is the set of all convex combinations of the m
points. This is a convex set having at most m vertices; "at most" because
some of the points may be interior points. Moreover, the convex hull
generated in this way is a closed convex set.

3. The convex hull of m points is given the special name convex
polyhedron.
Theorem 6. Let S be a nonempty set. Then the convex hull [S] is
the smallest convex set containing S.
Proof. Let X, Y ∈ [S]. Then

X = α1 X1 + α2 X2 + · · · + αn Xn ,  αi ≥ 0,  α1 + α2 + · · · + αn = 1,
Y = β1 Y1 + β2 Y2 + · · · + βm Ym ,  βj ≥ 0,  β1 + β2 + · · · + βm = 1.
Consider the linear combination αX + βY, α, β ≥ 0, α + β = 1, and
note that
αX + βY = α(α1 X1 + α2 X2 + · · · + αn Xn )
+ β(β1 Y1 + β2 Y2 + · · · + βm Ym )
= (αα1 )X1 + · · · + (ααn )Xn + (ββ1 )Y1 + · · · + (ββm )Ym .
Now, each ααi ≥ 0, ββj ≥ 0, and

(αα1 + · · · + ααn ) + (ββ1 + · · · + ββm ) = α(α1 + · · · + αn ) + β(β1 + · · · + βm ) = α + β = 1,
i.e., αX + βY is a clc of the points X1 , X2 , . . . , Xn , Y1 , Y2 , . . . , Ym . This
implies that αX + βY ∈ [S], and hence [S] is convex.

Clearly, [S] contains S because each X ∈ S can be written as X = 1·X,
i.e., a clc of itself. To prove that [S] is the smallest convex
set containing S, we show that if there exists another convex set T
containing S, then [S] ⊂ T .
Suppose T is a convex set which contains S. Take any element
X ∈ [S]. Then

X = α1 X1 + · · · + αn Xn ,  αi ≥ 0,  α1 + α2 + · · · + αn = 1,

for some X1 , X2 , . . . , Xn ∈ S.
Since S ⊂ T , it follows that X1 , X2 , . . . , Xn ∈ T and, moreover, the
convexity of T (with Theorem 5) ensures that

α1 X1 + α2 X2 + · · · + αn Xn = X ∈ T.

Hence [S] ⊂ T .
Remark. If S is convex, then S = [S].
For a convex set S ⊂ Rn , a key geometric fact is the following
separation theorem. The proof is beyond the scope of this book.

Theorem 7 (Separation theorem). Let S ⊂ Rn and let X be a boundary
point of S. Then there is a hyperplane H containing X with S
contained either in the lower half-space or in the upper half-space.
Based on Theorem 7, we can define a supporting hyperplane H to be
a hyperplane such that (i) the intersection of H and S is nonempty;
(ii) the lower half-space HL contains S, see Fig. 2.4.
Figure 2.4 A supporting hyperplane H with normal a, passing through the boundary point X of S, with S ⊂ HL
One very important fact to point out here is that the intersection
of the polyhedral set and a supporting hyperplane having the negative
cost vector −C T as its normal provides an optimal solution of an LPP. This
is the key idea behind solving linear programming problems by the graphical
method.
To verify this fact, let us take

min x0 = −x1 − 2x2

as the objective function for an LPP whose feasible region has the vertices
(0, 40), (20, 20) and (30, 0). Note that −x1 − 2x2 = −80 is the hyperplane
passing through (0, 40), and the vector −C T = (1, 2) is normal to this plane.
This is a supporting hyperplane passing through (0, 40), since HL = {(x1 , x2 ) :
x1 + 2x2 ≤ 80} contains PF and is satisfied by the points (20, 20) and
(30, 0).
However, the hyperplane passing through (20, 20) which is normal
to −C T = (1, 2) is given by −x1 − 2x2 = −60. This is not a supporting
hyperplane, as the point (0, 40) is not in {(x1 , x2 ) : x1 + 2x2 ≤ 60}.
Similarly, it can be shown that the hyperplane at (30, 0) which is normal to −C T
is also not a supporting hyperplane. This implies that x1 = 0, x2 = 40
is the optimal solution.
2.2 Vertices and Basic Feasible Solutions
Definition 4. A point X of a convex set S is said to be a vertex
(extreme point) of S if X is not a clc of any other two distinct points
of S, i.e., X cannot be expressed as

X = α1 X1 + α2 X2 ,  α1 , α2 > 0,  α1 + α2 = 1.

In other words, a vertex is a point that does not lie strictly within the
line segment connecting two other points of the convex set.
From pictures of convex polyhedra, especially in lower-dimensional
spaces, it is easy to see the vertices of a convex polyhedron.
Analyze the set PF as depicted in Fig. 2.3. Further, we note the
following observations:
(a) A(0, 0), B(1, 0), C(1/3, 2/3) and D(0, 1/2) are vertices and, moreover,
these are boundary points also. But every boundary point
need not be a vertex. In Fig. 2.3, E = (2/3, 1/3) is a boundary point of the
feasible region PF but not a vertex, since it can be written as a clc
of distinct points of the set PF as

(2/3, 1/3) = (1/2)(1, 0) + (1/2)(1/3, 2/3).
(b) All boundary points of {(x, y) : x² + y² ≤ 9} are vertices. Hence
a bounded closed set may have infinitely many vertices. However, in an
LPP, if PF is bounded and closed, then it contains a finite number
of vertices, see Theorem 9.
(c) Needless to mention, whenever we talk about a vertex of a set S, it
is implied that S is convex.

(d) If S is unbounded, then it may not have a vertex, e.g., S = R2 .

(e) If S is not closed, then it may not have a vertex, e.g., S = {(x, y) :
0 < x < 1, 2 < y < 3} has no vertex.
Remark. Here extreme points are in reference to convex sets. However,
extreme points of a function will be defined in Chapter 13.
To characterize the vertices of a feasible region P F = {X ∈ Rn :
AX = b, X ≥ 0} of a given LPP in standard form, we may assume
A is an m × n matrix with m < n and also denote the jth column
of the coefficient matrix A by Aj , j = 1, 2, . . . , n. Then, for each
X = (x1 , x2 , . . . , xn )T ∈ PF , we have
x1 A1 + x2 A2 + · · · + xn An = b.
Therefore, Aj is the column of A corresponding to the jth component
xj of X.
Theorem 8. A point X of the feasible region PF is a vertex of PF ⇔
the columns of A corresponding to the positive components of X are
linearly independent.

Proof. Without loss of generality, we may assume that the components
of X are zero except for the first p components, namely

X = [X̄, 0]T ,  X̄ = (x1 , x2 , . . . , xp )T > 0.

We also denote the submatrix formed by the first p columns of A by Ā.
Hence AX = ĀX̄ = b.
(⇒) Suppose that the columns of Ā are not linearly independent.
Then there exists a nonzero vector w̄ (at least one of its p components is
nonzero) such that Āw̄ = 0. Now define

Ȳ = X̄ + δw̄  and  Z̄ = X̄ − δw̄.

For sufficiently small δ > 0, we note that Ȳ, Z̄ ≥ 0 and

ĀȲ = ĀZ̄ = ĀX̄ = b.

We further define

Y1 = [Ȳ, 0]T  and  Z1 = [Z̄, 0]T .

Note that Y1 , Z1 ∈ PF and X = (1/2)Y1 + (1/2)Z1 . In other words, X
is not a vertex of PF .

(⇐) Suppose that X is not a vertex of PF . Then X = αY1 + (1 −
α)Z1 , 0 < α < 1, for some distinct Y1 , Z1 ∈ PF . Since Y1 , Z1 ≥ 0 and
0 < α < 1, the last n − p components of Y1 and Z1 must be zero, as

0 = αyj + (1 − α)zj ,  j = p + 1, p + 2, . . . , n.

Consequently, we have a nonzero vector w = X − Y1 (X ≠ Y1 ) such
that

Āw̄ = Aw = AX − AY1 = b − b = 0.

This shows that the columns of Ā are linearly dependent.
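Theorem 8 yields an immediate computational test for vertices. A minimal numpy sketch (the function name and tolerance are my own choices; the data are the region of Fig. 2.3 in standard form):

```python
import numpy as np

def is_vertex(A, b, X, tol=1e-9):
    """Theorem 8: a feasible X is a vertex iff the columns of A
    corresponding to the positive components of X are linearly independent."""
    A, b, X = np.asarray(A, float), np.asarray(b, float), np.asarray(X, float)
    assert np.allclose(A @ X, b) and (X >= -tol).all(), "X must be feasible"
    cols = A[:, X > tol]                  # columns for positive components
    return np.linalg.matrix_rank(cols) == cols.shape[1]

# Region of Fig. 2.3 in standard form: x1 + x2 + s1 = 1, -x1 + 2x2 + s2 = 1.
A = [[1, 1, 1, 0], [-1, 2, 0, 1]]
b = [1, 1]
print(is_vertex(A, b, [1, 0, 0, 2]))        # True : the vertex (1, 0)
print(is_vertex(A, b, [2/3, 1/3, 0, 1]))    # False: boundary point E = (2/3, 1/3)
```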
Consider an LPP in standard form with constraints AX = b, where n =
the number of unknowns and m = the number of equations. Assume m < n
(otherwise the problem is over-specified); after introducing
the slack and surplus variables, this assumption generally remains valid.
Let r(A) and r(A, b) be the ranks of the matrix A and the augmented matrix
(A, b), respectively.

(i) r(A) = r(A, b) guarantees consistency, i.e., AX = b has at
least one solution.

(ii) If r(A) ≠ r(A, b), the system is inconsistent, i.e., AX = b has no
solution. For example,

x1 + x2 + x3 = 1
4x1 + 2x2 − x3 = 5
9x1 + 5x2 − x3 = 12

has no solution and hence is inconsistent.
For consistent systems we have a difficulty when r(A) = r(A, b) < m =
the number of equations. It means that the m rows are not all linearly
independent: some row can be written as a linear combination of other
rows. We consider such a row (constraint) as redundant. For example,

x1 − x2 + 2x3 = 4
2x1 + x2 − x3 = 3
5x1 + x2 = 10
x1 , x2 , x3 ≥ 0

In this example r(A) = r(A, b) = 2, which is less than the number
of equations. The third constraint is the sum of the first constraint
and two times the second constraint; hence the third constraint
is redundant. Actually, if the system is consistent, then only the rows of the
echelon form of the matrix which contain pivot elements are to be
considered. All nonpivot rows are redundant.
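These rank conditions are easily checked numerically; a small sketch for the redundant system above:

```python
import numpy as np

A = np.array([[1., -1., 2.],
              [2.,  1., -1.],
              [5.,  1., 0.]])
b = np.array([4., 3., 10.])

rA  = np.linalg.matrix_rank(A)
rAb = np.linalg.matrix_rank(np.column_stack([A, b]))

print(rA, rAb)      # 2 2 : consistent, but rank < m = 3,
                    # so one constraint is redundant
print(rA == rAb)    # True : r(A) = r(A, b) guarantees consistency
```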
Another type of redundancy happens when r(A) = r(A, b), but
some constraint contributes nothing toward finding the optimal
solution. Geometrically, a constraint is redundant if its removal leaves
the feasible region unchanged. In this case, however, r(A) =
r(A, b) = m, the number of equations. Such cases will be dealt with
in Chapter 3. The following simple example illustrates the fact:

x1 + x2 + s1 = 1
x1 + 2x2 + s2 = 2
x1 , x2 , s1 , s2 ≥ 0

Here r(A) = r(A, b) = 2. The vertices of the feasible region are (0, 0),
(1, 0) and (0, 1). The second constraint is redundant, as it contributes
only the vertex (0, 1), which is already given by the first constraint.
It is advisable to delete redundant constraints, if any, before an LPP
is solved to find its optimal solution, otherwise computational difficulty
may arise.
Basic solutions. Let AX = b be a system of m simultaneous
linear equations in n unknowns (m < n) with r(A) = m. This means
that there exist m linearly independent column vectors. In this case,
group these m linearly independent column vectors to form a basis matrix B,
and leave the remaining n − m columns as the nonbasic matrix N. In other
words, we can rearrange A as A = [B | N].
We can also rearrange the components of any solution vector X in
the corresponding order, namely

X = [XB , XN ]T .

For a component in XB , its corresponding column is in the basis matrix B.
Similarly, components in XN correspond to columns of the nonbasic matrix N.
If all n − m variables XN which are not associated with columns
of B are equated to zero, then the solution of the resulting system
BXB = b is called a basic solution of AX = b. Out of the n columns,
m columns can be selected in n!/(m!(n − m)!) ways. The m variables
(left after putting n − m variables equal to zero) are called the basic
variables, and the remaining n − m variables the nonbasic variables. The
matrix corresponding to the basic variables is termed the basis matrix.
Basic feasible solution. A basic solution of the system AX = b,
X ≥ 0, i.e., a basic solution which also satisfies the nonnegativity
restrictions, is called a basic feasible solution (BFS).

Nondegenerate BFS. If all the m basic variables in a BFS are
positive, then it is called a nondegenerate basic feasible solution.
The following result is a direct consequence of Theorem 8.

Corollary 1. A point X ∈ PF is a vertex of PF ⇔ X is a basic
feasible solution corresponding to some basis B.

Proof. By Theorem 8, we have

X ∈ PF is a vertex ⇔ the columns Ai with xi > 0 are linearly independent
⇔ these columns extend to a nonsingular basis matrix B = [A1 , . . . , Am ]
⇔ X is a basic feasible solution.
Degenerate BFS. A BFS which is not nondegenerate is called a
degenerate basic feasible solution, i.e., at least one of the basic variables
is at zero level in the BFS.
Remarks. 1. Corollary 1 reveals that there exists a one-one correspondence
between the set of basic feasible solutions and the set of vertices of
PF only in the absence of degeneracy. Actually, in the case of degeneracy,
a vertex may correspond to many degenerate basic feasible solutions.
Example 3 will make this remark clear and justified.
2. When we select m variables out of the n variables to define a basic
solution, it is essential that the matrix B formed from the coefficients of
these m variables be nonsingular; otherwise we may get no solution or an
infinity of solutions (not basic), see Problem 12, Problem set 2.
3. Every basic solution of the system AX = b is a basic feasible
solution. Why?
Example 2. Without sketching PF find the vertices for the system
− x1 + x2 ≤ 1
2x1 + x2 ≤ 2
x1 , x2 ≥ 0.
First, write the system in standard form
− x1 + x2 + s1 = 1
2x1 + x2 + s2 = 2.
Here n = 4, m = 2, and 4!/(2! 2!) = 6; hence the system has at most 6
basic solutions. To find all the basic solutions we take each choice of two
variables from the set {x1 , x2 , s1 , s2 } as basic variables, which gives

(1/3, 4/3, 0, 0),  (1, 0, 2, 0),  (−1, 0, 0, 4),  (0, 2, −1, 0),  (0, 1, 0, 1),  (0, 0, 1, 2).

Of these, 4 are basic feasible solutions. The solution set is nondegenerate,
and hence there exists a one-one correspondence between the BFS and the
vertices, i.e., the feasible region PF has 4 vertices.

Note that (−1, 0, 0, 4) and (0, 2, −1, 0) are basic solutions but not
feasible.
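The enumeration is mechanical and worth seeing in code. A minimal sketch (numpy and itertools; the helper is my own) that tries every choice of 2 basic columns, solves BXB = b, and flags the nonnegative solutions as basic feasible:

```python
import itertools
import numpy as np

def basic_solutions(A, b):
    """Enumerate basic solutions of A x = b (m < n): for each choice of
    m columns, solve B x_B = b and set the remaining variables to zero."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    m, n = A.shape
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:      # singular: no basic solution here
            continue
        x = np.zeros(n)
        x[list(cols)] = np.linalg.solve(B, b)
        yield cols, x, bool((x >= 0).all())    # True means basic *feasible*

# Example 2 in standard form: -x1 + x2 + s1 = 1, 2x1 + x2 + s2 = 2.
A = [[-1, 1, 1, 0], [2, 1, 0, 1]]
b = [1, 2]
for cols, x, feasible in basic_solutions(A, b):
    print(cols, np.round(x, 3), "BFS" if feasible else "infeasible")
```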
Example 3. The system AX = b is given by
x1 + x2 − 8x3 + 3x4 = 2
−x1 + x2 + x3 − 3x4 = 2.
Determine the following:
(i) a nonbasic feasible solution;
(ii) a basic solution with x1 , x4 as basic variables;
(iii) a vertex which corresponds to two different basic feasible solutions.
(iv) all nondegenerate basic feasible solutions.

Here X = (x1 , x2 , x3 , x4 )T , b = (2, 2)T , and the coefficient matrix A
and the augmented matrix (A, b) are

A = [ 1  1  −8   3 ]      (A, b) = [ 1  1  −8   3  2 ]
    [−1  1   1  −3 ] ;             [−1  1   1  −3  2 ]
We make the following observations:

(i) Since r(A) = r(A, b) = 2 < number of unknowns, the system
is consistent and has an infinity of solutions with two degrees of
freedom. The row-reduced echelon form of (A, b) is

[ 1  0  −9/2  3 | 0 ]
[ 0  1  −7/2  0 | 2 ]

This gives

x1 = (9/2)x3 − 3x4
x2 = (7/2)x3 + 2        (2.3)

Let us assign x3 = x4 = 2, i.e., nonzero values to x3 and x4 , to get
x1 = 3 and x2 = 9. Thus one feasible solution is (3, 9, 2, 2),
but it is not a basic feasible solution, since at least two variables must
be at zero level. Moreover, (−3, 2, 0, 1) is a nonbasic feasible
solution of the system.
(ii) If x1 and x4 are chosen as basic variables, then the system becomes

x1 + 3x4 = 2
− x1 − 3x4 = 2,

which has no solution; hence there is no basic solution with x1 , x4 as
basic variables.
(iii) Further, for x1 , x2 as basic variables,

(0, 2, 0, 0) is a BFS with basis matrix  [ 1  1 ]
                                         [−1  1 ],

while, for x2 , x4 as basic variables,

(0, 2, 0, 0) is a BFS with basis matrix  [ 1   3 ]
                                         [ 1  −3 ].

Both BFS given by (0, 2, 0, 0) seem to be the same, but they are
different, as their basis matrices are different. However, both BFS
correspond to the same vertex (0, 2). The vertex (0, 2) can be
identified by writing the equivalent form of the LPP as a two-dimensional
graph in the x1−x2 plane.
(iv) All nondegenerate basic feasible solutions of the system are

(−18/7, 0, −4/7, 0),  (0, 0, −4/7, −6/7).

Note that these vectors are BFS even though some entries are
negative, because all the variables are unrestricted here.
Remarks. 1. Note that system (2.3) can be written as

1x1 + 0x2 − (9/2)x3 + 3x4 = 0
0x1 + 1x2 − (7/2)x3 + 0x4 = 2

This is the canonical form of the constraint equations. Further, note that

(−8, 1)T = −(9/2)(1, −1)T − (7/2)(1, 1)T
(3, −3)T = 3(1, −1)T + 0(1, 1)T
This extracts a useful inference: if x1 and x2 are basic variables and
are pivoted, then the coefficients of x3 and x4 in the canonical form are the
coordinate vectors of the columns A3 and A4 of A with respect to the basic
columns A1 and A2 . This phenomenon will be used in the simplex method,
to be discussed in the next chapter.
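A quick numerical confirmation of these coordinate vectors (a sketch; it merely solves the 2 × 2 systems [A1 A2] c = A3 and [A1 A2] c = A4):

```python
import numpy as np

A = np.array([[1., 1., -8., 3.],
              [-1., 1., 1., -3.]])
B = A[:, :2]                      # basic columns A1, A2

for j in (2, 3):                  # nonbasic columns A3, A4
    c = np.linalg.solve(B, A[:, j])
    print(j + 1, c)               # 3 [-4.5 -3.5]   4 [3. 0.]
```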
2. All basic solutions can also be obtained by pivoting on any two
variables and assigning zero values to the remaining variables. This
can be done in 6 ways and hence gives at most six different basic
solutions.
3. For the existence of all basic solutions, it is necessary that the column
vectors of the coefficient matrix A chosen for each basis be linearly
independent. It is also possible that, after keeping the requisite number
of variables at zero level, the remaining system has an infinity of
solutions, see Problem 12(c). This happens when at least two columns are
not linearly independent; in Problem 12, A2 and A5 are not linearly
independent. We do not term these infinitely many solutions basic
solutions, because for a basic solution the matrix formed with the
coefficients of the basic variables (in order) must be nonsingular.
Theorem 9. The set of all feasible solutions PF of an LPP has a finite
number of vertices.

Proof. Let PF be the set of all feasible solutions of the LPP, where
the constraints are written in standard form AX = b, b ≥ 0. If A is of
full rank m and every m × m submatrix of A is nonsingular, then the
system has

n!/(m!(n − m)!)

basic solutions. Adding the condition X ≥ 0, we can say that
AX = b, X ≥ 0 has at most n!/(m!(n − m)!) basic feasible solutions. As
there exists a one-to-one correspondence between the set of BFS and the
set of all vertices, provided nondegeneracy persists in the problem, we
conclude that the set of vertices has at most n!/(m!(n − m)!) elements,
and hence the number of vertices is finite.

In the case of degeneracy, more than one BFS may correspond to the
same vertex. Hence, in this situation the number of vertices will be less
than n!/(m!(n − m)!), and again the number of vertices is finite.
Note that two basic feasible solutions (vertices) are adjacent if they
use m − 1 basic variables in common to form their bases. For example, in
Fig. 2.3, it is easy to verify that (0, 1/2) is adjacent to (0, 0) but not
adjacent to (1, 0), since (0, 1/2) takes x2 and s1 as basic variables, while
(0, 0) takes s1 and s2 , and (1, 0) takes x1 and s2 . Under the nondegeneracy
assumption, since each of the n − m nonbasic variables could
replace one current basic variable in a given basic feasible solution, we
know that every BFS (vertex) has n − m neighbours. Actually, each
neighbouring BFS is reached by increasing the value of one nonbasic
variable from zero to positive and decreasing the value of one basic
variable from positive to zero. This is the basic concept of pivoting in the
simplex method, to be discussed in the next chapter.
Suppose that the feasible region PF is bounded, in other words it
is a polytope as shown in Fig. 2.5.
Figure 2.5 A polytope with vertices X1 , X2 , . . . , X6 ; the point X ∈ PF is a clc of the vertices X1 , X3 , X4
From this figure, it is easy to observe that each point of PF can be
represented as a convex combination of a finite number of vertices of PF ;
in particular, X is a clc of the vertices X1 , X3 , X4 .
This idea of convex resolution can be verified for a general polyhedron
(possibly unbounded) with the help of the following definition.
Definition 5. An extremal direction of a polyhedral set is a nonzero
vector d ∈ Rn such that for each X0 ∈ PF , the ray
{X ∈ Rn : X = X0 + αd, α ≥ 0}
is contained in PF .
Remark. From the definition of feasible region, we see that a nonzero
vector d ∈ Rn is an extremal direction of PF ⇔ Ad = 0 and d ≥ 0.
Also, PF is unbounded ⇔ PF has an extremal direction.
Considering the vertices and extremal directions, every point in PF
can be represented by the following useful result, known as the resolution
theorem.

Theorem 10 (Resolution theorem). Let B = {Vi ∈ Rn : i ∈ Z} be
the set of all vertices of PF with a finite index set Z. Then, for each
X ∈ PF , we have

X = Σi∈Z αi Vi + d,  Σi∈Z αi = 1,  αi ≥ 0,    (2.4)

where d is either the zero vector or an extremal direction of PF .
Proof. To prove the theorem by the induction, we let p be the
number of positive components of X ∈ P F . When p = 0, X =
(0, 0, . . . , 0) is obviously a vertex. Assume that the theorem holds
for p = 0, 1, . . . , k, and suppose X has k + 1 positive components. If X is a
vertex, then there is nothing to prove. If X is not a vertex, we let
X T = (x1 , x2 , . . . , xk+1 , 0, . . . , 0) ∈ Rn with (x1 , x2 , . . . , xk+1 ) > 0
and A = [Ā | N], where Ā is the submatrix corresponding to the positive
components of X. Then, since X is not a vertex, by Theorem 8 the columns
of Ā are linearly dependent; in other words, there exists a nonzero vector
w̄ ∈ Rk+1 such that Āw̄ = 0. We define w = (w̄, 0, . . . , 0) ∈ Rn ; then
w ≠ 0 and Aw = Āw̄ = 0. There are three possibilities: w ≥ 0, w < 0, and w
having both positive and negative components.
For w ≥ 0, consider X(θ) = X + θw and pick θ ∗ to be the largest
value of θ such that X ∗ = X(θ ∗ ) has at least one more zero component
than X. Then follow the induction hypothesis to show that the theorem
holds. Similarly, show that in the remaining two cases, the theorem still
holds.
The direct consequences of the resolution theorem are:
Corollary 2. If PF is a bounded feasible region (polytope), then each
point X ∈ PF is a convex linear combination of its vertices.
Proof. Since PF is bounded, by the remark following Definition 5
the extremal direction d is zero, and application of (2.4) ensures that
X is a clc of vertices.
Corollary 3. If PF is nonempty, then it has at least one vertex.
Example 4. Consider the set
PF = {(x1 , x2 ) : x1 + x2 ≤ 1, −x1 + 2x2 ≤ 1, x1 , x2 ≥ 0}.
Show that a point of PF may be expressed as a clc of its vertices in more
than one way.

Take the point (1/3, 1/6) ∈ PF . Now

(1/3, 1/6) = (1/3)(0, 0) + (1/3)(1, 0) + 0(1/3, 2/3) + (1/3)(0, 1/2),

or

(1/3, 1/6) = (3/8)(0, 0) + (5/16)(1, 0) + (1/16)(1/3, 2/3) + (1/4)(0, 1/2).

Thus we gather the additional information that a point of PF
may have different clc representations in terms of its vertices.
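Both representations are easy to verify numerically (a small check of the two weight vectors used above):

```python
import numpy as np

V = np.array([[0, 0], [1, 0], [1/3, 2/3], [0, 1/2]])   # vertices of PF

w1 = np.array([1/3, 1/3, 0, 1/3])
w2 = np.array([3/8, 5/16, 1/16, 1/4])

for w in (w1, w2):
    print(w @ V, w.sum())    # [0.33333333 0.16666667] 1.0  both times
```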
2.3 Basic Theorem of Linear Programming
Theorem 11. The maximum of the objective function f(X) of an LPP
occurs at at least one vertex of PF , provided PF is bounded.

Proof. Given that the LPP is a maximization problem, suppose
that the maximum of f(X) occurs at some point X0 in the feasible region
PF . Thus,

f(X) ≤ f(X0 ) ∀ X ∈ PF .

We show that this maximum also occurs at some vertex of PF . Since PF is
bounded, closed and convex, and the problem is an LPP, PF contains a finite
number of vertices X1 , X2 , . . . , Xn . Hence,

f(Xi ) ≤ f(X0 ),  i = 1, 2, . . . , n.    (2.5)

By Corollary 2, X0 ∈ PF can be written as a clc of the vertices, i.e.,

X0 = α1 X1 + α2 X2 + · · · + αn Xn ,  αi ≥ 0,  α1 + α2 + · · · + αn = 1.

Using the linearity of f, we have

f(X0 ) = α1 f(X1 ) + α2 f(X2 ) + · · · + αn f(Xn ).

Let

f(Xk ) = max {f(X1 ), f(X2 ), . . . , f(Xn )} .

Then

f(X0 ) ≤ α1 f(Xk ) + α2 f(Xk ) + · · · + αn f(Xk ) = f(Xk ).    (2.6)

Combining (2.5) and (2.6), we have f(X0 ) = f(Xk ). This implies that
the maximum value f(X0 ) is attained at the vertex Xk , and hence the
result.
The minimization case can be treated along parallel lines by reversing
the inequalities; in that case we define

f(Xk ) = min{f(X1 ), f(X2 ), . . . , f(Xn )}.

Thus, we have proved that the optimum of an LPP occurs at some
vertex of PF , provided PF is bounded.
Remark. Theorem 11 does not rule out the possibility of having an
optimal solution at a point which is not a vertex. It simply says that,
among all optimal solutions to an LPP, at least one of them is a vertex. The
following theorem further strengthens Theorem 11.
Theorem 12. In an LPP, if the objective function f (X) attains its
maximum at an interior point of PF , then f is constant, provided PF
is bounded.
Proof. Given that the problem is maximization, let X0 be an
interior point of PF where the maximum occurs, i.e.,

f(X) ≤ f(X0 ) ∀ X ∈ PF .

Assume, to the contrary, that f(X) is not constant. Then there exists
X1 ∈ PF such that

f(X1 ) ≠ f(X0 ),  f(X1 ) < f(X0 ).

Since PF is a nonempty bounded closed convex set and X0 is an interior
point, the line from X1 through X0 can be extended to meet PF at a
point X2 , so that X0 is a clc of the two points X1 and X2 of PF :

X0 = αX1 + (1 − α)X2 ,  0 < α < 1.

Using the linearity of f, we get

f(X0 ) = αf(X1 ) + (1 − α)f(X2 ) ⇒ f(X0 ) < αf(X0 ) + (1 − α)f(X2 ).

Thus f(X0 ) < f(X2 ). This is a contradiction, and hence the theorem
follows.
2.4 Graphical Method
This method is convenient for problems in two variables. By Theorem 11,
the optimal value of the objective function occurs at one of the vertices
of PF . We exploit this result to find an optimal solution of an LPP.
First, we sketch the feasible region and identify its vertices. We then
compute the value of the objective function at each vertex and take the
largest of these values as the optimal value of the objective function;
the vertex at which this largest value occurs is the optimal solution. For a
minimization problem we consider the smallest value.
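For a two-variable problem this procedure can be mimicked exactly in code: intersect every pair of constraint lines, keep the feasible intersection points (the vertex candidates), and evaluate the objective at each. A minimal numpy sketch (the function is my own; it assumes constraints in the form Gx ≤ h, with rows such as −x1 ≤ 0, −x2 ≤ 0 for nonnegativity; the data are those of Example 5 below):

```python
import itertools
import numpy as np

def graphical_method(G, h, c, maximize=True, tol=1e-9):
    """Enumerate the vertices of {x in R^2 : G x <= h} and pick the best
    objective value c.x among them (Theorem 11)."""
    G, h, c = np.asarray(G, float), np.asarray(h, float), np.asarray(c, float)
    vertices = []
    for i, j in itertools.combinations(range(len(h)), 2):
        M = G[[i, j]]
        if abs(np.linalg.det(M)) < tol:
            continue                          # parallel constraint lines
        v = np.linalg.solve(M, h[[i, j]])
        if (G @ v <= h + tol).all():          # keep feasible intersections
            vertices.append(v)
    best = max if maximize else min
    return best(vertices, key=lambda v: c @ v)

# Example 5: max z = x1 + 5 x2.
G = [[-1, 3], [1, 1], [1, -1], [-1, 0], [0, -1]]
h = [10, 6, 2, 0, 0]
print(graphical_method(G, h, [1, 5]))   # [2. 4.] -> z = 22
```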
Example 5. Solve the following LPP by the graphical method
max  z = x1 + 5x2
s.t.  − x1 + 3x2 ≤ 10
      x1 + x2 ≤ 6
      x1 − x2 ≤ 2
      x1 , x2 ≥ 0.
Rewrite each constraint in the intercept form:

− x1 /10 + x2 /(10/3) ≤ 1
x1 /6 + x2 /6 ≤ 1
x1 /2 − x2 /2 ≤ 1

Draw each constraint first, treating it as a linear equation; then
use the inequality condition to decide the side of the feasible region.
The feasible region and vertices are shown in Fig. 2.6.
Figure 2.6 The feasible region PF bounded by −x1 + 3x2 = 10, x1 + x2 = 6 and x1 − x2 = 2, with vertices (0, 0), (2, 0), (4, 2), (2, 4) and (0, 10/3)
The vertices are (0, 0), (2, 0), (4, 2), (2, 4), (0, 10/3).
The values of the objective function computed at these points are

z = 0 at (0, 0),  z = 2 at (2, 0),  z = 14 at (4, 2),  z = 22 at (2, 4),  z = 50/3 at (0, 10/3).
Obviously, the maximum occurs at the vertex (2, 4), with maximum value
22. Hence the optimal solution is

x1 = 2, x2 = 4,  z = 22.
Example 6. A machine component requires a drill machine operation
followed by welding and assembly into a larger subassembly. Two
versions of the component are produced: one for ordinary service and the
other for heavy-duty operation. A single unit of the ordinary design
requires 10 min of drill machine time, 5 min of seam welding, and 15 min
for assembly. The profit for each unit is $100. Each heavy-duty unit
requires 5 min of drill machine time, 15 min for welding and 5 min for
assembly. The profit for each unit is $150. The total capacity of the
machine shop is 1500 min; that of the welding shop is 1000 min; that
of assembly is 2000 min. What is the optimum mix between ordinary
service and heavy-duty components to maximize the total profit?
Let x1 and x2 be the numbers of ordinary service and heavy-duty
components, respectively. The LPP formulation is

max  z = 100x1 + 150x2
s.t.  10x1 + 5x2 ≤ 1500
      5x1 + 15x2 ≤ 1000
      15x1 + 5x2 ≤ 2000
      x1 , x2 ≥ 0 and integers.

Draw the feasible region by writing all the constraints in the intercept
format of Example 5 and determine all the vertices: (0, 0), (400/3, 0),
(125, 25), (0, 200/3). The optimal solution is at the vertex x1 = 125,
x2 = 25, with maximum value z = 16250.
Problem Set 2
1. Which of the following sets are convex?
(a) {(x1 , x2 ) : x1 x2 ≤ 1};
(b) {(x1 , x2 ) : x1² + x2² < 1};
(c) {(x1 , x2 ) : x1² + x2² ≥ 3};
(d) {(x1 , x2 ) : 4x1 ≥ x2²};
(e) {(x1 , x2 ) : 0 < x1² + x2² ≤ 4};
(f) {(x1 , x2 ) : |x2 | = 5}.
2. Prove that a linear program with bounded feasible region must
be bounded, and give a counterexample to show that the converse
need not be true.
3. Prove that arbitrary intersection of convex sets is convex.
4. Prove that the half-space {X ∈ Rn : aT X ≥ α} is a closed convex
set.
5. Show that the convex sets in Rn satisfy the following properties.
(a) If S is a convex set and β is a real number, the set
βS = {βX : X ∈ S}
is convex;
(b) If S1 and S2 are convex sets in Rn , then the set
S1 ± S2 = {X1 ± X2 : X1 ∈ S1 , X2 ∈ S2 }
is convex.
6. A point Xv in S is a vertex of S ⇔ S \ {Xv } is convex.
7. Write the system
x1 + x 2 = 1
2x1 − 4x3 = −5
into its equivalent system which contains only three inequality
constraints.
8. Define the convex hull of a set S as

[S] = ∩i∈Λ Ai ,  where {Ai : i ∈ Λ} is the collection of all convex sets Ai ⊃ S.

Show that this definition and the definition of convex hull in
Section 2.1 are equivalent.
9. Using the definition of convex hull in Problem 8, show that [S] is
the smallest convex set containing S.
10. Find the convex hull of the following sets:
(a) {(1, 1), (1, 2), (2, 0), (0, −1)};
(b) {(x1 , x2 ) : x1² + x2² > 3};
(c) {(x1 , x2 ) : x1² + x2² = 1};
(d) {(0, 0), (1, 0), (0, 1)}.
11. Prove that the set of all convex linear combinations of a finite number
of points is a closed convex set.
Suggestion. For convexity, see Theorem 5.
12. Consider the following constraints of an LPP:

x1 + 2x2 + 2x3 + x4 + x5 = 6
x2 − 2x3 + x4 + x5 = 3

Identify (a) all nondegenerate basic feasible solutions; (b) all degenerate basic feasible solutions; (c) an infinity of solutions.
Suggestion. Here x1 , x2 , x3 , x4 , x5 are unrestricted, and hence every basic solution will be a basic feasible solution.
13. Use the resolution theorem to prove the following generalization
of Theorem 11: for a consistent LP in its standard form with feasible
region PF , the maximum objective value of z = C T X over PF is either
unbounded or is achieved at at least one vertex of PF .
14. Prove Theorems 11 and 12 for the minimization case.
15. Prove that if the optimal value of an LPP occurs at more than
one vertex of PF , then it also occurs at every clc of these vertices.
16. In the setting of the preceding problem, state whether a point other
than these vertices at which the optimal solution exists is a basic
solution of the LPP.
17. Show that the set of all optimal solutions of an LPP is a closed convex
set.
18. Consider the system AX = b, X ≥ 0, b ≥ 0 (with m equations
and n unknowns). Let X be a basic feasible solution with p < m
positive components. How many different bases will correspond
to X due to degeneracy in the system?
19. In view of problem 18, the BFS (0, 2, 0, 0) of Example 3 has one
more different basis. Find this basis.
20. Write a solution of the constraint equations in Example 3 which
is neither basic nor feasible.
21. Let X0 be an optimal solution of the LPP min z = C T X, subject
to AX = b in standard form and let X ∗ be any optimal solution
when C is replaced by C ∗ . Then prove that
(C ∗ − C)T (X ∗ − X0 ) ≥ 0.
22. To make the graphical method work, prove that the intersection
of the feasible domain PF and the supporting hyperplane
whose normal is given by the negative cost vector −C T provides
the optimal solution to a given linear programming problem.
23. Find the solutions of the following linear programming problems using
the graphical method:

(a) min  z = −x1 + 2x2
    s.t.  − x1 + 3x2 ≤ 10
          x1 + x2 ≤ 6
          x1 − x2 ≤ 2
          x1 , x2 ≥ 0

(b) max  z = 3x1 + 4x2
    s.t.  x1 − 2x2 ≤ −1
          − x1 + 2x2 ≥ 0
          x1 , x2 ≥ 0
24. Prove that a feasible region given by n variables and n − 2 nonredundant equality constraints can be represented by a two-dimensional graph.
25. Consider the problem

max  z = 3x1 − x2 + x3 + x4 + x5 − 10
s.t.  3x1 + x2 + x3 + x4 + x5 = 10
      x1 + x3 + 2x4 + x5 = 15
      2x1 + 3x2 + x3 + x4 + 2x5 = 12
      x1 , x2 , x3 , x4 , x5 ≥ 0

Using Problem 24, write this as a two-dimensional LPP and then
find its optimal solution by the graphical method.
Suggestion. Pivot on x3 , x4 , x5 .
26. Consider the LPP:

max  z = x1 + 3x2
s.t.  x1 + x2 + x3 = 2
      − x1 + x2 + x4 = 4
      x2 , x3 , x4 ≥ 0
(a) Determine all the basic feasible solutions of the problem;
(b) Express the problem in the two-dimensional plane;
(c) Show that the optimal solutions obtained using (a) and (b)
are the same.
(d) Is there any basic infeasible solution? If yes, find it.
27. What difficulty arises if all the constraints are taken as strict
inequalities?
28. Show that by properly choosing the ci 's in the objective function of an
LPP, every vertex can be made optimal.
29. Let X be a basic solution of the system AX = b having both
positive and negative variables. How can X be reduced to a
BFS?