C16: Linear programming — The Simplex Method
Niclas Börlin
5DA001 Non-linear Optimization
Reading: Gill, Murray, Wright, “Numerical Linear Algebra and Optimization, vol. 1”, Addison-Wesley, 1991, Chap. 7.

Linear programming
“Interest in linear programming (LP) in its own right began in the late 1940’s with the invention of the simplex method, and has continued unabated until the present day. Linear programming is viewed and taught in an astonishing variety of ways, to the extent that even an expert may be puzzled by someone else’s version of the subject!”

Problem formulation
Feasible region
◮ An LP problem (Linear Programming problem) is an optimization problem where the objective function and the constraints are linear functions.
◮ An example of an LP problem is

    max_{x1,x2}  x1 − x2
    s.t.  x1 ≥ 0
          x2 ≥ 0
          x1 + x2 ≥ 0
          x1 ≤ 1
          x2 ≤ 1.

◮ The feasible region for the problem is shown below.
[Figure: the feasible region (the unit square 0 ≤ x1, x2 ≤ 1) with the constraint gradients a1, . . . , a5, the objective gradient c, and a point x̄.]
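As a quick sketch of the example region (not from the slides; the function name is_feasible is my own), the five constraints can be checked directly at any candidate point:

```python
# Feasibility test for the example problem: a point is feasible
# exactly when it satisfies all five constraints.
def is_feasible(x1, x2):
    """Check all five constraints of the example LP."""
    return (x1 >= 0 and x2 >= 0 and x1 + x2 >= 0
            and x1 <= 1 and x2 <= 1)

# Points inside the unit square are feasible; points outside are not.
```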
Constraints
◮ We have two different types of constraints: equality and inequality.
◮ A linear inequality constraint may be written as
    aᵀx ≥ β
for some vector a.
◮ A point x̄ is feasible with respect to the constraint if the constraint is satisfied, i.e. aᵀx̄ ≥ β.
◮ If the constraint is satisfied with equality, the constraint is active, otherwise inactive.

Standard form
◮ We will rewrite all LP problems to the following standard form:
    min_x cᵀx  s.t.  Ax ≥ b,
where Ax ≥ b is interpreted elementwise.
◮ Maximization problems max_x cᵀx are rewritten as min_x −cᵀx.
◮ We will only work with ≥ inequality constraints, since
  1. A constraint aᵀx ≤ β may be rewritten as −aᵀx ≥ −β.
  2. A constraint aᵀx = β may be described by two inequality constraints, aᵀx ≥ β and aᵀx ≤ β.
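The rewriting rules above can be sketched as a small helper (my own naming; only the ≥/≤ cases are handled, an equality row would be expanded into two rows first):

```python
import numpy as np

def to_standard_form(c_max, rows):
    """Convert max c_max^T x with mixed >=/<= rows to min c^T x, Ax >= b.
    rows: list of (a, sense, beta) with sense in {'>=', '<='}."""
    c = -np.asarray(c_max, dtype=float)   # max c^T x  ->  min -c^T x
    A, b = [], []
    for a, sense, beta in rows:
        a = np.asarray(a, dtype=float)
        if sense == '<=':                 # a^T x <= beta -> -a^T x >= -beta
            a, beta = -a, -beta
        A.append(a)
        b.append(beta)
    return c, np.array(A), np.array(b)

# Applied to the example problem:
c, A, b = to_standard_form(
    [1, -1],
    [([1, 0], '>=', 0), ([0, 1], '>=', 0), ([1, 1], '>=', 0),
     ([1, 0], '<=', 1), ([0, 1], '<=', 1)])
```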
Example
◮ Our example problem
    max_{x1,x2}  x1 − x2
    s.t.  x1 ≥ 0,  x2 ≥ 0,  x1 + x2 ≥ 0,  x1 ≤ 1,  x2 ≤ 1
becomes
    min_x cᵀx  s.t.  Ax ≥ b
with
    c = (−1, 1)ᵀ,
    A = [  1  0
           0  1
           1  1
          −1  0
           0 −1 ],
    b = (0, 0, 0, −1, −1)ᵀ.

Sets of active constraints
◮ For a constraint aiᵀx ≥ βi, define the residual in the point x̄ as
    ri(x̄) = aiᵀx̄ − βi.
◮ The residual is positive if the constraint is inactive, zero if it is active, and negative if it is violated.
◮ For the set of constraints Ax ≥ b, the residual vector r is defined as
    r(x̄) = Ax̄ − b.
◮ Given the inequality Ax ≥ b, we define the index set A(x̄) as the set of indices
    {i : ri(x̄) = 0},
i.e. the constraints that are satisfied with equality at the point x̄.
◮ The matrix with the rows A(x̄) of A is denoted A_A(x̄) and called the active set matrix.
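The residual vector and active set for the example can be computed directly (a sketch; the tolerance tol is my own addition to cope with floating point):

```python
import numpy as np

# Example LP data in standard form, min c^T x s.t. Ax >= b.
c = np.array([-1.0, 1.0])
A = np.array([[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([0, 0, 0, -1, -1], dtype=float)

def residual(x):
    """r(x) = Ax - b."""
    return A @ x - b

def active_set(x, tol=1e-12):
    """Indices i with r_i(x) = 0 (0-based; the slides count from 1)."""
    return [i for i, ri in enumerate(residual(x)) if abs(ri) <= tol]

x_bar = np.array([0.0, 1.0])
# residual(x_bar) -> [0, 1, 1, 1, 0]; active_set(x_bar) -> [0, 4],
# i.e. constraints {1, 5} in the slides' numbering.
```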
Algorithm outline
◮ We will adopt a linesearch-based descent algorithm for solving the LP:
  ◮ Determine a starting approximation x0.
  ◮ Repeat for k = 0, 1, . . .
    ◮ If xk is optimal, terminate.
    ◮ Determine a descent search direction pk.
    ◮ Determine a step length αk ≥ 0.
    ◮ Update the point: xk+1 = xk + αk pk.

Gradients and Descent directions
◮ The objective function of the minimization problem is written as
    f(x) = cᵀx,
where c is a constant vector.
◮ The gradient of f(x) is ∇f(x) = c.
◮ Define F(α) as the function value starting from a point x̄ along a search direction p with a step length α:
    F(α) = f(x̄ + αp) = cᵀ(x̄ + αp) = cᵀx̄ + αcᵀp = f(x̄) + αcᵀp.
Gradients and Descent directions
Cont’d
◮ Thus, the change in objective function value depends only on cᵀp.
◮ A search direction p1 such that cᵀp1 = 0 means that
    F(α) = f(x̄) + αcᵀp1 = f(x̄),
i.e. the function value is constant.
◮ A search direction p2 such that cᵀp2 < 0 means that
    F(α) = f(x̄) + αcᵀp2 < f(x̄) for α > 0,
i.e. the function value is reduced.
◮ Any search direction p such that cᵀp < 0 is thus a descent direction.
[Figure: the directions p1 (constant function value) and p2 (descent) relative to the gradient c.]

Constraint gradients and Descent directions
◮ Each constraint splits ℜⁿ into two halves: the feasible points {x : aᵀx ≥ β} and the infeasible points {x : aᵀx < β}.
◮ The gradient of a constraint aᵀx ≥ β is a.
◮ The gradient is always directed into the feasible set.
◮ For our example,
    A = [ a1ᵀ; a2ᵀ; a3ᵀ; a4ᵀ; a5ᵀ ] = [  1  0
                                          0  1
                                          1  1
                                         −1  0
                                          0 −1 ].
[Figure: the feasible region with the constraint gradients a1, . . . , a5 pointing into the feasible set.]
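The sign test cᵀp < 0 above is all that is needed to classify a direction; a minimal sketch (the function name is mine):

```python
import numpy as np

# Classify a search direction by the sign of c^T p, as on the slide.
c = np.array([-1.0, 1.0])   # objective gradient of the example

def classify_direction(p, c=c):
    s = float(c @ p)
    if s < 0:
        return 'descent'    # F(alpha) decreases for alpha > 0
    if s == 0:
        return 'constant'   # F(alpha) is unchanged
    return 'ascent'         # F(alpha) increases

# For the example, p = (1, -1) gives c^T p = -2: a descent direction.
```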
Residuals and search directions
◮ Consider a point x̄ (not necessarily feasible), a direction p and a constraint aiᵀx ≥ βi.
◮ For a step α along p, the residual is
    ri(x̄ + αp) = aiᵀx̄ + αaiᵀp − βi = ri(x̄) + αaiᵀp.
◮ If aiᵀp ≠ 0 it is possible to calculate how long a step σi would satisfy and activate the constraint:
    ri(x̄ + σi p) = ri(x̄) + σi aiᵀp = 0  ⇒  σi = −ri(x̄) / (aiᵀp).
◮ For aiᵀp = 0, define
    σi = +∞ if ri(x̄) ≥ 0, −∞ otherwise.

Residuals and search directions
Example
◮ At x̄ = (0, 1)ᵀ with p = (0, −1/2)ᵀ, we have
    r(x̄) = Ax̄ − b = (0, 1, 1, 1, 0)ᵀ,
    Ap = (0, −1/2, −1/2, 0, 1/2)ᵀ,
    σ = (+∞, 2, 2, +∞, 0)ᵀ.
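The σi formula, including the ±∞ convention, can be sketched as follows (sigma_of is my own name); run on the example it reproduces σ = (+∞, 2, 2, +∞, 0):

```python
import numpy as np

# Example LP data, min c^T x s.t. Ax >= b.
A = np.array([[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([0, 0, 0, -1, -1], dtype=float)

def sigma_of(x, p):
    """Step sigma_i along p that activates each constraint."""
    r = A @ x - b
    ap = A @ p
    sigma = np.empty(len(r))
    for i in range(len(r)):
        if ap[i] != 0:
            sigma[i] = -r[i] / ap[i]              # makes r_i = 0
        else:
            sigma[i] = np.inf if r[i] >= 0 else -np.inf
    return sigma

x_bar = np.array([0.0, 1.0])
p = np.array([0.0, -0.5])
s = sigma_of(x_bar, p)   # -> [inf, 2, 2, inf, 0]
```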
Search directions
Inactive constraints
◮ The vector p ≠ 0 is a feasible direction in the feasible point x̄ with respect to the constraint aiᵀx ≥ βi if there exists a positive scalar τi such that
    ri(x̄ + αp) ≥ 0,  0 ≤ α ≤ τi.
◮ For an inactive constraint (ri(x̄) > 0), all directions p are feasible.

Search directions
Active constraints
◮ For an active constraint (ri(x̄) = 0) we must maintain
    ri(x̄ + αp) ≥ 0
for α > 0.
◮ Since
    ri(x̄ + αp) = ri(x̄) + αaiᵀp = αaiᵀp ≥ 0,  with ri(x̄) = 0 and α > 0,
we conclude that aiᵀp must be ≥ 0.
◮ If aiᵀp1 = 0, the constraint remains active.
◮ If aiᵀp2 > 0, the constraint will be deactivated.
◮ If aiᵀp3 < 0, the constraint will be violated.
[Figure: the constraint gradient ai at x̄ with directions p1 (along the constraint), p2 (into the feasible set) and p3 (out of the feasible set).]
Search directions
Sets of constraints
◮ For a set of constraints, a vector p is a feasible direction from the feasible point x̄ if p ≠ 0 and there exists a positive scalar τ such that
    r(x̄ + αp) ≥ 0,  0 ≤ α ≤ τ,
or equivalently
    A(x̄ + αp) ≥ b,  0 ≤ α ≤ τ.
◮ As the only limitations on p are given by the active constraints, p will be feasible if
    aiᵀp ≥ 0, ∀i ∈ A(x̄),
or equivalently
    A_A p ≥ 0,
where A_A is the active set matrix in x̄.

Computing a search direction
◮ Assume we can find a solution to
    A_Aᵀ λ = c,
and some λi is negative.
◮ Then a search direction p can be determined as the solution of
    A_A p = ei,
where ei is a zero vector except for a 1 in the i:th element.
◮ This means that
    ri(x̄ + αp) > 0 for α > 0,
i.e. constraint i will be deactivated.
◮ A search direction p computed this way will be a descent direction, since
    cᵀp = (A_Aᵀλ)ᵀp = λᵀ A_A p = λᵀ ei = λi < 0.
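The two linear solves above can be sketched with numpy at the example point x̄ = (0, 1), where the active set is {1, 5} (indices below are 0-based):

```python
import numpy as np

c = np.array([-1.0, 1.0])
A_active = np.array([[1.0, 0.0], [0.0, -1.0]])   # active set matrix A_A

lam = np.linalg.solve(A_active.T, c)             # A_A^T lambda = c
# lam -> [-1, -1]: both multipliers are negative, so either constraint
# may be dropped; pick i = 1 (constraint 5 in the slides' numbering).
i = 1
e_i = np.zeros(2)
e_i[i] = 1.0
p = np.linalg.solve(A_active, e_i)               # A_A p = e_i
# p -> [0, -1], and c^T p = lam[i] = -1 < 0: a descent direction.
```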
Computing a search direction
Example
◮ At x̄ = (0, 1)ᵀ with c = (−1, 1)ᵀ, we have
    r(x̄) = (0, 1, 1, 1, 0)ᵀ,  A = {1, 5},  A_A = [ 1 0; 0 −1 ].
◮ A_Aᵀλ = c ⇒ λ = (−1, −1)ᵀ.
◮ Selecting i = 2,
    A_A p = e2 ⇒ p = (0, −1)ᵀ.
[Figure: the search direction p = (0, −1) from x̄ = (0, 1).]

Corners and edges
◮ A corner is an extreme point of the feasible set, which cannot lie on a line between two other feasible points.
◮ A corner is restricted in all n dimensions, i.e. the active set matrix A_A has at least n linearly independent rows.
◮ If A_A has more than n rows, the corner is degenerate, otherwise non-degenerate.
◮ An edge is a linear subspace of ℜⁿ, defined by an active set matrix A_A with n − 1 linearly independent rows.
◮ A minimizer of an LP problem can always be found at a corner.
◮ If the minimizer is not unique, all points on an adjoining edge are also minimizers.
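The claim that a minimizer sits at a corner can be checked by brute force for the example (my own sketch, practical only for tiny problems): intersect every pair of constraints, keep the feasible intersections, and evaluate cᵀx there.

```python
import numpy as np
from itertools import combinations

c = np.array([-1.0, 1.0])
A = np.array([[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([0, 0, 0, -1, -1], dtype=float)

corners = []
for i, j in combinations(range(5), 2):
    Ai = A[[i, j]]
    if abs(np.linalg.det(Ai)) < 1e-12:
        continue                          # parallel rows: no intersection
    x = np.linalg.solve(Ai, b[[i, j]])    # point where both are active
    if np.all(A @ x >= b - 1e-12):
        corners.append(x)                 # keep only feasible corners

best = min(corners, key=lambda x: float(c @ x))
# best -> (1, 0) with c^T x = -1, the corner the slides identify
# as the solution.
```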
Step length
◮ Assume that x̄ is a corner of the feasible set and we have calculated a search direction p from x̄.
◮ Let D denote the set of all inactive constraints with a diminishing residual along p, i.e. j ∈ D if ajᵀp < 0.
◮ Determine σj as the longest step that can be taken along p without violating constraint j, i.e.
    σj = −rj(x̄) / (ajᵀp) if j ∈ D,  +∞ otherwise.
◮ Then
    α = min_j σj
is the longest step we can take before activating any constraint.
◮ The new point is x̄ + αp.

Step length
Example
◮ At x̄ = (0, 1)ᵀ with p = (0, −1)ᵀ, we have
    r(x̄) = (0, 1, 1, 1, 0)ᵀ,  Ap = (0, −1, −1, 0, 1)ᵀ,
    D = {2, 3},  σ = (+∞, 1, 1, +∞, +∞)ᵀ  ⇒  α = 1.
[Figure: the step of length α = 1 from x̄ = (0, 1) to the corner (0, 0).]

The Simplex Algorithm
◮ Let x[n] denote the current point after n iterations. Given a non-degenerate corner x[n], the simplex algorithm is as follows:
◮ Determine a feasible descent direction p in x[n]:
  ◮ Identify the active set matrix A_A in x[n].
  ◮ Solve A_Aᵀλ = c.
  ◮ Choose an i such that λi < 0.
  ◮ Determine the search direction p as the solution of A_A p = ei.
◮ Determine the step length
    α = min_j σj,  where σj = −rj(x̄) / (ajᵀp) if j ∈ D, +∞ otherwise.
◮ Update the point:
    x[n+1] = x[n] + αp.

Degenerate corners
◮ At x1 = (0, 0)ᵀ, r(x̄) = (0, 0, 0, 1, 1)ᵀ and A = {1, 2, 3}. The corner is degenerate, so a working set of n = 2 of the three active constraints must be chosen.
◮ Choosing A = {1, 3} gives
    A_A = [ 1 0; 1 1 ],
    A_Aᵀλ = c ⇒ λ = (−2, 1)ᵀ ⇒ i = 1,
    A_A p = e1 ⇒ p = (1, −1)ᵀ,
    Ap = (1, −1, 0, −1, 1)ᵀ,  D = {2, 4},  σ = (+∞, 0, +∞, 1, +∞)ᵀ  ⇒  α = 0.
[Figure: the degenerate corner x1 = (0, 0) with three active constraints; the direction p = (1, −1) immediately activates constraint 2.]
Degenerate corners
Cont’d
◮ Choosing A = {2, 3} instead gives
    A_A = [ 0 1; 1 1 ],
    A_Aᵀλ = c ⇒ λ = (2, −1)ᵀ ⇒ i = 2,
    A_A p = e2 ⇒ p = (1, 0)ᵀ,
    Ap = (1, 0, 1, −1, 0)ᵀ,  D = {4},  σ = (+∞, +∞, +∞, 1, +∞)ᵀ  ⇒  α = 1.
[Figure: the step of length α = 1 from x1 = (0, 0) along p = (1, 0).]

At the solution
◮ At x2 = (1, 0)ᵀ, r(x̄) = (1, 0, 1, 0, 1)ᵀ and A = {2, 4}, so
    A_A = [ 0 1; −1 0 ],
    A_Aᵀλ = c ⇒ λ = (1, 1)ᵀ.
◮ There is no descent direction from x2, since no λi is negative.
◮ The corner x2 is the solution.
[Figure: the solution corner x2 = (1, 0).]
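The whole iteration can be sketched as one function (my own naming; the first negative multiplier is always chosen, and degenerate corners are not handled robustly, so this is a sketch for the non-degenerate case only):

```python
import numpy as np

def simplex_lp(c, A, b, x, tol=1e-12, max_iter=100):
    """Simplex iteration for min c^T x s.t. Ax >= b, started at a
    non-degenerate corner x of a bounded problem."""
    for _ in range(max_iter):
        r = A @ x - b
        act = [i for i in range(len(b)) if abs(r[i]) <= tol]
        A_act = A[act]                         # active set matrix A_A
        lam = np.linalg.solve(A_act.T, c)      # A_A^T lambda = c
        neg = [k for k in range(len(lam)) if lam[k] < -tol]
        if not neg:
            return x                           # no descent direction: optimal
        e = np.zeros(len(act))
        e[neg[0]] = 1.0                        # drop the first negative lambda
        p = np.linalg.solve(A_act, e)          # A_A p = e_i
        Ap = A @ p
        # longest step before an inactive constraint becomes active
        sigma = [-r[j] / Ap[j] for j in range(len(b))
                 if j not in act and Ap[j] < -tol]
        alpha = min(sigma)                     # boundedness assumed
        x = x + alpha * p
    raise RuntimeError('no convergence')

c = np.array([-1.0, 1.0])
A = np.array([[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([0, 0, 0, -1, -1], dtype=float)
x_star = simplex_lp(c, A, b, np.array([0.0, 1.0]))
# x_star -> (1, 0), the corner identified as the solution.
```

Started at the corner (0, 1), this sketch happens to visit only non-degenerate corners, reaching (1, 0) via (1, 1).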