MS-E2140 Linear Programming
Exercise 4
Fri 23.09.2016
Maari-B
Week 2
This week’s homework https://mycourses.aalto.fi/mod/folder/view.php?id=130923 is due no
later than Sunday 02.10.2016 23:55.
Exercise 4.1 True and False Statements about Simplex
Course book Exercise 3.18
Consider the simplex method applied to a standard form minimization problem, and assume that the
rows of the matrix A are linearly independent. For each of the statements that follow, give either a proof
or a counter example.
(a) An iteration of the simplex method might change the feasible solution while leaving the cost
unchanged.
(b) A variable that has just entered the basis cannot leave in the very next iteration.
(c) If there is a non-degenerate optimal basis, then there exists a unique optimal basis.
Solution
(a) False. In any iteration of the Simplex algorithm, the entering variable xj has negative reduced cost
c̄j . When xj enters the basis, the basic variables xB are modified by xB → xB + θdjB , variable xj
is modified by xj → xj + θ, and the cost changes by c′x → c′x + θc̄j . Therefore, if the current
solution changes, we must have θ > 0, and the cost changes by the amount θc̄j < 0, i.e., the cost
strictly decreases and cannot stay unchanged.
(b) False. Consider the problem min −x1 − 2x2 such that x1 + x2 ≤ 1, and x1 , x2 ≥ 0. The inequality
x1 + x2 ≤ 1 becomes x1 + x2 + x3 = 1 with x3 ≥ 0 when transformed into standard form. Assume
that the simplex algorithm starts at (x1 , x2 , x3 ) = (0, 0, 1). Taking either x1 or x2 into the basis
decreases the objective value, so both of them are candidate variables for entering the basis.
Now, if x1 is taken into the basis, the solution becomes (x1 , x2 , x3 ) = (1, 0, 0). At this point, taking x2
into the basis improves the objective value and gives the optimal solution (x1 , x2 , x3 ) = (0, 1, 0).
In this very next iteration, x1 has become non-basic, which disproves the statement. The figure below shows
the feasible region and the path followed by the example.
[Figure: feasible region {x1 + x2 ≤ 1, x1, x2 ≥ 0} in the (x1, x2) plane, with extreme points (0, 0), (1, 0), and (0, 1). The cost decreases toward (0, 1); the simplex path visits (0, 0) → (1, 0) → (0, 1).]
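As a sanity check, the pivot sequence above can be reproduced with a short NumPy sketch (a minimal illustration, not course code; the helper `reduced_costs` and the variable names are ours):

```python
import numpy as np

# Counterexample from part (b): min -x1 - 2x2  s.t.  x1 + x2 + x3 = 1, x >= 0.
c = np.array([-1.0, -2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])

def reduced_costs(basis):
    """Reduced costs c_bar' = c' - c_B' B^{-1} A for a list of basic indices."""
    B = A[:, basis]
    p = np.linalg.solve(B.T, c[basis])   # dual vector p' = c_B' B^{-1}
    return c - p @ A

# Basis {x3} at (0,0,1): both x1 and x2 have negative reduced cost.
print(reduced_costs([2]))   # -> [-1. -2.  0.]
# After x1 enters (basis {x1} at (1,0,0)): x2 still has negative reduced cost ...
print(reduced_costs([0]))   # -> [ 0. -1.  1.]
# ... so x2 enters next and x1 leaves, giving the optimal basis {x2} at (0,1,0).
print(reduced_costs([1]))   # -> [ 1.  0.  2.]
```

Since the reduced costs at the basis {x2} are all non-negative, (0, 1, 0) is optimal, and x1 indeed left the basis one iteration after entering it.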
(c) False. Consider the problem min x2 such that x1 + x2 ≤ 1, and x1 , x2 ≥ 0. The inequality
x1 + x2 ≤ 1 can again be transformed into x1 + x2 + x3 = 1 with x3 ≥ 0.
[Figure: feasible region {x1 + x2 ≤ 1, x1, x2 ≥ 0}. The cost x2 decreases downward, so every point on the segment between the extreme points (0, 0) and (1, 0) is an optimal solution.]
Two distinct optimal bases correspond to the extreme points (x1 , x2 , x3 ) = (0, 0, 1) with basic
variable x3 = 1 and (x1 , x2 , x3 ) = (1, 0, 0) with basic variable x1 = 1. Both bases are non-degenerate,
since their single basic variable is strictly positive, so a non-degenerate optimal basis exists while the optimal basis is not unique.
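The two bases can be checked numerically; the sketch below (variable names are our own) confirms that each basis is non-degenerate (positive basic variable) and optimal (non-negative reduced costs):

```python
import numpy as np

# Part (c) counterexample: min x2  s.t.  x1 + x2 + x3 = 1, x >= 0.
c = np.array([0.0, 1.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

for basis in ([2], [0]):                      # bases {x3} and {x1}
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)               # values of the basic variables
    p = np.linalg.solve(B.T, c[basis])        # dual vector p' = c_B' B^{-1}
    c_bar = c - p @ A                         # reduced costs
    # Each basis is non-degenerate (x_B > 0) and optimal (c_bar >= 0).
    print(basis, x_B, c_bar)
```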
Exercise 4.2 Simplex
Consider the problem
max 40x1 + 60x2
s.t. 2x1 + x2 ≤ 7,
x1 + x2 ≤ 4,
x1 + 3x2 ≤ 9,
x1 , x2 ≥ 0.
A feasible point for this problem is (x1 , x2 ) = (0, 3). Formulate the problem as a minimization
problem in standard form and verify whether or not this point is optimal. If not, solve the problem by
using the Simplex algorithm.
Solution
The minimization problem in standard form is
min  −40x1 − 60x2
s.t.  2x1 +  x2 + x3           = 7
       x1 +  x2      + x4      = 4
       x1 + 3x2           + x5 = 9
       x1, x2, x3, x4, x5 ≥ 0.

The corresponding constraint matrix is

A = [A1, A2, A3, A4, A5] =
    [ 2  1  1  0  0 ]
    [ 1  1  0  1  0 ]
    [ 1  3  0  0  1 ]

and the right-hand side vector is b = (7, 4, 9)′.
The cost vector is c′ = (c1 , c2 , c3 , c4 , c5 ) = (−40, −60, 0, 0, 0). The point (x1 , x2 ) = (0, 3) corresponds
to the basic feasible solution x = (x1 , x2 , x3 , x4 , x5 ) = (0, 3, 4, 1, 0) in standard form with basic variables
xB = (x2 , x3 , x4 ) = (3, 4, 1) and non-basic variables xN = (x1 , x5 ) = (0, 0). The corresponding basis is
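This correspondence can be verified with a few lines of NumPy (a sketch; the variable names are ours):

```python
import numpy as np

# Constraint data of the standard-form problem.
A = np.array([[2.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 0.0, 1.0]])
b = np.array([7.0, 4.0, 9.0])

# The point (x1, x2) = (0, 3) determines the slacks x3, x4, x5.
x = np.zeros(5)
x[1] = 3.0
x[2:] = b - A[:, :2] @ x[:2]        # slack values
print(x)                            # -> [0. 3. 4. 1. 0.]
assert np.allclose(A @ x, b)        # x is feasible
```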
B = [A2, A3, A4] =
    [ 1  1  0 ]
    [ 1  0  1 ]
    [ 3  0  0 ],
where Aj denotes the jth column of the constraint matrix. The cost vector for the basic variables is
c′B = (c2 , c3 , c4 ) = (−60, 0, 0) and c′N = (c1 , c5 ) = (−40, 0) for the nonbasic ones. The point x is
optimal if all the reduced costs are non-negative. The reduced cost vector c̄ is given by
c̄′ = c′ − c′B B−1 A.
The reduced costs of the basic variables are always zeros. Thus we only have to compute the reduced
costs of the nonbasic variables x1 and x5 . We first compute the inverse of the basis matrix B which is
B−1 =
    [ 0  0   1/3 ]
    [ 1  0  −1/3 ]
    [ 0  1  −1/3 ].
To facilitate the computation of reduced costs, we compute the (dual) vector
p′ = c′B B−1 = [0, 0, −20]
The reduced costs c̄1 and c̄5 of the non-basic variables x1 and x5 are thus computed as follows:
c̄1 = c1 − p′ A1 = −40 − (−20) = −20.
c̄5 = c5 − p′ A5 = 0 − (−20) = 20.
Since c̄1 = −20 < 0, the point x = (0, 3, 4, 1, 0) is not optimal.
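The dual vector and the two reduced costs can be checked numerically; the sketch below solves with B rather than forming B−1 explicitly (variable names are ours):

```python
import numpy as np

# Basis B = [A2, A3, A4] and the cost data from above.
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [3.0, 0.0, 0.0]])
c_B = np.array([-60.0, 0.0, 0.0])
A1 = np.array([2.0, 1.0, 1.0])              # column of x1
A5 = np.array([0.0, 0.0, 1.0])              # column of x5

p = np.linalg.solve(B.T, c_B)               # p' = c_B' B^{-1} = [0, 0, -20]
c_bar1 = -40.0 - p @ A1                     # reduced cost of x1: -20
c_bar5 = 0.0 - p @ A5                       # reduced cost of x5: +20
print(p, c_bar1, c_bar5)
```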
Let us denote by dj the jth basic direction corresponding to the nonbasic variable xj . The reduced
cost c̄1 is equal to the rate of cost change along the basic direction d1 . Thus, we can reduce the objective
function value by moving along the direction d1 . In general, we have
djB = −B−1 Aj .
Therefore, the basic components of d1 are d1B = −B−1 A1 = (d12 , d13 , d14 ) = (−1/3, −5/3, −2/3). The remaining
non-basic components are d11 = 1 (x1 must increase) and d15 = 0 (x5 must stay at zero). Thus we obtain
d1 = (1, −1/3, −5/3, −2/3, 0).
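The basic direction can be computed the same way, again solving with B instead of inverting it (a sketch; names are ours):

```python
import numpy as np

B = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [3.0, 0.0, 0.0]])
A1 = np.array([2.0, 1.0, 1.0])

d_B = -np.linalg.solve(B, A1)      # basic components of d1: (-1/3, -5/3, -2/3)
print(d_B)
```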
We can move up to θ units along the direction d1 as long as the new point y = x + θd1 ≥ 0. This
yields θ ≤ −xi /d1i for all basic i such that d1i < 0, so the largest possible value for θ is

θ = min over {i | d1i < 0, xi basic} of −xi /d1i = min{ 3/(1/3), 4/(5/3), 1/(2/3) } = min{ 9, 12/5, 3/2 } = 3/2.

The smallest (and thereby most constraining) ratio is −x4 /d14 = 3/2 and corresponds to variable x4. In other
words, by traveling a distance θ = 3/2 from point x = (x1 , x2 , x3 , x4 , x5 ) = (0, 3, 4, 1, 0) along direction
d1 = (1, −1/3, −5/3, −2/3, 0), we get x4 = 0, and if we move any further along d1, then x4 becomes negative
(and thereby infeasible). The new point we move to is:

y = x + θd1 = (0, 3, 4, 1, 0) + (3/2) · (1, −1/3, −5/3, −2/3, 0) = (3/2, 5/2, 3/2, 0, 0).
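The ratio test and the move to the new point can be sketched as follows (variable names are ours):

```python
import numpy as np

x_B = np.array([3.0, 4.0, 1.0])            # basic variables (x2, x3, x4)
d_B = np.array([-1/3, -5/3, -2/3])         # basic components of d1

ratios = -x_B / d_B                        # -> [9.0, 2.4, 1.5]
theta = ratios.min()                       # -> 1.5, attained by x4
print(ratios, theta)

# New point y = x + theta * d1:
x = np.array([0.0, 3.0, 4.0, 1.0, 0.0])
d1 = np.array([1.0, -1/3, -5/3, -2/3, 0.0])
y = x + theta * d1
print(y)                                   # -> [1.5, 2.5, 1.5, 0.0, 0.0]
```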
Thus x1 has entered the basis, and x4 has left the basis. The new basis is B = [A2 , A3 , A1 ], so a new
iteration has to be performed. After obtaining the new inverse B−1 and computing the reduced costs
(as above), we obtain c̄ = (0, 0, 0, 30, 10) ≥ 0, so we can conclude that y is optimal; the optimal cost
is c′y = −210, i.e., the maximum of the original problem is 210, attained at (x1 , x2 ) = (3/2, 5/2). The
optimality of y can also be verified from the graphical representation of the problem below.
[Figure: feasible region of the original problem bounded by 2x1 + x2 ≤ 7, x1 + x2 ≤ 4, and x1 + 3x2 ≤ 9, with extreme points marked. The cost decreases along d1 = (1, −1/3, −5/3, −2/3, 0), and the path ends at the optimal extreme point y = x + θd1, i.e., (x1, x2) = (3/2, 5/2).]
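The second iteration's reduced costs can likewise be verified numerically (a sketch; the basis ordering follows the text, and the variable names are ours):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 0.0, 1.0]])
c = np.array([-40.0, -60.0, 0.0, 0.0, 0.0])

basis = [1, 2, 0]                           # new basis (x2, x3, x1)
B = A[:, basis]
p = np.linalg.solve(B.T, c[basis])          # new dual vector
c_bar = c - p @ A
print(c_bar)                                # -> [0. 0. 0. 30. 10.], so y is optimal

y = np.array([1.5, 2.5, 1.5, 0.0, 0.0])
print(c @ y)                                # optimal cost -210 (maximum 210)
```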
Project work 2
The rest of this exercise session is devoted to Project work 2 (see project work instructions for details).