Optimum Design Notes09
Announcements:
Please note that the extra midterm exam is on 5/31.
Regarding homework grading, I have decided to count only the best four homework sets.
In total there are five formal homework sets and one bonus homework (the speech on 5/10).
Constrained Optimization: indirect methods
Saddle Point Theorem and Lagrangian Dual problem
Definition of a saddle point: If there exists ε > 0 such that for all x satisfying |x – x*| < ε and
for all λ satisfying |λ – λ*| < ε, the inequality f(x*, λ) ≤ f(x*, λ*) ≤ f(x, λ*) holds, then the
point P = [x* ; λ*] is a saddle point.
Definition of a primal problem (which is just the original problem) with inequality
constraints:
Find x which minimizes f(x),
subject to gj(x) ≤ 0, j = 1, 2, …, m
Definition of the dual problem corresponding to the primal problem defined above:
Find λ which maximizes h(λ), where h(λ) is the function obtained by finding the x which
minimizes the Lagrangian function
L(x, λ) = f(x) + Σj λj gj(x), with the sum taken over j = 1, 2, …, m.
Saddle point theorem:
If the point P = [x* ; λ*] (with λ* ≥ 0) is a saddle point of the Lagrangian function
associated with the primal problem defined above, then x* is a solution to the primal
problem.
Duality theorem:
The point P = [x* ; λ*] (with λ* ≥ 0) is a saddle point of the Lagrangian function
associated with the primal problem defined above, if and only if
1. x* is a solution to the primal problem.
2. λ* is a solution to the dual problem.
3. f(x*) = h(λ*)
For more details, please read Section 3.3 (and proofs of the related theorems in Chapter
6) of the book by Snyman.
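As a concrete check of the duality theorem, the following sketch works through a tiny example of my own (not one from the notes): minimize f(x) = x^2 subject to g(x) = 1 − x ≤ 0. Here L(x, λ) = x^2 + λ(1 − x), the inner minimization over x gives x = λ/2, and so h(λ) = λ − λ^2/4:

```python
# Tiny illustrative primal: minimize f(x) = x**2 subject to 1 - x <= 0
# (i.e., x >= 1). The primal solution is x* = 1 with f(x*) = 1.

def h(lam):
    """Dual function: minimum of L(x, lam) = x**2 + lam*(1 - x) over x."""
    x = lam / 2.0                  # stationary point of L in x
    return x**2 + lam * (1.0 - x)

# Maximize h over lam >= 0 by a coarse grid search on [0, 4].
lam_star = max((i * 0.001 for i in range(4001)), key=h)
x_star = lam_star / 2.0

print(lam_star, x_star, h(lam_star))  # 2.0 1.0 1.0
```

Since f(x*) = h(λ*) = 1 with λ* = 2 ≥ 0, all three conditions of the duality theorem hold on this small example.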
Although the next topic of this class should be direct methods for constrained
optimization, some more background knowledge (linear and quadratic programming)
should be introduced first.
Linear Programming (LP)
LP problems were first recognized in the 1930s by economists while developing
methods for the optimal allocation of resources. George B. Dantzig, who was a
member of the US Air Force, formulated the general LP problem and devised the
simplex method of solution in 1947.
Standard problem statement: Minimize f(x) = cTx = c1x1 + c2x2 + … + cnxn
subject to [A]x = b , (m equations)
and
x ≥ 0 , (n inequalities)
where the m by n matrix [A] = [a11 a12 … a1n; a21 a22 … a2n; …; am1 am2 … amn]
(rows separated by semicolons) and the m by 1 vector b = [b1; b2; …; bm].
All the elements in [A], b, and c are constants.
Note: Any linear programming problem can be transformed into the standard form.
Definition of basic solution, basic and nonbasic variables:
A solution of [A]x = b obtained by setting (n − m) of the variables to zero and solving
the equations is called a basic solution. The (n − m) variables set to zero are called
nonbasic variables, and the remaining m variables are basic variables.
Example 1:
Maximize 4x1 + 5x2
Subject to -x1 + x2 ≤ 4, x1 + x2 ≤ 6 , and x1 ≥ 0, x2 ≥ 0
This problem can be transformed into the standard form by rewriting the maximization
as a minimization and adding two slack variables.
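The standard-form data for this example can be written out explicitly. A minimal sketch using plain Python lists; the point x = (1, 5) used in the check is an assumed value here (it is the optimum this problem turns out to have):

```python
# Example 1 in standard form: maximizing 4x1 + 5x2 becomes minimizing
# -4x1 - 5x2, and slack variables x3, x4 turn the "<=" rows into
# equalities:
#   -x1 + x2 + x3      = 4
#    x1 + x2      + x4 = 6,   x1, ..., x4 >= 0
c = [-4, -5, 0, 0]                 # slack variables carry zero cost
A = [[-1, 1, 1, 0],
     [1, 1, 0, 1]]
b = [4, 6]

# Feasibility check for x = (1, 5) with both slacks zero (assumed optimum):
x = [1, 5, 0, 0]
residuals = [sum(aj * xj for aj, xj in zip(row, x)) - bi
             for row, bi in zip(A, b)]
print(residuals)                               # [0, 0]
print(-sum(cj * xj for cj, xj in zip(c, x)))   # objective value 29
```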
Theorem 1: The feasible region of an LP problem is convex. Any local optimum is
also global. At the optimum, at least one constraint must be active.
Theorem 2: The collection of feasible solutions of an LP problem constitutes a convex
set whose extreme points (vertices) correspond to basic feasible solutions.
Theorem 3: If there is a feasible solution (or optimal feasible solution), there is a basic
feasible solution (or optimal feasible solution).
Thus, solving an LP problem is reduced to the search for optimum only among the
basic feasible solutions. For a problem having n variables and m constraints
(excluding the x ≥ 0 constraints), there are at most n!/(m! (n-m)!) basic solutions. The
simplex method is a systematic way of finding the optimal one among them.
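To make this count concrete, the basic solutions of Example 1 in standard form (n = 4 variables, m = 2 equations, so at most 4!/(2! 2!) = 6) can be enumerated directly. A sketch using Cramer's rule for each 2 by 2 subsystem:

```python
# Enumerate basic solutions of [A]x = b for Example 1's slack form
#   -x1 + x2 + x3 = 4,   x1 + x2 + x4 = 6.
from itertools import combinations

A = [[-1, 1, 1, 0],
     [1, 1, 0, 1]]
b = [4, 6]
n = 4

def solve_basic(i, j):
    """Set the other two variables to zero and solve the 2x2 system
    for the basic pair (xi, xj) by Cramer's rule."""
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if det == 0:
        return None                # dependent columns: no basic solution
    xi = (b[0] * A[1][j] - A[0][j] * b[1]) / det
    xj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
    x = [0.0] * n
    x[i], x[j] = xi, xj
    return x

solutions = [s for s in (solve_basic(i, j)
                         for i, j in combinations(range(n), 2))
             if s is not None]
feasible = [s for s in solutions if min(s) >= 0]
print(len(solutions), len(feasible))  # 6 4
```

Of the six basic solutions, four are feasible; the best of these, x = (1, 5, 0, 0), is the optimum the simplex method will find.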
Canonical form:
A system of equations (m equations and n variables) is said to be in canonical form if
each equation has a variable (with unit coefficient) that does not appear in any other
equation.
For example, the set of equations [A]x = b can be transformed by Gauss-Jordan
elimination process into the form:
[I]x(m) + [Q]x(n-m)= bnew
where [I] = m-dimensional identity matrix, [Q] = m by (n-m) matrix consisting of
coefficients of the variables from xm+1 to xn, x(m) is the m by 1 column vector
(containing basic variables), x(n-m) is the (n-m) by 1 column vector (containing
nonbasic variables), and bnew is the column vector resulting from b after the Gauss-Jordan
elimination process.
Pivot step: interchange of basic and nonbasic variables.
For example, in Example 1 the additional slack variables x3 and x4 are first chosen as
basic variables. Using a pivot step, we can instead choose x3 and x1 as basic variables
and leave x2 and x4 nonbasic; that is, we interchange the roles of x4 and x1.
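This pivot step can be carried out by hand with plain lists (an illustrative sketch; the two rows restate Example 1's constraints with slacks added):

```python
# Slack-form rows [A | b] of Example 1, with x3 and x4 initially basic:
#   -x1 + x2 + x3      = 4
#    x1 + x2      + x4 = 6
row1 = [-1.0, 1.0, 1.0, 0.0, 4.0]
row2 = [1.0, 1.0, 0.0, 1.0, 6.0]

# Make x1 basic in row 2: its coefficient there is the pivot element.
pivot = row2[0]                          # a21 = 1
row2 = [v / pivot for v in row2]         # scale pivot row to unit coefficient
factor = row1[0]                         # x1 coefficient to eliminate (-1)
row1 = [v - factor * w for v, w in zip(row1, row2)]

print(row1)  # [0.0, 2.0, 1.0, 1.0, 10.0] -> x3 basic in row 1
print(row2)  # [1.0, 1.0, 0.0, 1.0, 6.0]  -> x1 basic in row 2
```

Setting the nonbasic variables x2 = x4 = 0 then reads off the new basic solution x1 = 6, x3 = 10.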
Simplex algorithm:
1. Start with an initial basic feasible solution. This is readily obtained if all
constraints are “≤” type and the right-hand side elements of the constraint
equations are nonnegative. This is because the additional slack variables can be
selected as basic and the others as nonbasic. If there are other constraints, the
two-phase procedure (which is described later) will be used.
2. The cost function must be in terms of only the nonbasic variables. This is readily
available if all constraints are “≤” type.
3. If all the cost coefficients for nonbasic variables are nonnegative, the optimum
solution is obtained. Otherwise, identify the pivot column (with the most negative
cost coefficient).
4. If all elements in the pivot column are nonpositive, the problem is unbounded and
the problem formulation should be reexamined. Otherwise, among the positive
elements in the pivot column, identify the pivot row (with the smallest ratio bi/aij).
The pivot element (in the pivot row and pivot column) is then determined.
5. Complete the pivot step. That is, perform Gauss-Jordan elimination so that the
coefficient of the pivot element becomes 1 and other coefficients in the same
column become 0.
6. Identify the new basic and nonbasic variables and the current cost function value,
then go to step 3.
Example: Solve example 1 using the simplex algorithm.
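The steps above can be sketched as a minimal tableau implementation (illustrative only: no degeneracy or cycling safeguards, dense lists throughout). Applied to Example 1 in standard form it reaches x = (1, 5) with objective value 29:

```python
def simplex(tableau, basic):
    """Tableau simplex following steps 1-6. `tableau` holds the
    constraint rows [A | b] plus a final cost row; `basic` lists the
    basic variable (column index) of each constraint row."""
    m = len(tableau) - 1                       # number of constraint rows
    while True:
        cost = tableau[-1][:-1]
        piv_col = min(range(len(cost)), key=lambda j: cost[j])
        if cost[piv_col] >= 0:                 # step 3: optimum reached
            break
        # Step 4: ratio test over the positive pivot-column entries.
        ratios = [(tableau[i][-1] / tableau[i][piv_col], i)
                  for i in range(m) if tableau[i][piv_col] > 0]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, piv_row = min(ratios)
        # Step 5: Gauss-Jordan pivot on (piv_row, piv_col).
        p = tableau[piv_row][piv_col]
        tableau[piv_row] = [v / p for v in tableau[piv_row]]
        for i in range(len(tableau)):
            if i != piv_row:
                f = tableau[i][piv_col]
                tableau[i] = [v - f * w
                              for v, w in zip(tableau[i], tableau[piv_row])]
        basic[piv_row] = piv_col               # step 6: swap variables
    x = [0.0] * (len(tableau[0]) - 1)
    for i, j in enumerate(basic):
        x[j] = tableau[i][-1]
    # The bottom-right entry ends at -(minimized cost), which here is
    # the value of the original maximization.
    return x, tableau[-1][-1]

# Example 1 in standard form: minimize -4x1 - 5x2 with slacks x3, x4
# basic, so an initial basic feasible solution is available (step 1).
tableau = [[-1.0, 1.0, 1.0, 0.0, 4.0],
           [1.0, 1.0, 0.0, 1.0, 6.0],
           [-4.0, -5.0, 0.0, 0.0, 0.0]]
x, obj = simplex(tableau, basic=[2, 3])
print(x[:2], obj)  # [1.0, 5.0] 29.0
```

Two pivots are needed: x2 enters first (most negative cost coefficient, -5), then x1, matching the hand pivot shown earlier.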