Convex and Concave Functions
(d) \( f = 4x_1^2 + 3x_2^2 + 5x_3^2 + 6x_1 x_2 + x_1 x_3 - 3x_1 - 2x_2 + 15 \)

\[
\mathbf{H}(\mathbf{X}) =
\begin{bmatrix}
\partial^2 f/\partial x_1^2 & \partial^2 f/\partial x_1\,\partial x_2 & \partial^2 f/\partial x_1\,\partial x_3 \\
\partial^2 f/\partial x_2\,\partial x_1 & \partial^2 f/\partial x_2^2 & \partial^2 f/\partial x_2\,\partial x_3 \\
\partial^2 f/\partial x_3\,\partial x_1 & \partial^2 f/\partial x_3\,\partial x_2 & \partial^2 f/\partial x_3^2
\end{bmatrix}
=
\begin{bmatrix}
8 & 6 & 1 \\
6 & 6 & 0 \\
1 & 0 & 10
\end{bmatrix}
\]

Here the principal minors are given by

\[
|8| = 8 > 0, \qquad
\begin{vmatrix} 8 & 6 \\ 6 & 6 \end{vmatrix} = 12 > 0, \qquad
\begin{vmatrix} 8 & 6 & 1 \\ 6 & 6 & 0 \\ 1 & 0 & 10 \end{vmatrix} = 114 > 0
\]

and hence the matrix \(\mathbf{H}(\mathbf{X})\) is positive definite for all real values of \(x_1\), \(x_2\), and \(x_3\). Therefore, \(f(\mathbf{X})\) is a strictly convex function.
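As a quick numerical check of this result (a minimal NumPy sketch added here for illustration, not part of the original text), the leading principal minors of H can be evaluated directly via Sylvester's criterion:

import numpy as np

# Hessian of f from part (d); it is constant because f is quadratic
H = np.array([[8.0, 6.0, 1.0],
              [6.0, 6.0, 0.0],
              [1.0, 0.0, 10.0]])

# Sylvester's criterion: H is positive definite if and only if all
# leading principal minors are positive
minors = [np.linalg.det(H[:k, :k]) for k in range(1, 4)]
print(minors)                      # [8.0, 12.0, 114.0] up to round-off
print(all(m > 0 for m in minors))  # True, so f is strictly convex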
B
Some Computational Aspects of Optimization
Several methods were presented for solving different types of optimization problems
in Chapters 3 to 14. This appendix is intended to give some guidance to the reader in
choosing a suitable method for solving a particular problem along with some computational details. Most of the discussion is aimed at the solution of nonlinear programming
problems.
B.1 CHOICE OF METHOD
Several factors must be considered in selecting a method to solve a given
optimization problem. Some of them are:
1. The type of problem to be solved (general nonlinear programming problem,
geometric programming problem, etc.)
2. The availability of a ready-made computer program
3. The calendar time required for the development of a program
4. The necessity of derivatives of the functions f and g_j, j = 1, 2, . . . , m
5. The available knowledge about the efficiency of the method
6. The accuracy of the solution desired
7. The programming language and quality of coding desired
8. The robustness and dependability of the method in finding the true optimum
solution
9. The generality of the program for solving other problems
10. The ease with which the program can be used and its output interpreted
B.2 COMPARISON OF UNCONSTRAINED METHODS
A number of studies have been made to evaluate the various unconstrained minimization
methods. Moré, Garbow, and Hillstrom [B.1] provided a collection of 35 test functions
for testing the reliability and robustness of unconstrained minimization software. The
performance of eight unconstrained minimization methods was evaluated by Box [B.2]
using a set of test problems with up to 20 variables. Straeter and Hogge [B.3] compared
four gradient-based unconstrained optimization techniques using two test problems.
A comparison of several variable metric algorithms was made by Shanno and Phua
[B.4]. Sargent and Sebastian presented numerical experiences with unconstrained minimization algorithms [B.5]. On the basis of these studies, the following general conclusions can be drawn.
If the first and second derivatives of the objective function (f) can be evaluated
easily (either in closed form or by a finite-difference scheme), and if the number of
design variables is not large (n ≤ 50), Newton's method can be used effectively. For
n greater than about 50, the storage and inversion of the Hessian matrix at each stage
become quite tedious, and the variable metric methods may prove more useful.
As the problem size increases (beyond n = 100 or so), the conjugate gradient method
becomes more powerful.
In many practical problems, the first derivatives of f can be computed more accurately than the second derivatives. In such cases, the BFGS and DFP methods become
an obvious choice for minimization. Of these two, the BFGS method is the more stable
and efficient. If the evaluation of the derivatives of f is extremely difficult or if the
function does not possess continuous derivatives, Powell's method can be used to solve
the problem efficiently.
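These guidelines carry over directly to modern software. The following sketch is an illustration added here, not from the original text; it uses SciPy's minimize routine and its built-in Rosenbrock test function to show how the choice of method follows the availability of derivatives:

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])

# Second derivatives available and n small: a Newton-type method
res_newton = minimize(rosen, x0, method='Newton-CG',
                      jac=rosen_der, hess=rosen_hess)

# Only first derivatives available: BFGS (quasi-Newton / variable metric)
res_bfgs = minimize(rosen, x0, method='BFGS', jac=rosen_der)

# Large n: nonlinear conjugate gradient needs only gradients and O(n) storage
res_cg = minimize(rosen, x0, method='CG', jac=rosen_der)

# No derivatives available, or f not continuously differentiable: Powell's method
res_powell = minimize(rosen, x0, method='Powell')

for name, res in [('Newton-CG', res_newton), ('BFGS', res_bfgs),
                  ('CG', res_cg), ('Powell', res_powell)]:
    print(name, res.x, res.nfev)

Newton-CG exploits the Hessian when it is cheap, BFGS builds its own curvature estimate from gradients alone, and Powell's method needs only function values.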
With regard to the one-dimensional minimization required in all the unconstrained
methods, the Newton and cubic interpolation methods are most efficient when the
derivatives of f are available. Otherwise, the Fibonacci or the golden section method
has to be used.
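The golden section method admits a very short implementation. The sketch below is added for illustration (the test function and interval are placeholders, not from the text); it shrinks the bracket [a, b] by the inverse golden ratio at every iteration:

import math

def golden_section(f, a, b, tol=1e-6):
    # Inverse golden ratio, approximately 0.618
    gr = (math.sqrt(5.0) - 1.0) / 2.0
    c = b - gr * (b - a)   # interior points dividing [a, b]
    d = a + gr * (b - a)
    while abs(b - a) > tol:
        # Keep the subinterval that must contain the minimum
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - gr * (b - a)
        d = a + gr * (b - a)
    return 0.5 * (a + b)

# Example: minimum of a simple quadratic on [0, 2]
print(golden_section(lambda x: (x - 1.3)**2, 0.0, 2.0))  # ~1.3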
B.3 COMPARISON OF CONSTRAINED METHODS
The comparative evaluation of nonlinear programming techniques has been conducted by
several investigators. Colville [B.6] compared the efficiencies of 30 codes using eight
test problems that involve 3 to 16 design variables and 0 to 14 constraints. However,
the codes were tested at different sites on different computers and hence the study was
not considered reliable. Eason and Fenton [B.7] conducted a comparative study of 20
codes using 13 problems that also included the problems used by Colville. However,
their study was confined primarily to penalty function-type methods. Sandgren and
Ragsdell [B.8] studied the relative efficiencies of the leading nonlinear programming
methods of the day more systematically. They studied 24 codes using 35 problems,
including some of those used by Colville and Eason and Fenton.
The number of design variables varied from 2 to 48 and the number of constraints
ranged from 0 to 19; some problems involved equality constraints, too. They found
the GRG method to be most robust and efficient followed by the exterior and interior
penalty function methods.
Schittkowski published the results of his study of nonlinear programming codes in
1980 [B.9]. He experimented with 20 codes on 180 randomly generated test problems
using multiple starting points. Based on his study, sequential quadratic programming
was found to be the most efficient, followed by the GRG method, the method of
multipliers, and the penalty function methods, in that order. Similar comparative
studies of geometric programming codes have also been conducted [B.10-B.12]. Although
the studies above were quite extensive, their conclusions may not be of much use in
practice since the studies were limited to relatively few methods and, further, to specially
formulated test problems that are not related to real-life problems. Thus each new practical problem has to be tackled almost independently based on past experience. The
following guidelines are applicable for a general problem.
The sequential quadratic programming approach can be used for solving a variety of
problems efficiently. The GRG method and Zoutendijk's method of feasible directions,
although slightly less efficient, can also be used for the efficient solution of constrained
problems. The augmented Lagrange multiplier (ALM) and penalty function methods are
less efficient but are robust and reliable in finding the solution.
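As one concrete illustration of applying an SQP-type method in modern software (a sketch added here, not from the original text), SciPy's SLSQP solver, a sequential least-squares QP variant of the SQP idea, handles a small constrained problem as follows; the objective and constraint are hypothetical placeholders:

import numpy as np
from scipy.optimize import minimize

# Hypothetical problem: minimize a convex quadratic subject to one
# linear inequality (SciPy's 'ineq' convention means fun(x) >= 0) and bounds
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.5)**2
cons = [{'type': 'ineq', 'fun': lambda x: x[0] - 2.0*x[1] + 2.0}]
bounds = [(0.0, None), (0.0, None)]

res = minimize(f, x0=np.array([2.0, 0.0]), method='SLSQP',
               bounds=bounds, constraints=cons)
print(res.x, res.fun)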
B.4 AVAILABILITY OF COMPUTER PROGRAMS
Many computer programs are available to solve nonlinear programming problems.
Notable among these is the book by Kuester and Mize [B.13], which gives Fortran
programs for solving linear, quadratic, geometric, dynamic, and nonlinear programming
problems. During practical computations, it is important to note that a method that
works well for a given class of problems may work poorly for others. Hence it is
usually necessary to try more than one method to solve a particular problem efficiently.
Further, the efficiency of any nonlinear programming method depends largely on the
values of adjustable parameters such as starting point, step length, and convergence
requirements. Hence a proper set of values for these adjustable parameters can be found
only through a trial-and-error procedure or through experience gained in working with
the method for similar problems. It is also desirable to run the program with different
starting points to avoid local and false optima. It is advisable to test the two convergence
criteria stated in Section 7.21 before accepting a point as a local minimum.
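The advice about multiple starting points is easy to automate. The sketch below is added for illustration (the objective and bounds are placeholders, not from the text); it runs a local method from several random starting points and keeps the best local minimum found:

import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_starts=20, seed=0):
    # Draw random starting points inside the given box and keep the
    # best local minimum; this guards against local and false optima
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0, method='BFGS')
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best

# Placeholder multimodal objective in one variable
f = lambda x: np.sin(5.0 * x[0]) + 0.1 * (x[0] - 0.5)**2
print(multistart(f, bounds=[(-3.0, 3.0)]).x)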
Moré and Wright present information on the current state of numerical optimization
software in [B.16]. Several software systems such as IMSL, MATLAB, and ACM
contain programs to solve optimization problems. The relevant addresses are
IMSL
7500 Bellaire Boulevard
Houston, TX 77036
MATLAB
The MathWorks, Inc.
24 Prime Park Way
Natick, MA 01760
ACM Distribution Service
c/o International Mathematics and Statistics Service
7500 Bellaire Boulevard
Houston, TX 77036
In addition, the commercial structural optimization packages listed in Table B.1
are available in the market [B.14, B.15]. Most of these packages rely on finite-element
analysis for objective and constraint function evaluations and use several types of
approximation strategies.