A book review by M.A. Goberna, published in
Newsletter of the European Mathematical Society No. 61, September 2006, 41-42.
J. Brinkhuis and V. Tikhomirov: Optimization: Insights and Applications,
Princeton Series in Applied Mathematics, Princeton University Press, Princeton and
Oxford, 2005, 658 pp., £51.95/hbk, ISBN 0-691-10287-0, ISBN 0-691-10287-2
This book provides an excellent overview of the theory, methods and applications of
continuous optimization problems. In more detail, the book deals with one-dimensional
continuous optimization problems. More in detail, the book deals with one-dimensional
optimization (Chapter 1), unconstrained optimization (Chapter 2), optimization under
equality constraints (Chapter 3), optimization under inequality constraints (Chapter 4),
linear programming (Chapter 6), convex optimization (Chapter 7), mixed smooth-convex optimization (Chapter 10), dynamic programming in discrete time (Chapter 11),
and dynamic optimization in continuous time (Chapter 12). The following four-step
procedure, conveniently adapted to each of the above families of continuous
optimization problems, is proposed for computing an optimal solution of those
problems which are analytically solvable (i.e., by a formula):
1. Establish the existence of global solutions.
2. Write down the first order necessary conditions.
3. Investigate these conditions.
4. Write down the conclusions.
The main tools in Step 1 are the classical Weierstrass theorem (applied to some
bounded non-empty sublevel set) and its extension to coercive functions. Step 2 is based
upon the so-called Fermat-Lagrange principle (i.e., conditions of multiplier type), for
finite-dimensional optimization problems, and the Pontryagin principle, for dynamic
optimization problems. The particular versions of the Fermat-Lagrange principle for the
different classes of mathematical programming problems considered in this book (all of
them involving differentiable and/or convex functions, i.e., functions admitting linear
approximations) are obtained via the tangent space theorem (closely related to the
implicit function theorem) for smooth problems, and by means of the supporting
hyperplane theorem (related to the separation theorem) in the case of convex
problems. Typically, Step 3 provides a list of candidates and Step 1 allows the
identification of optimal solutions among them, if the given problem is solvable. The
second order conditions in Chapter 5 give some insight although they do not play a
crucial role in this approach because they can seldom be checked in practice. A similar
point of view is adopted regarding the constraint qualifications.
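As a concrete illustration of this scheme (my own toy example, not one taken from the book), the four steps can be traced on a simple equality-constrained problem: minimize f(x, y) = x² + y² subject to x + y = 1.

```python
# Illustrative sketch (not from the book): the reviewer's four-step scheme
# applied to: minimize f(x, y) = x**2 + y**2 subject to x + y = 1.

def f(x, y):
    return x * x + y * y

# Step 1 (existence): f is coercive and the constraint set {x + y = 1} is
#         closed, so a global minimum exists (Weierstrass/coercivity argument).
# Step 2 (first-order conditions): the Fermat-Lagrange system
#         grad f = lam * grad g  gives  2x = lam, 2y = lam, x + y = 1.
# Step 3 (investigate): the system forces x = y, hence x = y = 1/2.
candidate = (0.5, 0.5)

# Step 4 (conclusion): the unique candidate is the global minimizer, with
# value 0.5. Numerical sanity check along the constraint y = 1 - x:
values = [f(t, 1 - t) for t in [i / 100 for i in range(-100, 201)]]
assert min(values) >= f(*candidate) - 1e-12
print(f(*candidate))  # 0.5
```

The example is deliberately trivial; its point is only to show how existence (Step 1) turns the single stationary point found in Steps 2-3 into a certified global solution.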
Concerning those optimization problems which cannot be solved analytically, but admit
a convex reformulation, the authors propose to solve them by means of interior point
methods (such as the self-concordant barrier method) or cutting plane methods (such
as the ellipsoid method). These and other complementary methods (e.g., line search methods
and linear optimization methods) are described in Chapters 6 and 7.
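To give a flavor of the numerical side, here is a minimal sketch (mine, not the book's) of one of the classical one-dimensional techniques in the line-search family the review mentions: golden-section search on a unimodal function. The objective and tolerance are arbitrary illustrative choices.

```python
# Hedged illustration: a simple golden-section line search. This variant
# re-evaluates f at both interior points each iteration (less efficient than
# the textbook version that reuses one evaluation, but easier to read).
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b]; returns the midpoint of the final bracket."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, approximately 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c               # minimizer lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d               # minimizer lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x_star = golden_section(lambda x: (x - 2) ** 2 + 1, 0.0, 5.0)
print(round(x_star, 6))  # 2.0
```

Each iteration shrinks the bracket by the constant factor 1/phi, so the method converges linearly without using derivatives, which is why such searches serve as building blocks inside the higher-dimensional methods the book describes.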
Each chapter contains a rich list of selected applications (many of them unusual in the
standard textbooks) that are presented either as motivating examples or as
complementary exercises. Classical optimization problems (most of them concerning
geometric objects), physical laws, and economic models usually admit an analytic solution,
whereas real-life (or pragmatic) applications may require the use of numerical methods.
The solutions are given in Appendix H. Chapters 8 and 9 are devoted to economic and
mathematical applications, many of them really surprising.
The previous description of the content of the book suggests that it is one more of the
many good textbooks that have been written on continuous optimization. Nevertheless,
this book presents the following novel features:
1. The presentation of the material is as friendly, simple, and suggestive as
possible, including many illustrative examples and historical notes about results,
methods and applications. These notes show important contributions of Russian
mathematicians to optimization and related topics that have until now been largely
overlooked in the Western academic world.
2. The book is totally self-contained, although some familiarity with basic calculus
and algebra could help the reader. The first contact with each topic is intuitive,
showing the geometrical meaning of concepts (including those of basic
calculus), results and numerical methods by means of suitable pictures. For
instance, all the main ideas in the book are sketched in Chapter 1, about one-dimensional
optimization, which could be read by college students. Subsection
1.5.1, entitled "All methods of continuous optimization in a nutshell", is an
excellent introduction to this topic. In order to understand the basic proofs and to
be able to obtain analytic solutions, the reader must recall some elements of
linear algebra, real analysis and continuity (Weierstrass theorem) that can be
found in Appendices A, B and C, respectively.
3. The basic part of the book is written in a personal way, emphasizing the
underlying ideas that are usually hidden behind the technical details. For
instance, in the authors’ opinion, the secret power of the Lagrange multiplier
method consists in reversing the order of the two tasks, elimination and
differentiation, and not in the use of multipliers (as many experts think). In the
same vein, personal anecdotes are also reported, e.g., some open
problems, most of them extending classical optimization problems, that the authors
posed in their lectures and that were brilliantly solved by their students.
4. The book provides new, simple proofs of important results in optimization
theory (such as the Pontryagin principle in Appendix G) and also of results in
other mathematical fields that can be derived from optimization theory (such as the
fundamental theorem of algebra, in Chapter 2). Most formal proofs are confined
to the last two appendices, which are written in a fully analytic style.
This book can be used for different courses on continuous optimization, from
introductory to advanced, for any field for which optimization is relevant: mathematics,
engineering, economics, physics, etc. The introduction of each chapter describes a
“royal road” containing the essential tools for problem solving. Examples of possible
courses based on materials contained in this textbook are:
Basic optimization course: Chapters 1, 2, 3, 4, and Appendix D (Crash course on
problem solving).
Intermediate optimization course: Chapters 5, 6, 7, 10, and Appendices E and F.
Advanced course on the applications of optimization: Chapters 8 and 9.
Advanced course on dynamic optimization: Chapters 11 and 12, and Appendix G.
The website of the first author (people.few.eur.nl/brinkhuis) contains references to
implementations of optimization algorithms and a list of corrections allowing the repair
of the few shortcomings that are unavoidable in a first edition despite its careful
production. In my opinion, this stimulating textbook will be for the teaching of
optimization what Spivak's "Calculus" was for the teaching of that subject (and even
real analysis) in the 1970s.
Miguel A. Goberna received his PhD in 1979 from the Universidad de Valencia and is
currently Professor at the Universidad de Alicante (Spain). He has published more than
60 research papers on optimization and related topics (inequality systems, convexity,
and set-valued analysis) and several books, such as Linear Semi-Infinite Optimization, J.
Wiley, 1998 (with M.A. López), Semi-Infinite Programming: Recent Advances, Kluwer,
2001 (with M.A. López, Eds.), and Linear Optimization, McGraw-Hill, 2004 (in
Spanish, with V. Jornet and R. Puente).