MSMA International - Ecole Centrale de Nantes
Numerical Analysis
Anthony NOUY
anthony.nouy@ec-nantes.fr
Office F
Part I: Introduction
- Origin of problems in numerical analysis
- References
Origin of problems in numerical analysis I

How to interpret reality with a computer: from a continuous world to a discrete world.

Numerical solution of a differential equation: find $u$ such that $A(u) = b$.

Example (1D diffusion equation, beam in traction): find $u(x)$ such that
$$-\frac{d}{dx}\left(\frac{du}{dx}\right) = b(x) \quad \text{for } x \in (0,1), \qquad u(0) = u(1) = 0.$$
Origin of problems in numerical analysis II

Approximation: from a continuous to a discrete representation. Represent a function $u$ on a finite-dimensional approximation space:
$$u(x) \approx u_n(x) = \sum_{i=1}^n u_i h_i(x).$$
The solution is then represented by the vector $u = (u_1, \dots, u_n) \in \mathbb{R}^n$. For the definition of the expansion, there are different alternatives, such as methods based on a weak formulation of the problem.

Example (Galerkin approximation): find $u \in V$ such that
$$\int_0^1 \frac{du}{dx}\frac{dv}{dx}\,dx = \int_0^1 b\,v\,dx \quad \forall v \in V,$$
and replace the function space $V$ by the approximation space $V_n = \{ v(x) = \sum_{i=1}^n v_i h_i(x) \}$.
Origin of problems in numerical analysis III

If $A$ is a linear operator, the initial continuous equation is transformed into a linear system of equations: find $u \in \mathbb{R}^n$ such that
$$Au = b,$$
where $A \in \mathbb{R}^{n\times n}$ is a matrix and $b \in \mathbb{R}^n$ a vector.

In order to construct the system of equations (matrix $A$ and right-hand side $b$), one also needs numerical integration:
$$\int_a^b f(x)\,dx \approx \sum_{k=1}^K \omega_k f(x_k).$$
Origin of problems in numerical analysis IV

If $A$ is a nonlinear operator: nonlinear system of equations: find $u \in \mathbb{R}^n$ such that
$$A(u) = b, \qquad A : u \in \mathbb{R}^n \mapsto A(u) \in \mathbb{R}^n.$$
Remedy: iterative solution techniques, which transform the solution of a nonlinear equation into the solution of a sequence of linear equations.

Eigenproblems: find $(u, \lambda) \in \mathbb{C}^n \times \mathbb{C}$ such that
$$Au = \lambda u \qquad \text{or} \qquad Au = \lambda B u,$$
where $A, B \in \mathbb{C}^{n\times n}$ are matrices. Example: $-\frac{d}{dx}\left(\frac{du}{dx}\right) = \lambda u$ for $x \in (0,1)$, $u(0) = u(1) = 0$.
Origin of problems in numerical analysis V

Example (eigenmodes of a beam): wave equation: find $u(x,t)$ such that
$$\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} = 0 \quad \text{for } x \in (0,1),\ t > 0, \qquad u(0,t) = u(1,t) = 0,$$
for which we search solutions of the form $u(x,t) = w(x)\cos(\omega t)$, with $w \in V_n$ such that
$$\int_0^1 \frac{\partial v}{\partial x}\frac{\partial w}{\partial x}\,dx = \omega^2 \int_0^1 v\,w\,dx \quad \forall v \in V_n.$$

Ordinary differential equations in time:
$$\frac{du}{dt}(t) = A u(t) + b(t).$$
References for the course:

G. Allaire, Numerical Analysis and Optimization. A clear and simple presentation of all the ingredients of the course, with additional material for the numerical solution of PDEs and optimization problems (a natural continuation of the course).

G. Allaire and S. M. Kaber, Numerical Linear Algebra, Springer. Materials for the chapters Linear algebra, Linear systems, Eigenvalues.

E. Süli and D. Mayers, An Introduction to Numerical Analysis, Cambridge University Press. Materials for the chapters Nonlinear equations, Approximation/Interpolation.

K. Atkinson and W. Han, Theoretical Numerical Analysis: A Functional Analysis Framework, Springer. A quite abstract introduction to numerical analysis, very instructive, with an introduction to functional analysis.
Part II: Linear algebra
- Matrices
- Reduction of matrices
- Vector and matrix norms
Vector space

Let $V$ be a vector space of finite dimension $n$ over the field $K = \mathbb{R}$ or $\mathbb{C}$. Let $E = (e_1, \dots, e_n)$ be a basis of $V$. A vector $v \in V$ admits a unique decomposition
$$v = \sum_{i=1}^n v_i e_i,$$
where the $(v_i)_{i=1}^n$ are the components of $v$ on the basis $E$. When a basis is chosen and there is no ambiguity, we can identify $V$ with $K^n$ ($\mathbb{R}^n$ or $\mathbb{C}^n$) and write $v = (v_i)_{i=1}^n$, represented by the column vector with entries $v_1, \dots, v_n$. We denote by $v^T$ and $v^H$ the transpose and conjugate transpose of $v$, which are the row vectors
$$v^T = (v_1, \dots, v_n), \qquad v^H = (\bar v_1, \dots, \bar v_n),$$
where $\bar a$ denotes the complex conjugate of $a$.
Canonical inner product

We denote by $\langle \cdot, \cdot \rangle : V \times V \to K$ the canonical inner product, defined for all $u, v \in V$ by
$$\langle u, v \rangle = u^T v = v^T u = \sum_{i=1}^n u_i v_i \ \text{ if } K = \mathbb{R}, \qquad \langle u, v \rangle = u^H v = \overline{v^H u} = \sum_{i=1}^n \bar u_i v_i \ \text{ if } K = \mathbb{C}.$$
It is called the Euclidean inner product if $K = \mathbb{R}$ and the Hermitian inner product if $K = \mathbb{C}$.
Orthogonality

Orthogonality on a vector space $V$ must be thought of with respect to an inner product $\langle \cdot, \cdot \rangle$. If not mentioned, we classically consider the canonical inner product.

Two vectors $u, v \in V$ are said orthogonal with respect to the inner product $\langle \cdot, \cdot \rangle$ if and only if $\langle u, v \rangle = 0$. A vector $v$ is said orthogonal to a linear subspace $U \subset V$, which is denoted $v \perp U$, if and only if $\langle u, v \rangle = 0$ for all $u \in U$. Two linear subspaces $U_1 \subset V$ and $U_2 \subset V$ are said orthogonal, denoted $U_1 \perp U_2$, if $\langle u_1, u_2 \rangle = 0$ for all $u_1 \in U_1$ and $u_2 \in U_2$.

For a given subspace $U \subset V$, we denote by $U^\perp$ its orthogonal complement, which is the largest subspace orthogonal to $U$. The orthogonal complement of a vector $v \in V$ is denoted by $v^\perp$.
Matrices

Definition: The set of matrices with $m$ rows and $n$ columns with entries in the field $K$ is a vector space denoted $\mathcal{M}_{m,n}(K)$ or $K^{m\times n}$.

Let $V$ and $W$ be two vector spaces with dimension $n$ and $m$ respectively, with bases $E = (e_i)_{i=1}^n$ and $F = (f_i)_{i=1}^m$. A linear map $\mathcal{A} : V \to W$ is represented, relatively to those bases, by a matrix $A$ with $m$ rows and $n$ columns,
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix},$$
where the coefficients $a_{ij}$ are such that $\mathcal{A} e_j = \sum_{i=1}^m a_{ij} f_i$. We denote $A_{ij} = a_{ij}$. The $j$-th column of $A$ represents the vector $\mathcal{A} e_j$ in the basis $F$.
Transpose

We denote by $A^T$ the transpose of a real matrix $A = (a_{ij}) \in \mathbb{R}^{m\times n}$, defined by $(A^T)_{ij} = a_{ji}$. We denote by $A^H$ the adjoint (or conjugate transpose) matrix of a complex matrix $A = (a_{ij}) \in \mathbb{C}^{m\times n}$, defined by $(A^H)_{ij} = \bar a_{ji}$. We have the following characterization of $A^T$ and $A^H$:
$$\langle Au, v \rangle = \langle u, A^T v \rangle \quad \forall u \in \mathbb{R}^n,\ v \in \mathbb{R}^m, \qquad \langle Au, v \rangle = \langle u, A^H v \rangle \quad \forall u \in \mathbb{C}^n,\ v \in \mathbb{C}^m.$$
Product

To the composition of two linear maps corresponds the multiplication of the associated matrices. If $A = (a_{ik}) \in K^{m\times q}$ and $B = (b_{kj}) \in K^{q\times n}$, the product $AB \in K^{m\times n}$ is defined by
$$(AB)_{ij} = \sum_{k=1}^q a_{ik} b_{kj}.$$
We have $(AB)^T = B^T A^T$ and $(AB)^H = B^H A^H$.

The set of square matrices $\mathcal{M}_{n,n}(K)$ is simply denoted $\mathcal{M}_n(K) = K^{n\times n}$. In the following, unless it is mentioned, we only consider square matrices.
Inverse

We denote by $I_n$ the identity matrix on $K^{n\times n}$, associated with the identity map from $V$ to $V$. If there is no ambiguity, we simply denote $I_n = I$, with $(I)_{ij} = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta.

A matrix $A$ is invertible if there exists a matrix, denoted $A^{-1}$ (unique if it exists) and called the inverse matrix of $A$, such that $AA^{-1} = A^{-1}A = I$. A matrix which is not invertible is said singular. If $A$ and $B$ are invertible, we have
$$(AB)^{-1} = B^{-1}A^{-1}, \qquad (A^T)^{-1} = (A^{-1})^T, \qquad (A^H)^{-1} = (A^{-1})^H.$$
Particular matrices

Definition: A matrix $A \in \mathbb{C}^{n\times n}$ is said
- Hermitian if $A = A^H$,
- normal if $AA^H = A^H A$,
- unitary if $AA^H = A^H A = I$.

Definition: A matrix $A \in \mathbb{R}^{n\times n}$ is said
- symmetric if $A = A^T$,
- orthogonal if $AA^T = A^T A = I$.
A matrix $A \in K^{n\times n}$ is said diagonal if $a_{ij} = 0$ for $i \neq j$, and we denote $A = \mathrm{diag}(a_{ii}) = \mathrm{diag}(a_{11}, \dots, a_{nn})$.

A matrix $A$ is said upper triangular if $a_{ij} = 0$ for $i > j$, and lower triangular if $a_{ij} = 0$ for $j > i$:
$$A_{\text{upper}} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ & \ddots & \vdots \\ 0 & & a_{nn} \end{pmatrix}, \qquad A_{\text{lower}} = \begin{pmatrix} a_{11} & & 0 \\ \vdots & \ddots & \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}.$$
Properties of triangular matrices

Let $\mathcal{L}_n \subset K^{n\times n}$ be the set of lower triangular matrices, and $\mathcal{U}_n \subset K^{n\times n}$ be the set of upper triangular matrices.

Theorem:
- If $A, B \in \mathcal{L}_n$, then $AB \in \mathcal{L}_n$. If $A, B \in \mathcal{U}_n$, then $AB \in \mathcal{U}_n$.
- $A \in \mathcal{L}_n$ (or $\mathcal{U}_n$) is invertible if and only if all its diagonal terms are nonzero.
- If $A \in \mathcal{L}_n$, then $A^{-1} \in \mathcal{L}_n$ (if it exists). If $A \in \mathcal{U}_n$, then $A^{-1} \in \mathcal{U}_n$ (if it exists).
Trace

Definition: The trace of a matrix $A \in K^{n\times n}$ is defined as $\mathrm{tr}(A) = \sum_{i=1}^n a_{ii}$.

Property: $\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$ and $\mathrm{tr}(AB) = \mathrm{tr}(BA)$.
Determinant

Let $S_n$ denote the set of permutations of $\{1, \dots, n\}$. For $\sigma \in S_n$, we denote by $\mathrm{sign}(\sigma)$ the signature of the permutation, with $\mathrm{sign}(\sigma) = +1$ (resp. $-1$) if $\sigma$ is an even (resp. odd) permutation of $\{1, \dots, n\}$.

Definition: The determinant of a matrix $A \in K^{n\times n}$ is defined as
$$\det A = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, a_{\sigma(1)1} \cdots a_{\sigma(n)n}.$$

Property: $\det(AB) = \det(BA) = \det A \,\det B$.
Image, Kernel I

Definition: The image of $A \in K^{m\times n}$ is a linear subspace of $K^m$ defined by $\mathrm{Im}\,A = \{ Av,\ v \in K^n \}$. The rank of a matrix $A$, denoted $\mathrm{rank}(A)$, is the dimension of $\mathrm{Im}\,A$:
$$\mathrm{rank}(A) = \dim(\mathrm{Im}\,A) \leq \min(m, n).$$

Definition: The kernel of $A \in K^{m\times n}$ is a linear subspace of $K^n$ defined by $\mathrm{Ker}\,A = \{ v \in K^n,\ Av = 0 \}$. The dimension of $\mathrm{Ker}\,A$ is called the nullity of $A$.

Property: $\dim(\mathrm{Im}\,A) + \dim(\mathrm{Ker}\,A) = n$.
Image, Kernel II

Property: For $A \in \mathbb{R}^{m\times n}$,
$$\mathrm{Ker}\,A^T = (\mathrm{Im}\,A)^\perp, \quad \mathrm{Ker}\,A = (\mathrm{Im}\,A^T)^\perp, \quad \mathrm{Ker}\,A^T \oplus \mathrm{Im}\,A = \mathbb{R}^m, \quad \mathrm{Ker}\,A \oplus \mathrm{Im}\,A^T = \mathbb{R}^n.$$

Proof: Let us prove that $\mathrm{Ker}\,A^T = (\mathrm{Im}\,A)^\perp$. First, if $u \in \mathrm{Ker}\,A^T$, then $v^T A^T u = 0$ for all $v$, i.e. $u^T y = 0$ for all $y \in \mathrm{Im}\,A$, so $\mathrm{Ker}\,A^T \subset (\mathrm{Im}\,A)^\perp$. Secondly, if $u \perp \mathrm{Im}\,A$, then $u^T A v = v^T A^T u = 0$ for all $v$, so $A^T u = 0$ and $(\mathrm{Im}\,A)^\perp \subset \mathrm{Ker}\,A^T$. This implies $\mathrm{Ker}\,A^T \oplus \mathrm{Im}\,A = \mathbb{R}^m$. Exercise: finish the proof.
Eigenvalues and eigenvectors I

Definition: The eigenvalues $\lambda_i(A)$, $1 \leq i \leq n$, of a matrix $A \in K^{n\times n}$ are the $n$ roots of its characteristic polynomial $p_A : \mathbb{C} \to \mathbb{C}$,
$$p_A(\lambda) = \det(A - \lambda I).$$
The eigenvalues may be real or complex. An eigenvalue is said of multiplicity $k$ if it is a root of $p_A$ with multiplicity $k$. The spectrum of the matrix $A$ is the following subset of the complex plane:
$$\mathrm{sp}(A) = \{\lambda_i(A)\}_{i=1}^n.$$
We have $\mathrm{tr}(A) = \sum_{i=1}^n \lambda_i(A)$ and $\det A = \prod_{i=1}^n \lambda_i(A)$.
Eigenvalues and eigenvectors II

Definition: The spectral radius $\rho(A)$ of a matrix $A$ is defined by
$$\rho(A) = \max_{1 \leq i \leq n} |\lambda_i(A)|.$$

Property: $\lambda \in \mathrm{sp}(A)$ if and only if the following equation has at least one nontrivial solution $v \in \mathbb{C}^n$: $Av = \lambda v$.

Definition: For $\lambda \in \mathrm{sp}(A)$, a vector $v \neq 0$ satisfying $Av = \lambda v$ is called an eigenvector of $A$ associated with $\lambda$. The linear subspace $\{ v \in \mathbb{C}^n,\ Av = \lambda v \}$, with dimension at least one, is called the eigenspace associated with $\lambda$.
Reduction of matrices

Let $V$ be a vector space with dimension $n$ and $\mathcal{A} : V \to V$ a linear map on $V$. Let $A$ be the matrix associated with $\mathcal{A}$ relatively to the basis $E = (e_i)_{i=1}^n$ of $V$. Relatively to another basis $F = (f_i)_{i=1}^n$ of $V$, the map $\mathcal{A}$ is associated with another matrix
$$B = P^{-1} A P,$$
where $P$ is an invertible matrix whose $j$-th column is composed of the components of $f_j$ on the basis $E$.

Definition: Matrices $A$ and $B$ are said similar when they represent the same linear map in two different bases, i.e. when there exists an invertible matrix $P$ such that $B = P^{-1} A P$.
Theorem (Triangularization): For $A \in \mathbb{C}^{n\times n}$, there exists a unitary matrix $U$ such that $U^H A U$ is a triangular matrix, called the Schur form of $A$ (upper triangular, say).

Remark: The previous theorem says that there exists a nested sequence of $A$-invariant subspaces $V_1 \subset V_2 \subset \dots \subset V_n = \mathbb{C}^n$, and that there exists an orthonormal basis of $\mathbb{C}^n$ such that $V_i$ is the span of the first $i$ basis vectors.

Theorem (Diagonalization): For a normal matrix $A \in \mathbb{C}^{n\times n}$, i.e. such that $A^H A = A A^H$, there exists a unitary matrix $U$ such that $U^H A U$ is diagonal. For a symmetric matrix $A \in \mathbb{R}^{n\times n}$, there exists an orthogonal matrix $O$ such that $O^T A O$ is diagonal.
Singular values and vectors

Definition: The singular values of $A \in K^{m\times n}$ are the square roots of the eigenvalues of $A^H A \in K^{n\times n}$. Singular values of $A$ are real nonnegative numbers.

Definition: $\sigma \geq 0$ is a singular value of $A$ if and only if there exist normalized vectors $u \in K^m$ and $v \in K^n$ such that, simultaneously,
$$Av = \sigma u \qquad \text{and} \qquad A^H u = \sigma v.$$
$u$ and $v$ are respectively called the left and right singular vectors of $A$ associated with the singular value $\sigma$.
Singular value decomposition (SVD) I

Theorem: For $A \in K^{m\times n}$, there exist two orthogonal (if $K = \mathbb{R}$) or unitary (if $K = \mathbb{C}$) matrices $U \in K^{m\times m}$ and $V \in K^{n\times n}$ such that
$$A = U S V^H,$$
where $S = \mathrm{diag}(\sigma_i) \in \mathbb{R}^{m\times n}$ is a diagonal matrix, with $\sigma_i$ the singular values of $A$. The columns of $U$ are the left singular vectors of $A$, and the columns of $V$ are the right singular vectors of $A$.

$S = \mathrm{diag}(\sigma_i) \in \mathbb{R}^{m\times n}$ must be interpreted as follows ($0_{k\times l}$ is a $k \times l$ matrix with zero entries):
$$S = \begin{pmatrix} \mathrm{diag}(\sigma_1, \dots, \sigma_n) \\ 0_{(m-n)\times n} \end{pmatrix} \ \text{if } n < m, \qquad S = \begin{pmatrix} \mathrm{diag}(\sigma_1, \dots, \sigma_m) & 0_{m\times(n-m)} \end{pmatrix} \ \text{if } n > m.$$
Truncated singular value decomposition (SVD)

The SVD of $A$ can be written
$$A = U S V^H = \sum_{i=1}^{\min(n,m)} \sigma_i u_i v_i^H.$$
After ordering the singular values by decreasing values $\sigma_1 \geq \sigma_2 \geq \dots$, the matrix $A$ can be approximated by a rank-$K$ matrix $A_K$ obtained by a truncation of the SVD:
$$A_K = \sum_{i=1}^K \sigma_i u_i v_i^H.$$
We have the following error estimate:
$$\|A - A_K\|_F^2 = \sum_{i=K+1}^{\min(n,m)} \sigma_i^2 \leq \|A\|_F^2.$$
Illustration: SVD for data compression

[Figure: initial image, decay of the singular values, and rank-K SVD reconstructions for increasing ranks K.]
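The compression illustrated above is easy to reproduce. The following is a minimal sketch in Python/NumPy (not from the lecture; the random matrix stands in for an image): it computes the best rank-$K$ approximation and checks the Frobenius error estimate.

```python
import numpy as np

def truncated_svd_approx(A, K):
    """Best rank-K approximation of A in the Frobenius norm (Eckart-Young)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)   # s sorted decreasingly
    return U[:, :K] @ np.diag(s[:K]) @ Vh[:K, :]

A = np.random.rand(256, 256)            # stand-in for a grayscale image
A5 = truncated_svd_approx(A, 5)

# The Frobenius error equals the root of the sum of the discarded sigma_i^2.
s = np.linalg.svd(A, compute_uv=False)
print(np.linalg.norm(A - A5, 'fro'), np.sqrt(np.sum(s[5:]**2)))
```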
Vector norms
Definition: A norm on a vector space $V$ is a map $\|\cdot\| : V \to \mathbb{R}_+$ verifying:
- $\|v\| = 0$ if and only if $v = 0$,
- $\|\lambda v\| = |\lambda| \|v\|$ for all $v \in V$ and $\lambda \in K$,
- $\|u + v\| \leq \|u\| + \|v\|$ for all $u, v \in V$ (triangle inequality).

Example: For $V = K^n$:
- 1-norm: $\|v\|_1 = \sum_{i=1}^n |v_i|$,
- 2-norm: $\|v\|_2 = \left( \sum_{i=1}^n |v_i|^2 \right)^{1/2}$,
- $\infty$-norm: $\|v\|_\infty = \max_{i=1,\dots,n} |v_i|$,
- $p$-norm: $\|v\|_p = \left( \sum_{i=1}^n |v_i|^p \right)^{1/p}$ for $p \geq 1$.
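These norms can be checked numerically. A small sketch using NumPy's np.linalg.norm (the vector is an arbitrary example of ours):

```python
import numpy as np

v = np.array([3.0, -4.0, 12.0])
print(np.linalg.norm(v, 1))        # 1-norm: |3| + |4| + |12| = 19.0
print(np.linalg.norm(v, 2))        # 2-norm: sqrt(9 + 16 + 144) = 13.0
print(np.linalg.norm(v, np.inf))   # infinity-norm: max |v_i| = 12.0
print(np.linalg.norm(v, 3))        # general p-norm with p = 3
```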
Useful inequalities

Let $\langle \cdot, \cdot \rangle$ denote the canonical inner product.

Theorem (Cauchy-Schwarz inequality): $|\langle u, v \rangle| \leq \|u\|_2 \|v\|_2$.

Theorem (Hölder's inequality): Let $p, q \geq 1$ be such that $\frac{1}{p} + \frac{1}{q} = 1$; then $|\langle u, v \rangle| \leq \|u\|_p \|v\|_q$.

Theorem (Minkowski inequality): Let $p \geq 1$; then $\|u + v\|_p \leq \|u\|_p + \|v\|_p$.

The Minkowski inequality is in fact the triangle inequality for the norm $\|\cdot\|_p$.
Matrix norms I

Definition: A norm on $K^{m\times n}$ is a map $\|\cdot\| : K^{m\times n} \to \mathbb{R}_+$ which verifies:
- $\|A\| = 0$ if and only if $A = 0$,
- $\|\lambda A\| = |\lambda| \|A\|$ for all $A \in K^{m\times n}$ and $\lambda \in K$,
- $\|A + B\| \leq \|A\| + \|B\|$ for all $A, B \in K^{m\times n}$ (triangle inequality).

For square matrices ($n = m$), a matrix norm is a norm which satisfies the following additional inequality:
$$\|AB\| \leq \|A\| \|B\| \quad \text{for all } A, B \in K^{n\times n}.$$
An important class of matrix norms is the class of subordinate matrix norms.

Definition (subordinate matrix norm): Given norms on $K^n$ and $K^m$, we can define a natural norm on $K^{m\times n}$, subordinate to the vector norms, defined by
$$\|A\| = \max_{v \in K^n,\ v \neq 0} \frac{\|Av\|}{\|v\|} = \max_{v \in K^n,\ \|v\| = 1} \|Av\|.$$
Matrix norms II

Example: When considering the classical vector norms on $K^n$, we have the following characterizations of the subordinate norms of a square matrix $A \in K^{n\times n}$:
$$\|A\|_1 = \max_{v \neq 0} \frac{\|Av\|_1}{\|v\|_1} = \max_j \sum_i |a_{ij}|, \qquad \|A\|_\infty = \max_{v \neq 0} \frac{\|Av\|_\infty}{\|v\|_\infty} = \max_i \sum_j |a_{ij}|,$$
$$\|A\|_2 = \max_{v \neq 0} \frac{\|Av\|_2}{\|v\|_2} = \sqrt{\rho(A^H A)} = \sqrt{\rho(A A^H)} = \|A^H\|_2.$$
Note that $\|A\|_2$ corresponds to the dominant singular value of $A$.

Property: For every unitary matrix $U$ (i.e. $UU^H = I$), we have $\|A\|_2 = \|AU\|_2 = \|UA\|_2 = \|U^H A U\|_2$. If $A$ is normal (i.e. $AA^H = A^H A$), then $\|A\|_2 = \rho(A)$.
Matrix norms III

Theorem: Let $A$ be a square matrix and $\|\cdot\|$ an arbitrary matrix norm. Then
$$\rho(A) \leq \|A\|.$$
For every $\epsilon > 0$, there exists at least one subordinate matrix norm such that $\|A\| \leq \rho(A) + \epsilon$.
Part III: Systems of linear equations
- Conditioning
- Direct methods: triangular systems; Gauss elimination; LU factorization; Cholesky factorization; Householder method and QR factorization; computational work
- Iterative methods: generalities; Jacobi, Gauss-Seidel, relaxation; projection methods; Krylov subspace methods
The aim is to introduce different strategies for the solution of a system of linear equations
$$Ax = b, \qquad A \in \mathbb{R}^{n\times n}, \quad b \in \mathbb{R}^n.$$
Condition number

Let us consider two systems of equations $Ax = b$ and $Ax' = b + \delta b$, where $\delta b$ is a small perturbation of the right-hand side. For some matrices $A$, we observe that a little modification of the right-hand side leads to a large modification of the solution. This phenomenon is due to a bad conditioning of the matrix $A$. It reveals that, for badly conditioned matrices, if an error is made on the input data (here the right-hand side), the error on the solution may be drastically amplified: the solution of systems of equations obtained with finite precision computers has to be considered carefully, or even not considered as a good solution.
Definition: Let $A \in K^{n\times n}$ be an invertible matrix and let $\|\cdot\|$ be a matrix norm subordinate to the vector norm $\|\cdot\|$. The condition number of $A$ is defined as
$$\mathrm{cond}(A) = \|A\| \,\|A^{-1}\|.$$

Property: Let $b \in K^n$ be the right-hand side of a system and let $\delta A \in K^{n\times n}$ and $\delta b \in K^n$ be perturbations of the matrix $A$ and the vector $b$, with $\|\delta A\| = O(\epsilon)$ and $\|\delta b\| = O(\epsilon)$. If $x$ and $x'$ are the solutions of the systems
$$Ax = b, \qquad (A + \delta A)\,x' = b + \delta b,$$
then
$$\frac{\|x - x'\|}{\|x\|} \leq \mathrm{cond}(A)\left( \frac{\|\delta A\|}{\|A\|} + \frac{\|\delta b\|}{\|b\|} \right) + O(\epsilon^2).$$
Property:
- For every matrix $A$ and every matrix norm, $\mathrm{cond}(A) \geq 1$, $\mathrm{cond}(A) = \mathrm{cond}(A^{-1})$, and $\mathrm{cond}(\alpha A) = \mathrm{cond}(A)$ for $\alpha \neq 0$.
- For every matrix $A$, the condition number $\mathrm{cond}_2(A)$ associated with the norm $\|\cdot\|_2$ verifies
$$\mathrm{cond}_2(A) = \frac{\max_i \sigma_i(A)}{\min_i \sigma_i(A)},$$
where the $\sigma_i(A)$ are the singular values of $A$.
- For a normal matrix $A$,
$$\mathrm{cond}_2(A) = \frac{\max_i |\lambda_i(A)|}{\min_i |\lambda_i(A)|},$$
where the $\lambda_i(A)$ are the eigenvalues of $A$.
- For a unitary or orthogonal matrix $A$, $\mathrm{cond}_2(A) = 1$.
- The condition number $\mathrm{cond}_2(A)$ is invariant through unitary transformations: $\mathrm{cond}_2(A) = \mathrm{cond}_2(AU) = \mathrm{cond}_2(UA) = \mathrm{cond}_2(U^H A U)$ for every unitary matrix $U$.
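The amplification phenomenon is easy to observe numerically. A small sketch (our own example, not from the lecture) using the Hilbert matrix, a classical badly conditioned matrix:

```python
import numpy as np

n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
print(np.linalg.cond(H, 2))     # cond_2 = sigma_max / sigma_min, here ~1e10

b = H @ np.ones(n)              # exact solution is the vector of ones
db = 1e-10 * np.random.randn(n) # tiny perturbation of the data
x  = np.linalg.solve(H, b)
xp = np.linalg.solve(H, b + db)
# relative error on x, amplified by up to cond(H) relative to that on b
print(np.linalg.norm(x - xp) / np.linalg.norm(x))
```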
Direct methods
Principle of direct methods

For solving $Ax = b$, direct methods consist in determining an invertible matrix $M$ such that
$$MAx = Mb$$
is an upper triangular system. This is called the elimination step. Then, a simple backward substitution can be performed to solve this triangular system. This operation corresponds to the solution of a system of equations that is generally easy thanks to the properties of $M$ (diagonal, triangular).

Do not compute the inverse! In practice, the solution $x$ of $Ax = b$ is not obtained by first computing the inverse $A^{-1}$ and then computing the matrix-vector product $A^{-1} b$. Indeed, computing $A^{-1}$ would be equivalent to solving $n$ systems of linear equations. For simplicity, we sometimes use the notation $M^{-1} x$, but the inverse is never computed in practice.
Triangular systems of equations I

If $A$ is lower triangular, the system
$$\begin{pmatrix} a_{11} & & 0 \\ \vdots & \ddots & \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$$
is solved by a forward substitution.

Algorithm (forward substitution for a lower triangular system):
Step 1: $x_1 = b_1 / a_{11}$.
Step $i$, for $i = 2, \dots, n$: $x_i = \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j \right) / a_{ii}$.
Triangular systems of equations II

If $A$ is upper triangular, the system
$$\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ & \ddots & \vdots \\ 0 & & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$$
is solved by a backward substitution.

Algorithm (backward substitution for an upper triangular system):
Step 1: $x_n = b_n / a_{nn}$.
Step $i$, for $i = n-1, \dots, 1$: $x_i = \left( b_i - \sum_{j=i+1}^{n} a_{ij} x_j \right) / a_{ii}$.
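Both substitution algorithms are direct to transcribe. A minimal Python/NumPy sketch (our own function names, assuming nonzero diagonal entries); each costs $O(n^2)$ operations:

```python
import numpy as np

def forward_substitution(A, b):
    """Solve Ax = b for lower triangular A."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - A[i, :i] @ x[:i]) / A[i, i]
    return x

def backward_substitution(A, b):
    """Solve Ax = b for upper triangular A."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```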
Gauss elimination I

Definition (pivoting matrix): A pivoting matrix $P(i,j)$, associated with a linear mapping written in a basis $E = (e_i)_{i=1}^n$, is defined as follows:
$$P(i,j) = I - (e_i - e_j)(e_i - e_j)^H.$$
$P(i,j)\,A$ is the matrix $A$ with permuted lines $i$ and $j$, and $A\,P(i,j)$ is the matrix $A$ with permuted columns $i$ and $j$.

We now describe the Gauss elimination procedure. Let $A^{(1)} = A = (a_{ij}^{(1)})$.

Step 1: Select a nonzero element $a_{i1}^{(1)}$ of the first column and permute lines $1$ and $i$: let $P^{(1)} = P(1, i)$ and set $P^{(1)} A^{(1)} = \tilde A^{(1)} = (\tilde a_{ij}^{(1)})$.
Gauss elimination II

Let us introduce the matrix
$$E^{(1)} = \begin{pmatrix} 1 & & & \\ -\tilde a_{21}^{(1)}/\tilde a_{11}^{(1)} & 1 & & \\ \vdots & & \ddots & \\ -\tilde a_{n1}^{(1)}/\tilde a_{11}^{(1)} & & & 1 \end{pmatrix}, \quad \text{such that} \quad A^{(2)} = E^{(1)} \tilde A^{(1)} = \begin{pmatrix} \tilde a_{11}^{(1)} & \tilde a_{12}^{(1)} & \cdots & \tilde a_{1n}^{(1)} \\ 0 & & & \\ \vdots & & \ast & \\ 0 & & & \end{pmatrix}.$$
We have
$$\det A^{(2)} = \det(E^{(1)}) \det(P^{(1)}) \det(A) = \pm \det A$$
($-\det A$ if a line permutation has been made, $\det A$ if not). Therefore $A^{(2)}$ is invertible if $A$ is, and so is the submatrix $(A^{(2)}_{ij})_{2 \leq i,j \leq n}$.

Step 2: We can then operate as in step 1 on this submatrix to eliminate the subdiagonal elements of the second column.
Gauss elimination III

Step $k$: After $k-1$ steps, we have the matrix
$$A^{(k)} = E^{(k-1)} P^{(k-1)} \cdots E^{(1)} P^{(1)} A = \begin{pmatrix} a_{11}^{(k)} & \cdots & \cdots & \cdots & a_{1n}^{(k)} \\ & \ddots & & & \vdots \\ & & a_{kk}^{(k)} & \cdots & a_{kn}^{(k)} \\ & & \vdots & & \vdots \\ & & a_{nk}^{(k)} & \cdots & a_{nn}^{(k)} \end{pmatrix}.$$
After an eventual pivoting with a pivoting matrix $P^{(k)}$, giving $\tilde A^{(k)} = P^{(k)} A^{(k)}$, we define $A^{(k+1)} = E^{(k)} \tilde A^{(k)}$, with the line operation matrix
$$E^{(k)} = I - \sum_{i=k+1}^n \frac{\tilde a_{ik}^{(k)}}{\tilde a_{kk}^{(k)}}\, e_i e_k^T,$$
i.e. the identity matrix with the entries $-\tilde a_{ik}^{(k)}/\tilde a_{kk}^{(k)}$ in column $k$, below the diagonal.
Gauss elimination IV

Last step: After $n-1$ steps, we obtain an upper triangular matrix
$$A^{(n)} = E^{(n-1)} P^{(n-1)} \cdots E^{(1)} P^{(1)} A.$$
The matrix $M = E^{(n-1)} P^{(n-1)} \cdots E^{(1)} P^{(1)}$ is then an invertible matrix such that $MA$ is upper triangular.
Gauss elimination V

Remark (choice of pivoting): In order to avoid dramatic round-off errors with finite precision computers, we adopt one of the following pivoting strategies.

Partial pivoting: At step $k$, we select $P^{(k)} = P(k, i)$ with $i$ such that
$$|a_{ik}^{(k)}| = \max_{k \leq l \leq n} |a_{lk}^{(k)}|.$$

Total pivoting: At step $k$, we select $i$ and $j$ such that
$$|a_{ij}^{(k)}| = \max_{k \leq l,\, m \leq n} |a_{lm}^{(k)}|,$$
and we permute lines and columns by defining $A^{(k)} \leftarrow P(k,i)\, A^{(k)}\, P(j,k)$.
Gauss elimination VI

Remark: In practice, for solving a system $Ax = b$, we do not compute the matrix $M$. We rather operate simultaneously on $b$ by computing
$$Mb = b^{(n)} = E^{(n-1)} P^{(n-1)} \cdots E^{(1)} P^{(1)} b.$$
Then, we solve the triangular system $MAx = Mb$, or equivalently $A^{(n)} x = b^{(n)}$.

Remark (computing the determinant of a matrix): Gauss elimination is an efficient technique for computing the determinant of a matrix. Indeed,
$$\det A = \pm \prod_{i=1}^n a_{ii}^{(n)},$$
where the sign depends on the number of pivoting operations that have been performed.
Gauss elimination VII

Theorem: For $A \in K^{n\times n}$, invertible or not, there exists at least one invertible matrix $M$ such that $MA$ is an upper triangular matrix.

Proof: The Gauss elimination procedure is a constructive proof of this theorem. If at step $k$ all elements $a_{ik}^{(k)} = 0$ for $k \leq i \leq n$, we can set $E^{(k)} = I$, $P^{(k)} = I$ and go to the next step; in this case, the matrix $A$ is singular. Otherwise, the procedure runs as described. That is the reason why Gauss elimination can be used when no additional information is given on the matrix.

Computational work of Gauss elimination: $O(n^3)$. For an arbitrary matrix, it seems that this computational work in $O(n^3)$ is near the optimal that we can expect.
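A compact sketch of the whole procedure (our own code, Python/NumPy): Gauss elimination with partial pivoting operating simultaneously on $b$, followed by backward substitution.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gauss elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting: largest |a_ik|
        A[[k, p]] = A[[p, k]]                 # swap lines k and p in A and b
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]             # multiplier of line k
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # backward substitution on the resulting upper triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```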
LU factorization I

The LU factorization of a matrix consists in constructing lower and upper triangular matrices $L$ and $U$ such that $A = LU$. In fact, this factorization is obtained by the Gauss elimination procedure. Let us consider the Gauss elimination without pivoting; it is possible if at each step $k$, $a_{kk}^{(k)} \neq 0$. By letting $A^{(k+1)} = E^{(k)} A^{(k)}$, we let
$$M = E^{(n-1)} \cdots E^{(1)}$$
and obtain $MA = U$, where $U$ is the desired upper triangular matrix
$$U = \begin{pmatrix} a_{11}^{(1)} & \cdots & a_{1n}^{(1)} \\ & \ddots & \vdots \\ & & a_{nn}^{(n)} \end{pmatrix}.$$
$M$ being a product of lower triangular matrices, it is a lower triangular matrix and so is its inverse $M^{-1}$. We then have the desired decomposition with $L = M^{-1} = (E^{(1)})^{-1} \cdots (E^{(n-1)})^{-1}$.
LU factorization II

The matrix $L = (l_{ij})$ is directly obtained from the matrices $E^{(k)}$: writing
$$E^{(k)} = I - \ell_k e_k^T, \qquad \ell_k = (0, \dots, 0, l_{k+1,k}, \dots, l_{nk})^T, \qquad l_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}},$$
we have $(E^{(k)})^{-1} = I + \ell_k e_k^T$, so that $L$ is the unit lower triangular matrix whose strictly lower entries are the multipliers $l_{ik}$.
LU factorization III

Theorem: Let $A \in K^{n\times n}$ be such that the diagonal (leading principal) submatrices
$$\Delta_k = \begin{pmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{k1} & \cdots & a_{kk} \end{pmatrix}, \qquad 1 \leq k \leq n-1,$$
are invertible. Then, there exists a lower triangular matrix $L$ and an upper triangular matrix $U$ such that $A = LU$. If we further impose that the diagonal elements of $L$ are equal to $1$, this decomposition is unique.

Proof: The condition on the invertibility of the submatrices ensures that at step $k$, the diagonal term $a_{kk}^{(k)}$ is nonzero, and therefore that pivoting can be omitted.
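A sketch of the resulting algorithm (our own code, assuming the hypotheses of the theorem so that no pivoting is needed); it returns $L$ with unit diagonal and $U$ upper triangular with $A = LU$:

```python
import numpy as np

def lu_factor(A):
    """LU factorization without pivoting (Doolittle variant)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        L[k+1:, k] = U[k+1:, k] / U[k, k]            # multipliers l_ik
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:]) # eliminate column k
    return L, U   # check: np.allclose(L @ U, A)
```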
Cholesky factorization I

Theorem: If $A \in \mathbb{R}^{n\times n}$ is a symmetric positive definite matrix, there exists at least one lower triangular matrix $B = (b_{ij}) \in \mathbb{R}^{n\times n}$ such that
$$A = BB^T.$$
If we further impose that the diagonal elements $b_{ii} > 0$, the decomposition is unique.
Cholesky factorization II

Proof: We first show that the diagonal submatrices $\Delta_k = (a_{ij})_{1 \leq i,j \leq k}$ are positive definite. Therefore, they are invertible and there exists a unique LU factorization $A = LU$ such that $L$ has unit diagonal terms. Since the $\Delta_k$ are positive definite, we have $\det \Delta_k = \prod_{i \leq k} u_{ii} > 0$ for all $k$, so every $u_{ii} > 0$. We then define the diagonal matrix $D = \mathrm{diag}(\sqrt{u_{ii}})$ and we write $A = (LD)(D^{-1}U) = BC$, where $B = LD$ and $C = D^{-1}U$ both have diagonal terms $b_{ii} = c_{ii} = \sqrt{u_{ii}}$. The symmetry of the matrix $A$ imposes that $BC = C^T B^T$, and therefore
$$C (B^T)^{-1} = B^{-1} C^T,$$
where the left-hand side is upper triangular with unit diagonal and the right-hand side is lower triangular with unit diagonal. This last equality is only possible if $C(B^T)^{-1} = I$, i.e. $C = B^T$. Exercise: prove the uniqueness of the decomposition.
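The constructive version of this result is the classical Cholesky algorithm. A minimal sketch (our own code), computing the lower triangular $B$ with positive diagonal such that $A = BB^T$:

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization of a symmetric positive definite matrix A."""
    n = A.shape[0]
    B = np.zeros_like(A, dtype=float)
    for j in range(n):
        # diagonal term: b_jj = sqrt(a_jj - sum_k b_jk^2)
        B[j, j] = np.sqrt(A[j, j] - B[j, :j] @ B[j, :j])
        for i in range(j + 1, n):
            # below-diagonal terms of column j
            B[i, j] = (A[i, j] - B[i, :j] @ B[j, :j]) / B[j, j]
    return B   # compare with np.linalg.cholesky(A)
```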
Householder matrices

Definition: For $v$ a nonzero vector in $\mathbb{C}^n$, we introduce the following matrix, called the Householder matrix associated with $v$:
$$H(v) = I - 2\,\frac{v v^H}{v^H v}.$$
We will consider, although incorrect, that the identity $I$ is a Householder matrix (associated with $v = 0$).

Theorem: For $x = (x_i)_{i=1}^n \in \mathbb{C}^n$, there exist two Householder matrices $H$ such that $(Hx)_i = 0$ for $i \geq 2$.

Proof: Denoting by $e_1$ the first basis vector of $\mathbb{C}^n$, one verifies that the two Householder matrices $H(v)$ associated with the vectors
$$v = x \pm \|x\|_2\, e^{i\theta} e_1,$$
where $\theta \in \mathbb{R}$ is the argument of $x_1 \in \mathbb{C}$, satisfy $H(v)\,x = \mp \|x\|_2\, e^{i\theta} e_1$.
Householder method I

The Householder method for solving $Ax = b$ consists in finding $n-1$ Householder matrices $(H_k)$ such that $H_{n-1} \cdots H_1 A$ is upper triangular. Then, we solve the following triangular system by backward substitution:
$$H_{n-1} \cdots H_1 A\, x = H_{n-1} \cdots H_1 b.$$
Suppose that $A^{(k)} = H_{k-1} \cdots H_1 A$ is of the form
$$A^{(k)} = \begin{pmatrix} a_{11}^{(k)} & \cdots & \cdots & \cdots & a_{1n}^{(k)} \\ & \ddots & & & \vdots \\ & & a_{kk}^{(k)} & \cdots & a_{kn}^{(k)} \\ & & \vdots & & \vdots \\ & & a_{nk}^{(k)} & \cdots & a_{nn}^{(k)} \end{pmatrix}.$$
Let $c = (c_i) \in \mathbb{C}^{n-k+1}$ be the vector with components $c_i = a_{k-1+i,\,k}^{(k)}$. There exists a Householder matrix $H(\tilde v_k)$, with $\tilde v_k \in \mathbb{C}^{n-k+1}$, such that $H(\tilde v_k)\,c$ has zero components except the first one.
Householder method II

We let $v_k = \begin{pmatrix} 0 \\ \tilde v_k \end{pmatrix} \in \mathbb{C}^n$ and we denote by $H_k = H(v_k)$ the Householder matrix associated with $v_k$. Let us note that
$$H_k = \begin{pmatrix} I_{k-1} & 0 \\ 0 & H(\tilde v_k) \end{pmatrix}.$$
Performing this operation for $k = 1, \dots, n-1$, we obtain the desired upper triangular matrix $A^{(n)} = H_{n-1} \cdots H_1 A$.
QR factorization I

The QR factorization is a matrix interpretation of the Householder method.

Theorem: For $A \in K^{n\times n}$, there exist a unitary matrix $Q \in K^{n\times n}$ and an upper triangular matrix $R \in K^{n\times n}$ such that
$$A = QR.$$
Moreover, if $A$ is invertible, one can choose the diagonal elements of $R$ positive; then, the corresponding QR factorization is unique.
QR factorization II

Proof: The previous Householder construction proves the existence of an upper triangular matrix $\tilde R = H_{n-1} \cdots H_1 A$. The matrix $\tilde Q = (H_{n-1} \cdots H_1)^{-1} = H_1^{-1} \cdots H_{n-1}^{-1} = H_1 \cdots H_{n-1}$ is unitary (recall that the $H_k$ are unitary and Hermitian, so $H_k^{-1} = H_k$). We can then show the existence of a QR factorization $A = \tilde Q \tilde R$. If $A$ is invertible, let us denote by $\theta_k \in \mathbb{R}$ the arguments of the diagonal elements $\tilde r_{kk} = |\tilde r_{kk}| e^{i\theta_k}$, and let $D = \mathrm{diag}(e^{i\theta_k})$. The matrix $Q = \tilde Q D$ is still unitary, and the matrix $R = D^{-1} \tilde R$ is still upper triangular, with all its diagonal elements $r_{kk} = |\tilde r_{kk}| > 0$. We then have the QR factorization $A = QR$. If $A \in \mathbb{R}^{n\times n}$, then $Q$ is an orthogonal matrix and $R \in \mathbb{R}^{n\times n}$. The uniqueness of this decomposition is left as an exercise.
Computational complexity

With classical algorithms:

Algorithm    Operations
LU           O(n^3)
Cholesky     O(n^3)
QR           O(n^3)
Iterative methods
Basic iterative methods I

For the solution of a linear system of equations $Ax = b$, basic iterative methods consist in constructing a sequence $(x^k)_{k \geq 0}$ defined by
$$x^{k+1} = Bx^k + c$$
from an initial vector $x^0$. The matrix $B$ and the vector $c$ are to be defined such that the iterative method converges towards the solution $x$, i.e.
$$\lim_{k\to\infty} x^k = x.$$
$B$ and $c$ are chosen such that $I - B$ is invertible and such that $x$ is the unique solution of $x = Bx + c$.

Theorem: Let $B \in K^{n\times n}$. The following assertions are equivalent:
1. $\lim_{k\to\infty} B^k = 0$;
2. $\lim_{k\to\infty} B^k v = 0$ for all $v$;
3. $\rho(B) < 1$;
4. $\|B\| < 1$ for at least one subordinate matrix norm.
Basic iterative methods II

Theorem: The following assertions are equivalent:
(i) the iterative method is convergent;
(ii) $\rho(B) < 1$;
(iii) $\|B\| < 1$ for at least one subordinate matrix norm.

Proof: The iterative method is convergent if and only if $\lim_{k\to\infty} e^k = 0$, with $e^k = x^k - x = B^k e^0$. The proof then results from the previous theorem. Consequence: if $\rho(B) \geq 1$, there exists a vector $v \neq 0$ such that $Bv = \lambda v$ with $|\lambda| \geq 1$, and then $B^k v = \lambda^k v$ does not converge towards $0$, a contradiction.
Jacobi, Gauss-Seidel, Relaxation (SOR) I

We decompose $A$ under the form
$$A = M - N,$$
where $M$ is an invertible matrix. Then $Ax = b$ is equivalent to $Mx = Nx + b$, and we compute the sequence
$$x^{k+1} = M^{-1}N x^k + M^{-1}b = Bx^k + c.$$
In practice, at each iteration, we solve the system $Mx^{k+1} = Nx^k + b$. The method is then efficient if $M$ has a simple form (diagonal or triangular).

Definition: We decompose $A = D - E - F$, where $D$ is the diagonal part of $A$, and $-E$ and $-F$ its strict lower and upper parts.
Jacobi, Gauss-Seidel, Relaxation (SOR) II

Definition (Jacobi): $M = D$, $N = E + F$.

Definition (Gauss-Seidel): $M = D - E$, $N = F$.

Definition (Successive Over-Relaxation, SOR): $M = \dfrac{D}{\omega} - E$, $N = \dfrac{1-\omega}{\omega}D + F$.
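A minimal sketch of the Jacobi iteration (our own code; the stopping criterion is a common choice, not prescribed by the lecture). With $M = D$ and $N = E + F = D - A$, the iteration reads $x^{k+1} = D^{-1}(b - (A - D)x^k)$:

```python
import numpy as np

def jacobi(A, b, x0, maxiter=1000, tol=1e-10):
    """Jacobi iteration for Ax = b (requires nonzero diagonal)."""
    D = np.diag(A)                  # diagonal part of A (as a vector)
    R = A - np.diagflat(D)          # off-diagonal part, equal to -(E + F)
    x = x0.astype(float).copy()
    for _ in range(maxiter):
        x_new = (b - R @ x) / D     # solve D x_new = N x + b
        if np.linalg.norm(x_new - x) < tol * np.linalg.norm(b):
            return x_new
        x = x_new
    return x
```

Gauss-Seidel and SOR follow the same pattern with a triangular solve in place of the diagonal division.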
Convergence results I

Theorem: Let $A$ be a positive definite Hermitian matrix, decomposed under the form $A = M - N$ with $M$ invertible. If the matrix $M^H + N$ is positive definite, then $\rho(M^{-1}N) < 1$.
Convergence results II

Proof: From the previous theorem, we know that it suffices to find a matrix norm for which $\|M^{-1}N\| < 1$. Let us first note that $M^H + N$ is Hermitian, since
$$(M^H + N)^H = M + N^H = (A + N) + N^H = M^H + N,$$
using $M^H = A + N^H$. We will show the property for the matrix norm subordinate to the vector norm $\|v\| = (v^H A v)^{1/2}$. We have
$$\|M^{-1}N\| = \|I - M^{-1}A\| = \sup_{\|v\| = 1} \|v - M^{-1}Av\|.$$
Denoting $w = M^{-1}Av$, we have, for $v$ such that $\|v\| = 1$ (and then $w \neq 0$),
$$\|v - w\|^2 = v^H A v - v^H A w - w^H A v + w^H A w = 1 - w^H (M^H + N) w < 1.$$
The function $v \in \mathbb{C}^n \mapsto \|v - M^{-1}Av\| \in \mathbb{R}$ is continuous on the unit sphere, which is a compact set, and therefore the supremum is reached, so that $\|M^{-1}N\| < 1$.
Convergence results III

Theorem (sufficient condition for convergence of relaxation): If $A$ is Hermitian positive definite, the relaxation method converges if $0 < \omega < 2$.

Proof: We show that
$$M^H + N = \frac{D^H}{\omega} - E^H + \frac{1-\omega}{\omega}D + F = \frac{2-\omega}{\omega} D,$$
using $D^H = D$ and $E^H = F$ for a Hermitian $A$. Since $A$ is positive definite, we have, for the canonical basis vectors $v_i$, $v_i^H A v_i = v_i^H D v_i = d_{ii} > 0$, so $D$ is positive definite. The matrix $M^H + N$ is then Hermitian positive definite if and only if $0 < \omega < 2$, and the proof ends with the previous theorem.

Theorem (necessary condition for convergence of relaxation): The spectral radius of the matrix $B_\omega = M^{-1}N$ of the relaxation method verifies $\rho(B_\omega) \geq |1 - \omega|$, and therefore the relaxation method converges only if $0 < \omega < 2$.
Convergence results IV

Proof: We have
$$B_\omega = \left( \frac{D}{\omega} - E \right)^{-1} \left( \frac{1-\omega}{\omega}D + F \right),$$
and then
$$\det B_\omega = \frac{\det\left( \frac{1-\omega}{\omega}D + F \right)}{\det\left( \frac{D}{\omega} - E \right)} = (1 - \omega)^n.$$
Then
$$|1 - \omega| = \left| \prod_{i=1}^n \lambda_i(B_\omega) \right|^{1/n} \leq \rho(B_\omega).$$
Projection methods I

We consider a real system of equations $Ax = b$. Projection techniques consist in searching an approximate solution $\tilde x$ in a subspace $\mathcal{V}$ of $\mathbb{R}^n$. The approximate solution is then defined by orthogonality constraints on the residual:
$$\tilde x \in \mathcal{V}, \qquad b - A\tilde x \perp \mathcal{W},$$
where $\mathcal{W}$ is a subspace of $\mathbb{R}^n$ with the same dimension as $\mathcal{V}$. $\tilde x$ is called a projection of $x$ onto the subspace $\mathcal{V}$ and parallel to the subspace $\mathcal{W}$. The case $\mathcal{V} = \mathcal{W}$ corresponds to an orthogonal projection, and the orthogonality constraint is called Galerkin orthogonality. The case $\mathcal{V} \neq \mathcal{W}$ corresponds to an oblique projection, and the orthogonality constraint is called Petrov-Galerkin orthogonality.

Let $V = (v_1, \dots, v_m)$ and $W = (w_1, \dots, w_m)$ define bases of $\mathcal{V}$ and $\mathcal{W}$. The approximation is then defined by $\tilde x = Vy$, with $y \in \mathbb{R}^m$ such that
$$W^T A V y = W^T b \qquad \Leftrightarrow \qquad y = (W^T A V)^{-1} W^T b.$$
Projection methods II

Projection method: until convergence,
1. select $V = (v_1, \dots, v_m)$ and $W = (w_1, \dots, w_m)$;
2. $r \leftarrow b - Ax$;
3. $y \leftarrow (W^T A V)^{-1} W^T r$;
4. $x \leftarrow x + Vy$.

Subspaces must be chosen such that $W^T A V$ is nonsingular. Two important particular choices satisfy this property.

Theorem: $W^T A V$ is nonsingular for either one of the following conditions:
- $A$ is positive definite and $\mathcal{V} = \mathcal{W}$;
- $A$ is nonsingular and $\mathcal{W} = A\mathcal{V}$.
Projection methods III

Theorem: Assume that $A$ is symmetric positive definite and $\mathcal{V} = \mathcal{W}$. Then, $\tilde x \in \mathcal{V}$ is such that $b - A\tilde x \perp \mathcal{V}$ if and only if it minimizes the $A$-norm of the error over $\mathcal{V}$:
$$\|x - \tilde x\|_A = \min_{\hat x \in \mathcal{V}} \|x - \hat x\|_A, \qquad \|z\|_A^2 = z^T A z.$$

Theorem: Let $A$ be a nonsingular matrix and $\mathcal{W} = A\mathcal{V}$. Then, $\tilde x \in \mathcal{V}$ is such that $b - A\tilde x \perp \mathcal{W}$ if and only if it minimizes the norm of the residual over $\mathcal{V}$:
$$\|b - A\tilde x\|_2 = \min_{\hat x \in \mathcal{V}} \|b - A\hat x\|_2.$$
Basic one-dimensional projection algorithms I

Basic one-dimensional projection schemes consist in selecting $\mathcal{V}$ and $\mathcal{W}$ with dimension $1$. Let us denote $\mathcal{V} = \mathrm{span}(v)$ and $\mathcal{W} = \mathrm{span}(w)$. Denoting by $r = b - Ax^k$ the residual at iteration $k$, the next iterate is defined by
$$x^{k+1} = x^k + \alpha v, \qquad \alpha = \frac{w^T r}{w^T A v}.$$

Definition (steepest descent): We let $v = r$ and $w = r$. We then have
$$x^{k+1} = x^k + \alpha r, \qquad \alpha = \frac{(r, r)}{(Ar, r)}.$$
If $A$ is a symmetric positive definite matrix, it then corresponds to a steepest descent algorithm, with an optimal choice of step $\alpha$, for minimizing the convex function
$$f(x) = \frac{1}{2} x^T A x - x^T b.$$
We note that $\nabla f(x^k) = Ax^k - b = -r$, and $x^{k+1}$ is the solution of
$$f(x^{k+1}) = \min_\alpha f(x^k + \alpha r).$$
Basic one-dimensional projection algorithms II

Theorem (convergence of steepest descent): If $A$ is a symmetric positive definite matrix, the steepest descent algorithm converges.

Definition (minimal residual): We let $v = r$ and $w = Ar$. We then have
$$x^{k+1} = x^k + \alpha r, \qquad \alpha = \frac{(Ar, r)}{(Ar, Ar)},$$
which is the solution of
$$\|b - Ax^{k+1}\|_2 = \min_\alpha \|b - A(x^k + \alpha r)\|_2.$$

Theorem: If $A$ is positive definite, the minimal residual algorithm converges.
Basic one-dimensional projection algorithms III

Definition (residual norm steepest descent): We let $v = A^T r$ and $w = Av = AA^T r$. We then have
$$x^{k+1} = x^k + \alpha A^T r, \qquad \alpha = \frac{(v, v)}{(Av, Av)}, \quad v = A^T r,$$
which is the solution of
$$\min_\alpha f(x^k + \alpha v), \qquad f(x) = \frac{1}{2}\|b - Ax\|_2^2.$$
It then corresponds to a steepest descent algorithm, with an optimal choice of step $\alpha$, on the convex function $f(x) = \frac{1}{2}\|b - Ax\|_2^2$. Note that $\nabla f(x^k) = -A^T(b - Ax^k) = -A^T r = -v$.

Theorem: If $A$ is nonsingular, the residual norm steepest descent algorithm converges.
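A sketch of the first of these schemes, steepest descent for a symmetric positive definite $A$ (our own code; the relative-residual stopping test is a common choice):

```python
import numpy as np

def steepest_descent(A, b, x0, maxiter=1000, tol=1e-10):
    """Steepest descent for Ax = b, A symmetric positive definite."""
    x = x0.astype(float).copy()
    for _ in range(maxiter):
        r = b - A @ x                      # residual; grad f(x) = Ax - b = -r
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        alpha = (r @ r) / (r @ (A @ r))    # optimal step along r
        x = x + alpha * r
    return x
```

The minimal residual and residual norm variants only change the formula for alpha (and the search direction, for the latter).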
Krylov subspace methods

Krylov subspace methods are projection methods which consist in defining the subspace $\mathcal{V}$ as the $m$-dimensional Krylov subspace of the matrix $A$,
$$\mathcal{V} = K_m(A, r_0) = \mathrm{span}\{ r_0, Ar_0, A^2 r_0, \dots, A^{m-1} r_0 \},$$
associated with $r_0 = b - Ax_0$, where $x_0$ is an initial guess. The different Krylov subspace methods differ by the choice of the space $\mathcal{W}$ and by the choice of a preconditioner:
- a first class of methods consists in taking $\mathcal{W} = K_m(A, r_0)$;
- a second class of methods consists in taking $\mathcal{W} = K_m(A^T, r_0)$ or $\mathcal{W} = A K_m(A, r_0)$.

A complete reference about iterative methods: Yousef Saad, Iterative Methods for Sparse Linear Systems, SIAM.
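In practice one rarely implements these methods from scratch. A usage sketch with SciPy's GMRES, a method of the second class ($\mathcal{W} = AK_m(A, r_0)$), assuming SciPy is available (the toy system is our own):

```python
import numpy as np
from scipy.sparse.linalg import gmres

A = np.diag(np.arange(1.0, 101.0))   # toy well-conditioned system
b = np.ones(100)
x, info = gmres(A, b)                # info == 0 signals convergence
print(info, np.linalg.norm(b - A @ x))
```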
Part IV: Eigenvalue problems
- Jacobi method
- Givens-Householder method
- QR method
- Power iterations
- Methods based on Krylov subspaces
The aim is to present different techniques for finding the eigenvalues and eigenvectors $(\lambda_i, v_i)$ of a matrix $A$:
$$A v_i = \lambda_i v_i.$$
Jacobi method I

The Jacobi method allows one to find all the eigenvalues of a symmetric matrix $A$. It is well adapted to full matrices. There exists an orthogonal matrix $O$ such that $O^T A O = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$, where the $\lambda_i$ are the eigenvalues of $A$, distinct or not. The Jacobi method consists in constructing a sequence of elementary orthogonal matrices $(\Omega_k)$ such that the sequence
$$A_{k+1} = \Omega_k^T A_k \Omega_k = O_k^T A O_k, \qquad O_k = \Omega_1 \cdots \Omega_k, \quad A_1 = A,$$
converges towards the diagonal matrix $\mathrm{diag}(\lambda_{\sigma(1)}, \dots, \lambda_{\sigma(n)})$, with $\sigma$ an eventual permutation of $\{1, \dots, n\}$.

Each transformation $A_k \to A_{k+1}$ consists in eliminating two symmetric extra-diagonal terms by a rotation. Let $A = A_k$ and $B = A_{k+1}$. The matrix $\Omega_k$ is selected as follows:
$$\Omega_k = I + (\cos\theta - 1)(e_p e_p^T + e_q e_q^T) + \sin\theta\,(e_p e_q^T - e_q e_p^T),$$
where $\theta \in [-\pi/4, \pi/4] \setminus \{0\}$ is the unique angle such that $b_{pq} = b_{qp} = 0$, solution of
$$\cot(2\theta) = \frac{a_{qq} - a_{pp}}{2\,a_{pq}}.$$
Jacobi method II

Theorem (convergence of eigenvalues): The sequence $(A_k)$ obtained with the Jacobi method converges, and
$$\lim_{k\to\infty} A_k = \mathrm{diag}(\lambda_{\sigma(i)}),$$
where $\sigma$ is a permutation of $\{1, \dots, n\}$.

Theorem (convergence of eigenvectors): We suppose that all eigenvalues of $A$ are distinct. Then, the sequence $(O_k)$ in the Jacobi method converges to an orthogonal matrix whose columns form an orthonormal set of eigenvectors of $A$.
Givens-Householder method I

The Givens-Householder method is adapted to the search for selected eigenvalues of a symmetric matrix $A$, such as the eigenvalues lying in a given interval. Two steps:
1. Determine an orthogonal matrix $P$ such that $P^T A P$ is tridiagonal, with the Householder method.
2. Compute the eigenvalues of the tridiagonal symmetric matrix with the Givens method.

Theorem: For a symmetric matrix $A$, there exists an orthogonal matrix $P$, product of $n-2$ Householder matrices $H_k$, such that
$$P^T A P = H_{n-2}^T \cdots H_1^T A\, H_1 \cdots H_{n-2} = \begin{pmatrix} \alpha_1 & \beta_1 & & 0 \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_{n-1} \\ 0 & & \beta_{n-1} & \alpha_n \end{pmatrix}$$
is tridiagonal.
QR method

The QR method is the most commonly used method to compute the whole set of eigenvalues of an arbitrary matrix $A$, even nonsymmetric.

QR algorithm: Let $A_1 = A$. For $k = 1, 2, \dots$, perform until convergence:
$$A_k = Q_k R_k \ \text{(QR factorization)}, \qquad A_{k+1} = R_k Q_k.$$
All matrices $A_k$ are similar to the matrix $A$. Under certain conditions, the matrix $A_k$ converges towards a triangular matrix, which is the Schur form of $A$, whose diagonal terms are the eigenvalues of $A$.
Power iterations method I

The power iteration method allows the capture of the dominant (largest magnitude) eigenvalue and associated eigenvector of a real matrix $A$.

Power iteration algorithm: Start with an arbitrary normalized vector $x^0$ and compute the sequences
$$x^{k+1} = \frac{Ax^k}{\|Ax^k\|}, \qquad \lambda^{k+1} = \|Ax^{k+1}\|.$$

Theorem: If the dominant eigenvalue is real and of multiplicity $1$, the sequences $(x^k)$ and $(\lambda^k)$ respectively converge (up to a sign) towards the dominant eigenvector and the magnitude of the dominant eigenvalue.
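A direct transcription of the algorithm (our own code, using the Rayleigh quotient as the final signed eigenvalue estimate, a standard variant for symmetric matrices):

```python
import numpy as np

def power_iteration(A, x0, maxiter=1000):
    """Power iteration for the dominant eigenpair of A."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxiter):
        y = A @ x
        x = y / np.linalg.norm(y)   # x^{k+1} = A x^k / ||A x^k||
    lam = x @ (A @ x)               # Rayleigh quotient estimate of lambda_1
    return lam, x
```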
Power iterations method II

Proof: Let us prove the convergence of the method when $A$ is symmetric. Then, there exists an orthonormal basis of eigenvectors $(v_1, \dots, v_n)$, associated with eigenvalues $(\lambda_1, \dots, \lambda_n)$. Let us consider that $|\lambda_1| > |\lambda_i|$ for all $i > 1$. The initial vector $x^0$ can be decomposed on this basis, $x^0 = \sum_{i=1}^n a_i v_i$ with $a_1 \neq 0$, and then, since $Av_i = \lambda_i v_i$,
$$A^k x^0 = \sum_{i=1}^n a_i \lambda_i^k v_i = a_1 \lambda_1^k w^k, \qquad w^k = v_1 + \sum_{i=2}^n \frac{a_i}{a_1}\left( \frac{\lambda_i}{\lambda_1} \right)^k v_i.$$
Since $|\lambda_i/\lambda_1| < 1$, we have $w^k \to v_1$, and we obtain
$$x^k = \frac{A^k x^0}{\|A^k x^0\|} = \mathrm{sign}(a_1 \lambda_1^k)\, \frac{w^k}{\|w^k\|} \to \pm v_1.$$
Let us note that, for general matrices, a proof using the Jordan form can be used.
Power iterations method III

Power method with deflation: Under certain conditions, the power method with deflation allows one to compute the whole set of eigenvalues of a matrix. See the exercises.

Definition (inverse power method): For an invertible matrix $A$, the inverse power method consists in applying the power method to the matrix $A^{-1}$. Indeed, if we denote by $(v_i, \lambda_i)$ the eigenpairs of the matrix $A$, $A^{-1}$ has for eigenpairs $(v_i, \lambda_i^{-1})$. Therefore, applying the power method to the matrix $A^{-1}$ allows one to obtain the eigenvalue of $A$ with smallest magnitude and the associated eigenvector (if the smallest magnitude eigenvalue is of multiplicity $1$). Exercise.

Definition (shifted inverse power method): The shifted inverse power method consists in applying the inverse power method to the shifted matrix $A_\sigma = A - \sigma I$. It allows the capture of the eigenvalue, and associated eigenvector, which is the closest from the value $\sigma$: the inverse power method on $A_\sigma$ converges towards the eigenvalue $\lambda_i$ such that $|\lambda_i - \sigma| = \min_j |\lambda_j - \sigma|$.
Methods based on Krylov subspaces

A complete reference for the solution of eigenvalue problems: Yousef Saad, Numerical Methods For Large Eigenvalue Problems, SIAM.
Part V: Nonlinear equations
- Fixed point theorem
- Nonlinear equations with monotone operators
- Differential calculus for nonlinear operators
- Newton method
Solving nonlinear equations

The aim is to introduce different techniques for finding the solution $u$ of a nonlinear equation
$$A(u) = b, \qquad u \in K \subset V,$$
where $K$ is a subset of a vector space $V$ and $A : K \to V$ is a nonlinear mapping. We will equivalently consider the nonlinear equation
$$F(u) = 0, \qquad u \in K \subset V, \qquad \text{where } F : K \to V.$$
Infinite dimensional framework

Definition: A Banach space $V$ is a complete normed vector space. That means that it is a vector space on the complex or real field, equipped with a norm $\|\cdot\|$, and such that every Cauchy sequence with respect to this norm has a limit in $V$.

Definition: A Hilbert space is a Banach space $V$ whose norm is associated with a scalar (or Hermitian) product $\langle\cdot,\cdot\rangle$, with $\|v\| = \langle v, v \rangle^{1/2}$.

Example: $V = \mathbb{R}^n$ equipped with the natural Euclidean scalar product is a finite-dimensional Hilbert space. $V = \mathbb{C}^n$ equipped with the natural Hermitian product is a finite-dimensional Hilbert space on the complex field.
Fixed point theorem I

We here consider nonlinear problems under the form
$$T(u) = u, \qquad u \in K \subset V,$$
where $T : K \to V$ is a nonlinear operator. We are interested in the existence of a solution to this equation, and in the possibility of approaching this solution by the sequence $(u_k)$ defined by
$$u_{k+1} = T(u_k).$$

Remark: Nonlinear equations $F(u) = 0$ can be recast in different ways in the above form, e.g. by letting $T(u) = F(u) + u$, so that $T(u) = u \Leftrightarrow F(u) = 0$.

Definition: A solution $u$ of the equation $T(u) = u$ is called a fixed point of the mapping $T$.
Fixed point theorem II

Definition: Let $V$ be a Banach space endowed with a norm $\|\cdot\|$. A mapping $T : K \subset V \to V$ is said
- contractive if there exists a constant $\alpha$, with $\alpha < 1$, such that $\|T(u) - T(v)\| \leq \alpha \|u - v\|$ for all $u, v \in K$; $\alpha$ is called the contractivity constant;
- nonexpansive if $\|T(u) - T(v)\| \leq \|u - v\|$ for all $u, v \in K$;
- Lipschitz continuous if there exists a constant $L$ such that $\|T(u) - T(v)\| \leq L \|u - v\|$ for all $u, v \in K$; $L$ is called the Lipschitz-continuity constant.
Fixed point theorem III

Theorem (Banach fixed-point theorem): Assume that $K$ is a closed set in a Banach space $V$ and that $T : K \to K$ is a contractive mapping with contractivity constant $\alpha < 1$. Then, we have the following results:
- there exists a unique $u \in K$ such that $T(u) = u$;
- for any $u_0 \in K$, the sequence $(u_k)$ in $K$, defined by $u_{k+1} = T(u_k)$, converges to $u$.
Fixed point theorem IV

Proof: Let us prove that $(u_k)$ is a Cauchy sequence. We have
$$\|u_{k+1} - u_k\| = \|T(u_k) - T(u_{k-1})\| \leq \alpha \|u_k - u_{k-1}\| \leq \dots \leq \alpha^k \|u_1 - u_0\|.$$
Then, for $m > k$,
$$\|u_m - u_k\| \leq \sum_{i=k}^{m-1} \|u_{i+1} - u_i\| \leq \sum_{i=k}^{m-1} \alpha^i \|u_1 - u_0\| \leq \frac{\alpha^k}{1-\alpha}\|u_1 - u_0\|.$$
Since $\alpha < 1$, $\|u_m - u_k\| \to 0$ as $m, k \to \infty$, so $(u_k)$ is a Cauchy sequence. Since the sequence $(u_k)$ is Cauchy in a Banach space $V$, it converges to some $u \in V$, and since $K$ is closed, the limit $u \in K$. In the relation $u_{k+1} = T(u_k)$, we take the limit $k \to \infty$ and obtain $u = T(u)$, by continuity of $T$; therefore, $u$ is a fixed point of $T$.

For the uniqueness, suppose that $u^1$ and $u^2$ are two fixed points. Then
$$\|u^1 - u^2\| = \|T(u^1) - T(u^2)\| \leq \alpha \|u^1 - u^2\|,$$
which is possible only if $u^1 = u^2$.
Fixed point theorem V

Example: Let $V = \mathbb{R}$ and $T(x) = ax + b$. Let us note that $|T(x) - T(y)| = |a|\,|x - y|$, and therefore $T$ is a contractive mapping if $|a| < 1$. The sequence $x_{k+1} = T(x_k)$ is characterized by
$$x_{k+1} = a x_k + b = a^{k+1} x_0 + (1 + a + \dots + a^k)\, b.$$
If $|a| < 1$, $x_k$ converges to $\frac{b}{1-a}$, which is the unique fixed point of $T$. If $|a| > 1$, the sequence diverges.
Nonlinear equations with monotone operators I

We consider the application of the fixed point theorem to the analysis of the solvability of a class of nonlinear equations
$$A(u) = b, \qquad u \in V,$$
where $V$ is a Hilbert space and $A : V \to V$ is a Lipschitz continuous and strongly monotone operator.

Definition (monotone operator): A mapping $A : V \to V$ on a Hilbert space $V$ is said
- monotone if $\langle A(u) - A(v), u - v \rangle \geq 0$ for all $u, v \in V$;
- strictly monotone if $\langle A(u) - A(v), u - v \rangle > 0$ for all $u, v \in V$ with $u \neq v$;
- strongly monotone if there exists a constant $c > 0$ such that $\langle A(u) - A(v), u - v \rangle \geq c\,\|u - v\|^2$ for all $u, v \in V$; $c$ is called the strong monotonicity constant.
Nonlinear equations with monotone operators II

Theorem: Let $V$ be a Hilbert space and $A : V \to V$ a strongly monotone and Lipschitz continuous operator, with monotonicity constant $c$ and Lipschitz-continuity constant $L$. Then, for any $b \in V$, there exists a unique $u \in V$ such that
$$A(u) = b.$$
Moreover, if $A(u^1) = b^1$ and $A(u^2) = b^2$, then
$$\|u^1 - u^2\| \leq \frac{1}{c}\,\|b^1 - b^2\|,$$
which means that the solution depends continuously on the right-hand side $b$.
Nonlinear equations with monotone operators III

Proof: The idea is to prove that there exists a $\theta$ such that $T_\theta : V \to V$, defined by
$$T_\theta(u) = u - \theta\,(A(u) - b),$$
is contractive. The equation $A(u) = b$ is equivalent to $T_\theta(u) = u$ for any $\theta \neq 0$. We have
$$\|T_\theta(w) - T_\theta(v)\|^2 = \|w - v\|^2 - 2\theta\,\langle A(w) - A(v), w - v \rangle + \theta^2\,\|A(w) - A(v)\|^2 \leq (1 - 2\theta c + \theta^2 L^2)\,\|w - v\|^2.$$
For $0 < \theta < 2c/L^2$, we have $1 - 2\theta c + \theta^2 L^2 < 1$, and $T_\theta$ is a contraction. The application of the Banach fixed point theorem then gives the existence and uniqueness of a fixed point of $T_\theta$, and therefore the existence and uniqueness of a solution to $A(u) = b$.

Now, if $A(u^1) = b^1$ and $A(u^2) = b^2$, we have
$$c\,\|u^1 - u^2\|^2 \leq \langle A(u^1) - A(u^2), u^1 - u^2 \rangle = \langle b^1 - b^2, u^1 - u^2 \rangle \leq \|b^1 - b^2\|\,\|u^1 - u^2\|,$$
where the second inequality is the Cauchy-Schwarz inequality satisfied by the inner product of a Hilbert space, and therefore $\|u^1 - u^2\| \leq \frac{1}{c}\|b^1 - b^2\|$. This proves the continuity of the solution $u$ with respect to $b$.
Fréchet and Gâteaux derivatives I

Let $F : K \subset V \to W$ be a nonlinear mapping, where $K$ is a subset of a normed space $V$ and $W$ a normed space. We denote by $\mathcal{L}(V, W)$ the set of continuous linear applications from $V$ to $W$.

Definition (Fréchet derivative): $F$ is Fréchet-differentiable at $u$ if and only if there exists $A \in \mathcal{L}(V, W)$ such that
$$F(u + v) = F(u) + Av + o(\|v\|) \quad \text{as } \|v\| \to 0.$$
$A$ is denoted $F'(u)$ and is called the Fréchet derivative of $F$ at $u$. If $F$ is Fréchet-differentiable at all points in $K$, we denote by $F' : K \subset V \to \mathcal{L}(V, W)$ the Fréchet derivative of $F$ on $K$.

Property: If $F$ admits a Fréchet derivative $F'(u)$ at $u$, then $F$ is continuous at $u$.
If F is Gteauxdierentiable at all points in K . . if a mapping F is Gteauxdierentiable at u and if
F is continuous at u or if the limit in is uniform with v such that v . Property If a mapping F is
Frchetdierentiable. W such that F u tv F u lim Av v V t t A is denoted F u and is called the
Gteaux derivative of F at u . then F is also Frchetdierentiable and the two derivatives
coincide. W the Gteaux derivative of F on K . Conversely. it is also Gteaux dierentiable and
the derivatives F coincide. we denote by F K V LV .Fixed point Monotone operators
Dierential calculus Newton method Frchet and Gteaux derivatives II Denition Gteaux
derivative F is Gteauxdierentiable at u if and only if there exists A LV .
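As a short worked example (ours, but standard): let J(u) = ½(u, u) on a Hilbert space V. Then

  (J(u + tv) − J(u))/t = (u, v) + (t/2)(v, v) → (u, v)  as t → 0,

so J is Gâteaux differentiable with J′(u)v = (u, v). Moreover J(u + v) − J(u) − (u, v) = ½‖v‖² = o(‖v‖), so J is in fact Fréchet differentiable with the same derivative.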
Convex functions I

Definition. A subset K of a vector space V is said convex if

  u, v ∈ K, t ∈ [0, 1] ⟹ tu + (1 − t)v ∈ K.

Definition. A function J: K → ℝ, defined on a convex set K of V, is said
  convex if for all u, v ∈ K and t ∈ [0, 1], J(tu + (1 − t)v) ≤ tJ(u) + (1 − t)J(v),
  strictly convex if for all u, v ∈ K with u ≠ v and t ∈ (0, 1), J(tu + (1 − t)v) < tJ(u) + (1 − t)J(v).
Convex functions II

Theorem. Let J: K ⊂ V → ℝ be Gâteaux differentiable. The following statements are equivalent:
  (i) J is convex;
  (ii) J(v) ≥ J(u) + (J′(u), v − u) for all u, v ∈ K;
  (iii) J′ is monotone, i.e. (J′(v) − J′(u), v − u) ≥ 0 for all u, v ∈ K.

Theorem. Let J: K ⊂ V → ℝ be Gâteaux differentiable. The following statements are equivalent:
  (i) J is strictly convex;
  (ii) J(v) > J(u) + (J′(u), v − u) for all u, v ∈ K with u ≠ v;
  (iii) J′ is strictly monotone, i.e. (J′(v) − J′(u), v − u) > 0 for all u, v ∈ K with u ≠ v.
Convex functions III

Definition. A function J: K ⊂ V → ℝ is said strongly convex if it is Gâteaux differentiable and if its Gâteaux derivative is strongly monotone, i.e. if there exists a constant α > 0 such that

  J(v) ≥ J(u) + (J′(u), v − u) + (α/2) ‖v − u‖²  for all u, v ∈ K.
Convex optimization I

Theorem. Let K be a closed convex subset of a Hilbert space V, and assume that J: K → ℝ is a convex and Gâteaux differentiable mapping. Then there exists u ∈ K such that

  J(u) = inf_{v ∈ K} J(v)

if and only if there exists u ∈ K such that

  (J′(u), v − u) ≥ 0  for all v ∈ K.

When K is a linear subspace, the last inequality reduces to (J′(u), v) = 0 for all v ∈ K.
Convex optimization II

Proof. Assume J(u) = inf_{v ∈ K} J(v). Since K is convex, for all v ∈ K and t ∈ (0, 1], tv + (1 − t)u = u + t(v − u) ∈ K, and therefore

  J(u) ≤ J(u + t(v − u)),  i.e.  (J(u + t(v − u)) − J(u))/t ≥ 0.

Taking the limit t → 0, we obtain (J′(u), v − u) ≥ 0. Conversely, assume (J′(u), v − u) ≥ 0 for all v ∈ K. Since J is convex, for all v ∈ K,

  J(v) ≥ J(u) + (J′(u), v − u) ≥ J(u).

Finally, if K is a subspace, then for all v ∈ K, both u + v ∈ K and u − v ∈ K, and therefore (J′(u), v) ≥ 0 and (J′(u), −v) ≥ 0, which gives (J′(u), v) = 0 for all v ∈ K.
Newton method I

Let U and V be two Banach spaces and F: U → V a Fréchet differentiable function. We want to solve

  F(u) = 0.

The Newton method consists in constructing a sequence (u^n)_{n∈ℕ} by solving successive linearized problems, defined as follows. At iteration n, we introduce the linearization F_n of F at u^n, defined by

  F_n(v) = F(u^n) + F′(u^n)(v − u^n),

and we define u^(n+1) such that F_n(u^(n+1)) = 0. The Newton iterations are then:

Newton iterations. Start from an initial guess u^0 and compute the sequence (u^n)_{n∈ℕ} defined by

  u^(n+1) = u^n − F′(u^n)⁻¹ F(u^n).
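In one dimension this reads u^(n+1) = u^n − F(u^n)/F′(u^n). A minimal sketch (ours; the test problem F(u) = u² − 2 and the tolerances are illustrative):

    # Scalar Newton iteration: solve the linearized problem F(u) + F'(u)*du = 0
    # at each step and update u. Converges to sqrt(2) from u0 = 1.
    def newton(F, dF, u0, tol=1e-12, max_iter=50):
        u = u0
        for _ in range(max_iter):
            du = -F(u) / dF(u)
            u += du
            if abs(du) <= tol:
                break
        return u

    print(newton(lambda u: u**2 - 2, lambda u: 2*u, u0=1.0))   # ~1.414213562...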
Newton method II

Theorem (local convergence of the Newton method). Assume u* is a solution of F(u) = 0 and assume that F′(u*)⁻¹ exists and is a continuous linear map from V to U. Assume that F′ is locally Lipschitz continuous at u*, i.e.

  ‖F′(u) − F′(v)‖ ≤ L ‖u − v‖  for all u, v ∈ N(u*),

where N(u*) is a neighborhood of u*. Then there exists ε > 0 such that if ‖u^0 − u*‖ ≤ ε, the sequence (u^n)_{n≥0} of the Newton method is well defined and converges to u*. Moreover, there exists a constant M with Mε < 1 such that

  ‖u^(n+1) − u*‖ ≤ M ‖u^n − u*‖²  and  ‖u^n − u*‖ ≤ (Mε)^(2^n) / M.

Proof. See Atkinson & Han.
Newton method for nonlinear systems of equations I

Let F: ℝ^m → ℝ^m and consider the nonlinear system of equations

  F(u) = 0.

The iterations of the Newton method are defined by

  u^(n+1) = u^n − F′(u^n)⁻¹ F(u^n),

where F′(u^n) ∈ ℝ^(m×m) is called the tangent matrix at u^n. In algebraic notations, F(u) and F′(u) can be expressed as follows:

  F(u) = ( F₁(u₁, …, u_m), …, F_m(u₁, …, u_m) )ᵀ,

  F′(u) = ( ∂F_i/∂u_j )_{1 ≤ i, j ≤ m} =
    ( ∂F₁/∂u₁  ⋯  ∂F₁/∂u_m )
    (    ⋮            ⋮     )
    ( ∂F_m/∂u₁ ⋯  ∂F_m/∂u_m ).
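A minimal sketch (ours) for a hypothetical 2x2 example, F(u) = (u₁² + u₂² − 1, u₁ − u₂), whose root is the intersection of the unit circle with the line u₁ = u₂:

    import numpy as np

    def F(u):
        return np.array([u[0]**2 + u[1]**2 - 1.0, u[0] - u[1]])

    def dF(u):  # tangent (Jacobian) matrix F'(u)
        return np.array([[2*u[0], 2*u[1]],
                         [1.0,   -1.0]])

    u = np.array([1.0, 0.5])
    for _ in range(20):
        delta = np.linalg.solve(dF(u), -F(u))   # solve F'(u^n) delta = -F(u^n)
        u += delta
        if np.linalg.norm(delta) <= 1e-12:
            break
    print(u)   # ~ (1/sqrt(2), 1/sqrt(2))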
Modified Newton method

One iteration of the full Newton method can be written as a linear system of equations

  A_n δ^n = −F(u^n),  u^(n+1) = u^n + δ^n,  where A_n = F′(u^n).

In order to avoid the computation of the tangent matrix F′(u^n) at each iteration, we can use modified Newton iterations where A_n is only an approximation of F′(u^n). For example, we could update A_n when the convergence is too slow, or after every k iterations:

  A_n = F′(u^(mk))  for n = mk + j,  j = 0, …, k − 1.

Remark. The convergence of the modified Newton method is usually slower than that of the full Newton method, but more iterations can be performed for the same computation time.
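A minimal sketch (ours) of this strategy, where the tangent matrix is refreshed only every k iterations; F and dF are the residual and tangent-matrix callables (e.g. those of the previous sketch). In practice one would factorize the frozen matrix once and reuse the factors for all k solves.

    import numpy as np

    def modified_newton(F, dF, u0, k=5, tol=1e-12, max_iter=200):
        u = np.asarray(u0, dtype=float)
        A = dF(u)
        for n in range(max_iter):
            if n % k == 0:
                A = dF(u)                      # A_n = F'(u^{mk}) for n = mk + j
            delta = np.linalg.solve(A, -F(u))  # reuse the frozen matrix otherwise
            u += delta
            if np.linalg.norm(delta) <= tol:
                break
        return u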
Part VI: Interpolation / Approximation
  Interpolation: Lagrange interpolation, Hermite interpolation, Trigonometric interpolation
  Best approximation: Elements on topological vector spaces, General existence results, Existence and uniqueness of best approximation, Best approximation in Hilbert spaces
  Orthogonal polynomials: Weighted L² spaces, Classical orthogonal polynomials
Principle of approximation

The aim is to replace a function f, known exactly or approximately, by an approximating function p which is more convenient for numerical computation. The most commonly used approximating functions p are polynomials, piecewise polynomials or trigonometric polynomials. There are several ways of defining the approximating function among a given class of functions: interpolation, projection, ...
Preliminary definitions

We denote by P_n(I) the space of polynomials of degree ≤ n defined on the closed interval I ⊂ ℝ:

  P_n(I) = { v: I → ℝ ; v(x) = Σ_{i=0}^n v_i x^i, v_i ∈ ℝ }.

We denote by C⁰(I) the space of continuous functions f: I → ℝ. C⁰(I) is a Banach space when equipped with the norm

  ‖f‖_{C⁰(I)} = sup_{x ∈ I} |f(x)|.

We denote by f^(i) the i-th derivative of f. We denote by C^m(I) the space of m times differentiable functions f such that all the derivatives f^(i) of order i ≤ m are continuous. C^m(I) is a Banach space when equipped with the norm

  ‖f‖_{C^m(I)} = max_{0 ≤ i ≤ m} ‖f^(i)‖_{C⁰(I)}.
Lagrange interpolation

Let f ∈ C⁰([a, b]) be a continuous function defined on the interval [a, b]. We introduce a set of n + 1 distinct points {x_i}_{i=0}^n on [a, b], such that a ≤ x₀ < x₁ < ⋯ < x_n ≤ b. The Lagrange interpolant p_n ∈ P_n of f is the unique polynomial of degree ≤ n such that

  p_n(x_i) = f(x_i)  for all i = 0, …, n.

We can represent p_n as follows:

  p_n(x) = Σ_{i=0}^n f(x_i) ℓ_i(x),

where the {ℓ_i}_{i=0}^n form a basis of P_n, called the Lagrange interpolation basis,

  ℓ_i(x) = Π_{j=0, j≠i}^n (x − x_j)/(x_i − x_j),  i = 0, …, n.

It is the unique basis of functions satisfying the interpolation conditions ℓ_i(x_j) = δ_ij.
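A minimal sketch (ours) evaluating p_n directly from the definition of the basis; this is fine for small n, while barycentric formulas are preferred for stability at larger n.

    import numpy as np

    def lagrange_interpolant(nodes, values, x):
        """Evaluate p_n(x) = sum_i f(x_i) l_i(x) from the Lagrange basis."""
        p = 0.0
        for i, xi in enumerate(nodes):
            li = 1.0
            for j, xj in enumerate(nodes):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            p += values[i] * li
        return p

    nodes = np.array([0.0, 0.5, 1.0])
    values = np.sin(np.pi * nodes)                     # interpolate f(x) = sin(pi x)
    print(lagrange_interpolant(nodes, values, 0.25))   # 0.75, close to sin(pi/4)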
Theorem. Assume f ∈ C^(n+1)([a, b]). Then, for x ∈ [a, b], there exists ξ_x ∈ [a, b] such that

  f(x) − p_n(x) = (1/(n + 1)!) ω_{n+1}(x) f^(n+1)(ξ_x),  ω_{n+1}(x) = Π_{i=0}^n (x − x_i).

Influence of the interpolation grid. [Figure: the function ω_{n+1}(x) on [−1, 1], for a uniform grid (red), a Gauss-Legendre grid (blue) and a random grid (black).]
Lagrange interpolation, a famous example. [Figure: interpolation of the Runge function f(x) = 1/(1 + 25x²) on [−1, 1], for a uniform grid and a Gauss-Legendre grid and increasing n; the uniform-grid interpolant exhibits large oscillations near the endpoints.]
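The phenomenon is easy to reproduce numerically. A minimal sketch (ours; Chebyshev nodes are used here as a convenient stand-in for the slides' Gauss-Legendre grid):

    import numpy as np

    # Runge phenomenon: interpolate f(x) = 1/(1 + 25 x^2) on [-1, 1] with n+1
    # uniform nodes versus Chebyshev nodes, and compare the maximum error.
    f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
    n = 12
    x_fine = np.linspace(-1, 1, 2001)
    for name, nodes in [("uniform",   np.linspace(-1, 1, n + 1)),
                        ("Chebyshev", np.cos((2*np.arange(n+1)+1)*np.pi/(2*(n+1))))]:
        coeffs = np.polyfit(nodes, f(nodes), n)     # interpolating polynomial
        err = np.max(np.abs(np.polyval(coeffs, x_fine) - f(x_fine)))
        print(name, err)   # the uniform-grid error is much larger near the endpoints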
Hermite polynomial interpolation

First order interpolation

First order Hermite polynomial interpolation consists in interpolating a function f(x) and its derivative f′(x). Assume f ∈ C¹([a, b]). We introduce a set of n + 1 distinct points {x_i}_{i=0}^n on [a, b], with a ≤ x₀ < ⋯ < x_n ≤ b. The Hermite interpolant p ∈ P_{2n+1} of f is uniquely defined by the following 2(n + 1) interpolation conditions:

  p(x_i) = f(x_i),  p′(x_i) = f′(x_i),  0 ≤ i ≤ n.
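As a concrete instance (ours, but standard): for two nodes x₀ < x₁, setting h = x₁ − x₀ and s = (x − x₀)/h, the first-order Hermite interpolant is the classical cubic

  p(x) = f(x₀)(1 − 3s² + 2s³) + f(x₁)(3s² − 2s³) + h f′(x₀)(s − 2s² + s³) + h f′(x₁)(s³ − s²),

which can be checked to satisfy the four conditions p(x_i) = f(x_i), p′(x_i) = f′(x_i), i = 0, 1.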
General Hermite polynomial interpolation

Higher order interpolation

Hermite interpolation can be generalized to the interpolation of higher order derivatives. At a given point x_i, it interpolates the function and its derivatives up to the order m_i ∈ ℕ. Let N + 1 = Σ_{i=0}^n (m_i + 1) be the total number of interpolation conditions. A generalized Hermite interpolant p_N ∈ P_N is uniquely defined by the following conditions:

  p_N^(j)(x_i) = f^(j)(x_i),  0 ≤ j ≤ m_i,  0 ≤ i ≤ n.

Theorem. Assume f ∈ C^(N+1)([a, b]). Then, for x ∈ [a, b], there exists ξ_x ∈ [a, b] such that

  f(x) − p_N(x) = (1/(N + 1)!) ω_{N+1}(x) f^(N+1)(ξ_x),  ω_{N+1}(x) = Π_{i=0}^n (x − x_i)^(m_i + 1).
Trigonometric polynomials

A trigonometric polynomial is defined as follows:

  p_n(x) = a₀ + Σ_{j=1}^n ( a_j cos(jx) + b_j sin(jx) ),  x ∈ [0, 2π).

p_n is said of degree n if |a_n| + |b_n| ≠ 0. An equivalent notation is as follows:

  p_n(x) = Σ_{j=−n}^n c_j e^(ijx),

with a₀ = c₀, a_j = c_j + c_{−j}, b_j = i(c_j − c_{−j}), or equivalently, under a polynomial-like form,

  p_n(x) = Σ_{j=−n}^n c_j z^j = z^(−n) Σ_{k=0}^{2n} c_{k−n} z^k,  z = e^(ix).
Trigonometric interpolation

We introduce 2n + 1 distinct interpolation points {x_j}_{j=0}^{2n} in [0, 2π). The trigonometric interpolant of degree n of a function f is defined by the following conditions:

  p_n(x_j) = f(x_j),  0 ≤ j ≤ 2n.

It can be equivalently reformulated as an interpolation problem in the complex plane: find {c_{k−n}}_{k=0}^{2n} such that

  Σ_{k=0}^{2n} c_{k−n} z_j^k = z_j^n f(x_j),  0 ≤ j ≤ 2n,

where we have introduced the complex points z_j = e^(i x_j). Classically, we use the uniformly distributed points x_j = 2πj/(2n + 1), 0 ≤ j ≤ 2n.
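For the uniformly distributed points, the coefficients are given by a discrete Fourier transform, c_k = (1/(2n+1)) Σ_j f(x_j) e^(−ik x_j). A minimal sketch (ours) using the FFT:

    import numpy as np

    # Trigonometric interpolation at the 2n+1 equispaced points x_j = 2*pi*j/(2n+1).
    n = 4
    m = 2 * n + 1
    x = 2 * np.pi * np.arange(m) / m
    f = np.exp(np.sin(x))                  # hypothetical 2*pi-periodic test function

    c = np.fft.fft(f) / m                  # DFT gives the coefficients c_k
    k = np.fft.fftfreq(m, d=1.0/m)         # integer frequencies -n..n (FFT ordering)

    def p(t):                              # evaluate p_n(t) = sum_k c_k e^{i k t}
        return np.real(np.sum(c * np.exp(1j * k * t)))

    print(p(x[1]), f[1])                   # the interpolation condition holds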
The problem of the best approximation

The aim is to find the best approximation p* of a function f in a set of functions K (e.g. a polynomial space, a piecewise polynomial space, ...):

  ‖f − p*‖ = min_{p ∈ K} ‖f − p‖.

The obtained best approximation p* depends on the norm selected for measuring the error e = f − p (e.g. the L² norm, the L∞ norm, ...). We will first introduce some general results about optimization problems

  inf_{v ∈ K} J(v),

by giving some general conditions on the set K and the function J for the existence of a minimizer.
A first comprehensive case: extrema of real-valued functions I

The classical result of Weierstrass states that a continuous function on a closed interval K = [a, b] has a minimum in K (and a maximum). Consider a real-valued continuous function J ∈ C([a, b]); the problem is to find a minimizer of

  inf_{v ∈ [a,b]} J(v).

We recall the main steps of a typical proof in order to extract more general requirements on K and J. We denote α = inf_{v ∈ K} J(v). By definition of the infimum, there exists a sequence v_n ∈ K such that lim_{n→∞} J(v_n) = α. K is a closed and bounded interval in ℝ, and therefore it is a compact set. Therefore, from the sequence (v_n) ⊂ K, we can extract a subsequence (v_{n_k}) which converges to some v ∈ K, v_{n_k} → v.
A first comprehensive case: extrema of real-valued functions II

Using the continuity of J, we obtain

  J(v) = lim_{k→∞} J(v_{n_k}) = α,

which proves that v is a minimizer of J in K. Now we come back to the different points of the proof in order to generalize the existence result to functionals J defined on a subset K of a Banach space V.

The existence of a minimizing sequence (v_n) ⊂ K is the definition of the infimum. However, in an infinite-dimensional Banach space V, a bounded sequence does not necessarily admit a (strongly) converging subsequence. However, for a reflexive Banach space V, there exists a weakly convergent subsequence. We then suppose that V is a reflexive Banach space and K ⊂ V is a bounded set. In order for K to contain the limit of this subsequence, K has to be weakly closed. Finally, we want the weak limit of the subsequence to be a minimizer of J. We could then impose J to be continuous with respect to weak limits; however, this condition is too restrictive, and it is sufficient to impose that J is weakly lower semicontinuous (allowing discontinuities).
Elements on topological vector spaces I

In the following, V denotes a normed space, i.e. a vector space equipped with a norm ‖·‖.

Definition (Strong convergence in V). A sequence v_n ∈ V is said to converge strongly to v ∈ V if

  lim_{n→∞} ‖v_n − v‖ = 0.

It is denoted v_n → v.

Definition (Cauchy sequence). A sequence v_n ∈ V is Cauchy if

  lim_{n→∞} sup_{i,j ≥ n} ‖v_i − v_j‖ = 0,

or equivalently, if for all ε > 0, there exists n ∈ ℕ such that for all i, j ≥ n, ‖v_i − v_j‖ ≤ ε.
Elements on topological vector spaces II

Definition (Closed set). A subset K ⊂ V is said to be closed if it contains all the limits of its convergent sequences:

  v_n ∈ K and v_n → v ⟹ v ∈ K.

The closure K̄ of a set K is the union of this set and of the limits of all converging sequences in K.

Definition (Banach space). A Banach space is a complete normed vector space, i.e. a normed vector space such that every Cauchy sequence in V has a limit in V.

Definition (Compact set). A subset K of a normed space V is said to be (sequentially) compact if every sequence (v_n)_{n∈ℕ} ⊂ K contains a subsequence (v_{n_k})_{k∈ℕ} converging to an element in K. A set K whose closure K̄ is compact is said relatively compact.
Elements on topological vector spaces III

Definition (Dual of a normed space V). The dual space V′ of a normed space V is the space L(V, ℝ) of linear continuous maps from V to ℝ. V′ is a Banach space for the norm

  ‖L‖_{V′} = sup_{v ∈ V, v ≠ 0} |Lv| / ‖v‖ = sup_{v ∈ V, ‖v‖ ≤ 1} |Lv|.

Definition (Strong convergence in V′). A sequence L_n ∈ V′ is said to converge strongly to L ∈ V′ if lim_{n→∞} ‖L_n − L‖_{V′} = 0.

Definition (Reflexive normed space). A normed space V is said reflexive if V″ = V, where V″ = (V′)′ is the dual of the dual of V, also called the bidual of V.
Elements on topological vector spaces IV

The dual space can be used to define a new topology on V, called the weak topology. The notions of convergence, closure, continuity, ... can be redefined with respect to this new topology.

Definition (Weak convergence in V). A sequence v_n ∈ V is said to converge weakly to v ∈ V if

  lim_{n→∞} L(v_n) = L(v)  for all L ∈ V′.

It is denoted v_n ⇀ v.

Definition (Weakly closed set in V). A subset K ⊂ V is said to be weakly closed if it contains all the limits of its weakly convergent sequences:

  v_n ∈ K and v_n ⇀ v ⟹ v ∈ K.
Elements on topological vector spaces V

Definition (Weakly compact set). A subset K of a normed space V is said to be weakly (sequentially) compact if every sequence (v_n)_{n∈ℕ} ⊂ K contains a subsequence (v_{n_k})_{k∈ℕ} weakly converging to an element in K. A set K whose closure in the weak topology is weakly compact is said weakly relatively compact.

Theorem (Reflexive Banach spaces and bounded sequences). A Banach space V is reflexive if and only if every bounded sequence in V has a subsequence weakly converging to an element in V.

Theorem. A set K in a reflexive Banach space V is bounded and weakly closed if and only if it is weakly compact.

Let us note that the above theorem could be reformulated as follows: a Banach space is reflexive if and only if its unit ball is relatively compact in the weak topology.
Lower semicontinuity I

Definition (Lower semicontinuity). A function J: V → ℝ is lower semicontinuous (l.s.c.) if

  v_n ∈ K and v_n → v ∈ K ⟹ J(v) ≤ lim inf_{n→∞} J(v_n).

Definition (Weak lower semicontinuity). A function J: V → ℝ is weakly lower semicontinuous (w.l.s.c.) if

  v_n ∈ K and v_n ⇀ v ∈ K ⟹ J(v) ≤ lim inf_{n→∞} J(v_n).

Proposition.
  Continuity implies lower semicontinuity, but the converse statement is not true.
  Weak lower semicontinuity implies lower semicontinuity, but the converse statement is not true.
Lower semicontinuity II

Example. Let us prove that the norm function v ∈ V ↦ ‖v‖ ∈ ℝ in a normed space V is w.l.s.c. Let (v_n) ⊂ V be a weakly convergent sequence, v_n ⇀ v. There exists a linear form L ∈ V′ such that L(v) = ‖v‖ and ‖L‖_{V′} = 1 (corollary of the generalized Hahn-Banach theorem). We then have |L(v_n)| ≤ ‖L‖_{V′} ‖v_n‖ = ‖v_n‖, and therefore

  ‖v‖ = L(v) = lim_{n→∞} L(v_n) ≤ lim inf_{n→∞} ‖v_n‖.

If V is an inner product space, we have a simpler proof. Indeed,

  ‖v‖² = (v, v) = lim_{n→∞} (v, v_n) ≤ ‖v‖ lim inf_{n→∞} ‖v_n‖.
General existence results I

We introduce the problem

  inf_{v ∈ K} J(v).

Theorem. Assume V is a reflexive Banach space. Let K ⊂ V denote a bounded and weakly closed set, and let J: V → ℝ denote a weakly l.s.c. function. Then, the problem has a solution in K.

Proof. Denote α = inf_{v ∈ K} J(v), and let (v_n) ⊂ K be a minimizing sequence such that lim_{n→∞} J(v_n) = α. Since K is bounded, (v_n) is a bounded sequence in a reflexive Banach space, and therefore we can extract a subsequence (v_{n_k}) weakly converging to some u ∈ V as k → ∞. Since K is weakly closed, u ∈ K. Since J is w.l.s.c.,

  J(u) ≤ lim inf_{k→∞} J(v_{n_k}) = α,

and therefore u ∈ K is a minimizer of J.
General existence results II

We now remove the boundedness of the set K by adding a coercivity condition on J.

Definition. A functional J: V → ℝ is said coercive if J(v) → +∞ as ‖v‖ → ∞.

Theorem. Assume V is a reflexive Banach space. Let K ⊂ V denote a weakly closed set, and let J: V → ℝ denote a weakly l.s.c. and coercive function. Then, the problem has a solution in K.
General existence results III

Proof. Pick an element v₀ ∈ K with J(v₀) < ∞, and let K₀ = { v ∈ K ; J(v) ≤ J(v₀) }. Since J is coercive, K₀ is bounded. Moreover, K₀ is weakly closed. Indeed, if (v_n) ⊂ K₀ is such that v_n ⇀ v, then v ∈ K since K is weakly closed, and J(v) ≤ lim inf_{n→∞} J(v_n) ≤ J(v₀), and therefore v ∈ K₀. The optimization problem is then equivalent to the optimization problem

  inf_{v ∈ K₀} J(v)

of a w.l.s.c. function on a bounded and weakly closed set, and the previous theorem allows to conclude on the existence of a minimizer.

Lemma (Convex closed sets are weakly closed). A convex and closed set K ⊂ V is weakly closed.

Lemma (Convex l.s.c. functions are w.l.s.c.). A convex and l.s.c. function is also w.l.s.c.
General existence results IV

For convex sets and convex functions, the previous theorems can then be replaced by the following theorem.

Theorem. Assume V is a reflexive Banach space. Let K ⊂ V denote a convex and closed set, and let J: K → ℝ denote a convex l.s.c. function. If either (i) K is bounded or (ii) J is coercive on K, then the minimization problem has a solution in K. Moreover, if J is strictly convex, this solution is unique.

Proof. The existence simply follows from the previous theorems and from the two lemmas above. It remains to prove the uniqueness if J is strictly convex. Assume that u₁, u₂ ∈ K are two solutions such that u₁ ≠ u₂. Since K is convex, ½u₁ + ½u₂ ∈ K, and by strict convexity of J, we have

  J(½u₁ + ½u₂) < ½J(u₁) + ½J(u₂) = min_{v ∈ K} J(v),

which contradicts the fact that u₁ and u₂ are solutions.
General existence results V

In the case of a non-reflexive Banach space V (e.g. V = C⁰([a, b])), the above theorems do not apply. However, for a finite-dimensional subset K, we have the following result.

Theorem. Assume V is a normed space. Let K ⊂ V denote a finite-dimensional convex and closed set, and let J: K → ℝ denote a convex l.s.c. function. If either (i) K is bounded or (ii) J is coercive on K, then the minimization problem has a solution in K. Moreover, if J is strictly convex, this solution is unique.

In fact, we just need the completeness of the set K, and not of the space V. In particular, the reflexivity was only used for the extraction of a weakly convergent subsequence from a bounded sequence in K.
Existence and uniqueness of best approximation I

We apply the general results about optimization to the following best approximation problem. For a given element u ∈ V, where V is a normed space, we want to find the elements in a subset K ⊂ V which are the closest to u. The problem writes

  inf_{v ∈ K} ‖u − v‖.

Denoting J(v) = ‖u − v‖, the problem can then be written under the form inf_{v ∈ K} J(v).

Property. The function J(v) = ‖u − v‖ is convex, continuous (and hence l.s.c., and being convex, w.l.s.c.) and coercive.

We then have the two following existence results.

Theorem. Let V be a reflexive Banach space and K ⊂ V a closed convex subset. Then there exists a best approximation u* ∈ K verifying

  ‖u − u*‖ = min_{v ∈ K} ‖u − v‖.
Existence and uniqueness of best approximation II

Theorem. Let V be a normed space and K ⊂ V a finite-dimensional closed convex subset. Then there exists a best approximation u* ∈ K verifying

  ‖u − u*‖ = min_{v ∈ K} ‖u − v‖.

For the uniqueness of the best approximation, we have to look at the properties of the norm.

Theorem. If there exists p > 1 such that v ↦ ‖v‖^p is strictly convex, then a solution u* of the best approximation problem is unique.

Example. If V is a Hilbert space equipped with the inner product (·, ·) and associated norm ‖·‖, v ↦ ‖v‖² is a strictly convex function. If V = L^p with 1 < p < ∞, v ↦ ‖v‖_{L^p}^p is strictly convex.
Best approximation in Hilbert spaces I

Let V be a Hilbert space equipped with an inner product (·, ·) and associated norm ‖·‖.

Lemma. Let K be a closed convex set in a Hilbert space V. Then u* ∈ K is a best approximation of u ∈ V if and only if

  (u − u*, v − u*) ≤ 0  for all v ∈ K.
Best approximation in Hilbert spaces II

Proof. First suppose that u* ∈ K is a best approximation of u ∈ V. By selecting w = u* + t(v − u*) ∈ K, with v ∈ K and t ∈ (0, 1), we have

  ‖u − u*‖² ≤ ‖u − w‖² = ‖u − u*‖² − 2t (u − u*, v − u*) + t² ‖v − u*‖².

That implies (u − u*, v − u*) ≤ (t/2) ‖v − u*‖² for all t ∈ (0, 1), and taking the limit t → 0, (u − u*, v − u*) ≤ 0 for all v ∈ K. Conversely, if (u − u*, v − u*) ≤ 0 for all v ∈ K, then

  ‖u − v‖² = ‖(u − u*) − (v − u*)‖² = ‖u − u*‖² − 2 (u − u*, v − u*) + ‖v − u*‖² ≥ ‖u − u*‖²

for all v ∈ K, and u* is a best approximation of u.
Best approximation in Hilbert spaces III

Corollary. Let K be a closed convex set in a Hilbert space V. For any u ∈ V, the best approximation in K is unique.

Proof. Let u₁*, u₂* ∈ K be two best approximations of u ∈ V. Then

  (u − u₁*, u₂* − u₁*) ≤ 0  and  (u − u₂*, u₁* − u₂*) ≤ 0.

Adding these inequalities, we obtain (u₂* − u₁*, u₂* − u₁*) = ‖u₁* − u₂*‖² ≤ 0, and therefore u₁* = u₂*.

We then conclude with the following theorem.
Best approximation in Hilbert spaces IV

Theorem. Let K ⊂ V be a nonempty closed convex set in a Hilbert space V. For any u ∈ V, there exists a unique best approximation u* ∈ K defined by

  ‖u − u*‖ = min_{v ∈ K} ‖u − v‖.
Best approximation in Hilbert spaces V

Remark. Let us give another classical proof for the existence of a best approximation, which uses the inner product structure of the space V. Let (u_n)_{n∈ℕ} ⊂ K be a minimizing sequence such that lim_{n→∞} ‖u − u_n‖ = δ := inf_{v ∈ K} ‖u − v‖. Using the parallelogram law satisfied by the norm in an inner product space, we have

  ‖u_n − u_m‖² = 2‖u − u_n‖² + 2‖u − u_m‖² − 4‖u − (u_n + u_m)/2‖².

Since K is convex, we have (u_n + u_m)/2 ∈ K, and therefore ‖u − (u_n + u_m)/2‖ ≥ δ, so that

  ‖u_n − u_m‖² ≤ 2‖u − u_n‖² + 2‖u − u_m‖² − 4δ² → 0  as m, n → ∞,

which proves that (u_n) is a Cauchy sequence. Since V is complete, (u_n) converges to an element u* ∈ V, and since K is closed, u* ∈ K.
Best approximation in Hilbert spaces: Projection I

Definition (Projector on a convex set). The best approximation u* ∈ K of u ∈ V in a closed convex set K is called the projection of u onto K and is denoted u* = P_K(u), where P_K: V → K is called the projection operator of V onto K.

Proposition. The projection operator is monotone,

  (P_K(v) − P_K(u), v − u) ≥ 0  for all u, v ∈ V,

and non-expansive,

  ‖P_K(v) − P_K(u)‖ ≤ ‖v − u‖  for all u, v ∈ V.
Best approximation in Hilbert spaces: Projection II

Proof. From the characterizations of P_K(u) ∈ K and P_K(v) ∈ K, we have respectively

  (u − P_K(u), P_K(v) − P_K(u)) ≤ 0  and  (v − P_K(v), P_K(u) − P_K(v)) ≤ 0.

Adding these inequalities, we obtain

  ‖P_K(v) − P_K(u)‖² ≤ (v − u, P_K(v) − P_K(u)) ≤ ‖v − u‖ ‖P_K(v) − P_K(u)‖,

which proves both the monotonicity and the non-expansiveness.

We now introduce the following particular case where K is a subspace of V.
Best approximation in Hilbert spaces: Projection III

Theorem (Projection on linear subspaces). Let K be a complete subspace of V. Then, for any u ∈ V, there exists a unique best approximation u* = P_K(u) ∈ K characterized by

  (u − P_K(u), v) = 0  for all v ∈ K.

In the case where K is a subspace, u − P_K(u) is therefore orthogonal to K, and P_K is called an orthogonal projection operator.

Proof. We have (u − u*, v − u*) ≤ 0 for all v ∈ K. Let w ∈ K. Since K is a subspace, v = u* + w ∈ K and v = u* − w ∈ K, and therefore (u − u*, w) ≤ 0 and (u − u*, −w) ≤ 0, which gives (u − u*, w) = 0 for all w ∈ K.
Best approximation in Hilbert spaces: Projection IV

Let us consider that we know an orthonormal basis {φ_i}_{i=0}^n of K = K_n. The projection P_{K_n}(u) is characterized by

  P_{K_n}(u) = Σ_{i=0}^n (u, φ_i) φ_i.

Example (Least squares approximation by polynomials). Let V = L²(−1, 1) and K_n = P_n, the space of polynomials of degree less than or equal to n. An orthonormal basis of K_n is given by the normalized Legendre polynomials {L̂_i}_{i=0}^n defined by

  L̂_i(x) = √((2i + 1)/2) (1/(2^i i!)) dⁱ/dxⁱ (x² − 1)ⁱ.
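A minimal sketch (ours) of this projection using NumPy's Legendre module; the coefficients (u, L_i)/‖L_i‖² are evaluated with a Gauss-Legendre quadrature, and the test function f(x) = |x| is an illustrative choice.

    import numpy as np
    from numpy.polynomial import legendre

    f = lambda x: np.abs(x)
    n = 6
    xq, wq = legendre.leggauss(50)        # quadrature points and weights on (-1, 1)
    coeffs = np.zeros(n + 1)
    for i in range(n + 1):
        Li = legendre.legval(xq, np.eye(n + 1)[i])      # values of L_i at xq
        # c_i = (f, L_i)/||L_i||^2 with ||L_i||^2 = 2/(2i+1) for unnormalized L_i
        coeffs[i] = (2*i + 1) / 2 * np.sum(wq * f(xq) * Li)
    proj = lambda x: legendre.legval(x, coeffs)         # P_Kn f in the Legendre basis
    print(proj(0.5))                                     # close to 0.5 for moderate n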
Best approximation in Hilbert spaces: Projection V

Example (Least squares approximation by trigonometric polynomials). Let V = L²(0, 2π) and K_n the space of trigonometric polynomials of degree less than or equal to n. The best approximation u_n = P_{K_n}(u) is characterized by

  u_n(x) = a₀/2 + Σ_{j=1}^n ( a_j cos(jx) + b_j sin(jx) )

with

  a_j = (u(x), cos(jx)) / (cos(jx), cos(jx)) = (1/π) ∫₀^{2π} u(x) cos(jx) dx,  j ≥ 0,
  b_j = (u(x), sin(jx)) / (sin(jx), sin(jx)) = (1/π) ∫₀^{2π} u(x) sin(jx) dx,  j ≥ 1.

Note that u_n tends to the well-known Fourier series expansion of u.
Weighted L² spaces

Let I ⊂ ℝ and ω: I → ℝ be a weight function which is integrable on I and almost everywhere positive. We introduce the weighted function space

  L²_ω(I) = { v: I → ℝ ; v is measurable on I, ∫_I v(x)² ω(x) dx < ∞ }.

L²_ω(I) is a Hilbert space for the inner product

  (u, v)_ω = ∫_I u(x) v(x) ω(x) dx

and associated norm

  ‖u‖_ω = ( ∫_I u(x)² ω(x) dx )^(1/2).

Two functions u, v ∈ L²_ω(I) are said orthogonal if (u, v)_ω = 0.
Classical orthogonal polynomials I

For a given interval I and weight function ω, a system of orthonormal polynomials {p_n}_{n≥0}, with p_n ∈ P_n(I), can be constructed by applying the Gram-Schmidt procedure to the basis of monomials 1, x, x², ...; it leads to a uniquely defined system of polynomials. In the following table, we indicate the classical families of polynomials for different interval domains I and weight functions ω (with the standard conventions; all weights below are normalized to unit mass):

  Family                      I           ω(x)
  Jacobi                      (0, 1)      x^(a−1) (1 − x)^(b−1) / B(a, b)
  Legendre                    (0, 1)      1
  Chebyshev of first kind     (0, 1)      x^(−1/2) (1 − x)^(−1/2) / B(1/2, 1/2)
  Chebyshev of second kind    (0, 1)      x^(1/2) (1 − x)^(1/2) / B(3/2, 3/2)
  Hermite                     ℝ           exp(−x²/2) / √(2π)
  Laguerre                    (0, ∞)      x^a exp(−x) / Γ(a + 1)
Classical orthogonal polynomials II

Γ denotes the Euler Gamma function, defined by

  Γ(a) = ∫₀^∞ x^(a−1) exp(−x) dx,

and B denotes the Euler Beta function, defined by

  B(a, b) = Γ(a) Γ(b) / Γ(a + b).

Remark. The given weight functions are such that ∫_I ω(x) dx = 1. ω then defines a measure with density dμ(x) = ω(x) dx and with unitary mass. Equivalently, μ (resp. ω) can be interpreted as the probability law (resp. probability density function) of a random variable.
Classical orthogonal polynomials III

Exercise. Construct by the Gram-Schmidt procedure the orthonormal polynomials of degree 0, 1, ..., n for the weight function ω(x) = log(1/x) on the interval I = (0, 1).
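A worked sketch of the first step (our computation, to be checked against the exercise): the moments are

  m_k = ∫₀¹ x^k log(1/x) dx = 1/(k + 1)²,  so m₀ = 1, m₁ = 1/4, m₂ = 1/9.

Since m₀ = 1, p₀(x) = 1 already has unit norm. Then q₁(x) = x − (x, p₀)_ω p₀(x) = x − 1/4, with

  ‖q₁‖²_ω = m₂ − m₁/2 + m₀/16 = 1/9 − 1/8 + 1/16 = 7/144,

so that p₁(x) = (12/√7)(x − 1/4).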
Part VII: Numerical integration
  Basic quadrature formulas
  Gauss quadrature
Numerical integration

Given a function f: (a, b) → ℝ, the aim is to approximate the value of the integral

  I(f) = ∫_a^b f(x) dx

using evaluations of the function,

  I(f) ≈ Σ_{k=0}^n ω_k f(x_k),

or eventually of the function and its derivatives,

  I(f) ≈ Σ_{k=0}^n ω_k f(x_k) + Σ_{k=0}^n ω̃_k f′(x_k) + ⋯ .

These approximations are called quadrature formulas. A quadrature formula is said of interpolation type if it uses only evaluations of the function.
Integration error and precision

We denote by I_n(f) the quadrature formula.

Definition. A quadrature formula has a degree of precision k if it integrates exactly all polynomials of degree less than or equal to k, but not all polynomials of degree k + 1:

  I_n(f) = I(f) for all f ∈ P_k,  and  I_n(f) ≠ I(f) for some f ∈ P_{k+1}.
Basic quadrature formulas

Rectangle (midpoint) formula, precision degree 1:

  ∫_a^b f(x) dx ≈ (b − a) f((a + b)/2).

Trapezoidal formula, precision degree 1:

  ∫_a^b f(x) dx ≈ ((b − a)/2) ( f(a) + f(b) ).

Simpson formula, precision degree 3:

  ∫_a^b f(x) dx ≈ ((b − a)/6) ( f(a) + 4 f((a + b)/2) + f(b) ).
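A quick check of Simpson's degree of precision (our computation) on [0, 1]: for f(x) = x³, the exact integral is 1/4 and Simpson gives (1/6)(0 + 4·(1/2)³ + 1) = 1/4, while for f(x) = x⁴ the exact value 1/5 differs from Simpson's (1/6)(4·(1/2)⁴ + 1) = 5/24. The degree of precision is therefore exactly 3.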
Composite quadrature formulas

In order to compute I(f) = ∫_Ω f(x) dx, we divide the domain Ω into m subdomains Ω_i such that

  I(f) = Σ_{i=1}^m I_i(f),  I_i(f) = ∫_{Ω_i} f(x) dx,

and we introduce a basic quadrature formula on each subdomain, I_i(f) ≈ I_n^i(f).
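A minimal sketch (ours) of the composite trapezoidal rule on m equal subintervals; the test integral is illustrative.

    import numpy as np

    # Composite trapezoidal rule: the basic two-point formula on each subinterval,
    # summed over m equal subintervals of [a, b].
    def composite_trapezoid(f, a, b, m):
        x = np.linspace(a, b, m + 1)
        h = (b - a) / m
        return h * (0.5 * f(x[0]) + np.sum(f(x[1:-1])) + 0.5 * f(x[-1]))

    print(composite_trapezoid(np.sin, 0.0, np.pi, 100))   # close to 2, O(h^2) error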
Gauss quadrature I

We want to approximate the weighted integral of a function f,

  I_w(f) = ∫_a^b f(x) w(x) dx,

where w(x) dx defines a measure of integration. We introduce the function space L²_w(a, b) and its natural inner product

  (f, g)_w = ∫_a^b f(x) g(x) w(x) dx.

A Gauss quadrature formula with n points is defined by

  I_w(f) ≈ I_n^w(f) = Σ_{i=1}^n ω_i f(x_i),

with points and weights such that it integrates exactly all polynomials f ∈ P_{2n−1}(a, b). The x_i (resp. ω_i) are called the Gauss points (resp. Gauss weights) associated with the given measure.
Gauss quadrature II

Theorem. I_n^w(f) = I_w(f) for all f ∈ P_{2n−1}(a, b) if and only if
  (i) the points x_i are such that the polynomial z_n(x) = Π_{i=1}^n (x − x_i) ∈ P_n(a, b) is orthogonal to P_{n−1}(a, b), i.e. (z_n, p)_w = 0 for all p ∈ P_{n−1}(a, b), and
  (ii) the weights are defined by ω_i = I_w(L_i), where L_i is the Lagrange interpolant at x_i, defined by L_i(x) = Π_{j≠i} (x − x_j)/(x_i − x_j).

Corollary. The n Gauss points of an n-point Gauss quadrature are the n roots of the degree n orthogonal polynomial. For (a, b) = (−1, 1) and w(x) = 1, the x_i are the n roots of the degree n Legendre polynomial. For (a, b) = (−∞, ∞) and w(x) = exp(−x²), the x_i are the n roots of the degree n Hermite polynomial.
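A minimal sketch (ours) of an n-point Gauss-Legendre rule on (−1, 1) with w = 1; with n = 5 points, all polynomials up to degree 2n − 1 = 9 are integrated exactly.

    import numpy as np

    x, w = np.polynomial.legendre.leggauss(5)    # 5 Gauss points and weights
    print(np.sum(w * x**8), 2.0 / 9.0)           # exact: integral of x^8 is 2/9
    print(np.sum(w * np.exp(x)), np.exp(1) - np.exp(-1))   # nearly machine precision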