NUMERICAL METHODS FOR MATHEMATICAL PHYSICS INVERSE PROBLEMS
Lecture 5. Inverse problems for abstract systems
We know that inverse problems can be transformed into problems of finding an extremum, so the practical methods of inverse problem theory are based on optimization methods. Problems of minimization of functionals can be solved by means of gradient methods. We applied them to cases where the functional depends directly on the unknown parameter of the system. However, for the standard optimization control problems this dependence is not direct: the functional depends on the state function, and the state function depends on the unknown parameter (the control) through the state equation. This is the case for the optimization control problems obtained by transforming inverse problems of mathematical physics. So we will try to extend the known minimization methods to optimization control problems.
5.1. Problem statement
We consider an abstract linear system. Let the system be described by the equation
Ay  Bv  f ,
(5.1)
where y is the state function, v is the unknown parameter, A is a linear continuous operator from
the unitary space Y to the unitary space W, B is a linear continuous operator from the unitary
space V to W, and f is a given element of W. We suppose that for every parameter v from the space V
there exists a unique solution y = y[v] from the space Y. We also have the equality
Cy  z ,
(5.2)
where C is a linear operator from Y to the unitary space Z, and z is a given element of Z (the result of
measurement). Determine the following abstract inverse problem.
Problem 5.1. Find a parameter v such that the corresponding state function y[v] satisfies the
equality (5.2).
This problem can be transformed into an optimization control problem. Determine the
functional
I(v) = ||Cy[v] − z||^2 .
Problem 5.2. Minimize the functional I on the space V.
We know the relation between these problems. If u is a solution of Problem 5.1, then it
satisfies the equality (5.2), so the corresponding value of the functional I is zero. This is the
minimum of the functional, because I cannot take negative values. Thus a solution of the
inverse problem is a solution of the optimization control problem. Now let u be a solution of
Problem 5.2. Two cases are possible. If the value I(u) is zero, then the equality (5.2) holds,
so the solution of the optimization control problem is a solution of the inverse problem. If
the minimum of the functional is positive, then the inverse problem is unsolvable; otherwise there would exist
a parameter v with a smaller value of the functional than the minimum. However, the solution of Problem 5.2 can still be
chosen as an approximate solution of Problem 5.1, because the equality (5.2) is then satisfied in the best
possible sense. Hence we can transform the given inverse problem into the optimization control problem.
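The relation between Problems 5.1 and 5.2 can be illustrated on a small finite-dimensional system; the matrices below are hypothetical stand-ins for the operators A, B, C (a sketch, not part of the lecture):

```python
import numpy as np

# Hypothetical finite-dimensional instance: Y = W = R^3, V = Z = R^2.
A = np.diag([2.0, 3.0, 4.0])                        # invertible, so y[v] is unique
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
f = np.array([1.0, 2.0, 3.0])

def y_of(v):
    # State equation (5.1): A y = B v + f
    return np.linalg.solve(A, B @ v + f)

def I(v, z):
    # Functional of Problem 5.2: I(v) = ||C y[v] - z||^2
    r = C @ y_of(v) - z
    return float(r @ r)

# If z is an exact measurement produced by some control u, then I(u, z) = 0,
# i.e. a solution of Problem 5.1 minimizes the functional.
u = np.array([0.5, -1.0])
z = C @ y_of(u)
print(I(u, z))  # 0.0
```

Here z was generated from u itself, so the inverse problem is solvable and the minimum of I is zero; a measurement z outside the range of the map v → Cy[v] would make the minimum positive.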
We will try to use the known optimization methods for solving Problem 5.2.
5.2. Increment of the functional
We would like to use the gradient method for solving Problem 5.2. The general step is the
determination of the derivative of the functional. Determine the value
I (u   h)  Cy[u  h] z
2
  Cy[u  h] z ,Cy[u  h] z  ,
where u and h are points of the space V,  is a number. So we can find the difference
I (u   h)  I (u)   Cy[u  h] z,Cy[u  h] z    Cy[u ] z,Cy[u ]z  
  Cy[u  h]Cy[u ],Cy[u  h]Cy[u ]2 z  .
Using the linearity of the operator C we get
I (u   h)  I (u )  C  y[u  h] y[u ],C  y[u  h] y[u ]2 z  


 2 C  y[u   h]  y[u ],Cy[u ] z  C  y[u  h] y[u ] .
2
(5.3)
Then we would like to determine the difference y[u + σh] − y[u] by means of the equality
(5.1).
Consider the equation (5.1) with the parameters v = u + σh and v = u. After subtraction we have
Ay[u + σh] − Ay[u] = B(u + σh) − Bu.
This equality can be transformed to
A( y[u + σh] − y[u] ) = σBh
(5.4)
because of the linearity of the operators A and B. After scalar multiplication by an arbitrary
function λ from the unitary space W we obtain the equality
( λ, A( y[u + σh] − y[u] ) ) = σ( λ, Bh )  ∀λ ∈ W.
(5.5)
The next transformations of the equalities (5.3) and (5.5) use the definition of the adjoint
operator.
Let L be a linear operator from the unitary space X to the space X. Determine its adjoint
operator.
Definition 5.1. The operator L* on the space X is called the adjoint operator to the operator
L, if it satisfies the equality
( x, Ly ) = ( L*x, y )  ∀x, y ∈ X.
(5.6)
Example 5.1. Matrix. Let X be the n-dimensional Euclidean space R^n. A linear operator on the
space X is a matrix

A = ( a11 a12 ... a1n )
    ( a21 a22 ... a2n )
    ( ...  ...  ... )
    ( an1 an2 ... ann )

Using the equality (5.6), we have

( x, Ay ) = Σ_i x_i (Ay)_i = Σ_i x_i Σ_j a_ij y_j = Σ_i ( Σ_j a_ji x_j ) y_i = ( A*x, y )  ∀x, y ∈ R^n.

So the adjoint operator for the matrix A is the transposed matrix

A* = ( a11 a21 ... an1 )
     ( a12 a22 ... an2 )
     ( ...  ...  ... )
     ( a1n a2n ... ann )

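The adjoint relation of Example 5.1 can be checked numerically; the matrix and vectors below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical check of Example 5.1: (x, Ay) = (A^T x, y) for a real matrix.
rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
y = rng.standard_normal(n)

lhs = x @ (A @ y)      # (x, Ay)
rhs = (A.T @ x) @ y    # (A* x, y) with A* the transposed matrix
print(bool(np.isclose(lhs, rhs)))  # True
```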
Example 5.2. Operator of differentiation. Let X be the space of differentiable functions
on the interval (a, b) with zero values on the boundary. Consider the operator of differentiation
L = d/dt. Determine the adjoint operator. We obtain

( x, Ly ) = ( x, dy/dt ) = ∫_a^b x(t) (dy(t)/dt) dt = [ x(t)y(t) ]_a^b − ∫_a^b (dx(t)/dt) y(t) dt = ( −dx/dt, y )

for all differentiable functions x, y with zero values at the ends of the given interval. So we find
the adjoint operator L* = −d/dt.
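This integration-by-parts identity can be verified by quadrature; the functions below are illustrative choices vanishing at the endpoints of (0, 1):

```python
import numpy as np

# Numerical check of Example 5.2: for functions with zero boundary values,
# (x, dy/dt) = (-dx/dt, y).
t = np.linspace(0.0, 1.0, 20001)
x = np.sin(np.pi * t)            # x(0) = x(1) = 0
dx = np.pi * np.cos(np.pi * t)
y = t**2 * (1.0 - t)             # y(0) = y(1) = 0
dy = 2.0 * t - 3.0 * t**2

def integrate(g, t):
    # composite trapezoidal rule
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))

lhs = integrate(x * dy, t)       # (x, Ly)
rhs = integrate(-dx * y, t)      # (L* x, y) with L* = -d/dt
print(abs(lhs - rhs) < 1e-6)     # True
```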

Example 5.3. Laplace operator. We have the Laplace operator Δ on the space X of smooth
enough functions on the n-dimensional set Ω with boundary S, with zero values on the boundary. We have the equality

( Δu, v ) = ∫_Ω vΔu dΩ = ∫_S ( v ∂u/∂n − u ∂v/∂n ) dS + ∫_Ω uΔv dΩ = ∫_Ω uΔv dΩ = ( u, Δv )

for all functions u and v of X, because of Green's formula and the vanishing of the surface
integral for functions that are zero on S. So the adjoint operator of the Laplace operator is Δ. Thus the Laplace operator is self-adjoint.

Determine the derivative of the functional I. We have the equalities (5.3) and (5.5). Using the
definition of the adjoint operator, we transform (5.3) into the equality

I(u + σh) − I(u) = 2( C*( Cy[u] − z ), y[u + σh] − y[u] ) + ||C( y[u + σh] − y[u] )||^2 .
(5.7)
The formula (5.5) is transformed to
( A*λ, y[u + σh] − y[u] ) = σ( B*λ, h )  ∀λ ∈ W.
(5.8)
The last equality is true for an arbitrary function λ. We choose it so that the left
side of the equality (5.8) is equal to the first term of the right side of (5.7). Then λ is the solution
p of the equation
A*p = 2C*( Cy[u] − z ).
(5.9)
Therefore the equality (5.7) is transformed to
I(u + σh) − I(u) = σ( B*p, h ) + ||C( y[u + σh] − y[u] )||^2 .
(5.10)
Transform the second term of the right side of the equality (5.10). We supposed that our state
equation has a unique solution, so the operator A is invertible. Using the equality (5.4), we have
y[u + σh] − y[u] = A^{-1}( σBh ) = σA^{-1}Bh,
because the inverse operator of a linear operator is linear too. So we have the equality
||y[u + σh] − y[u]|| = ||σA^{-1}Bh|| = |σ| ||A^{-1}Bh||.
It is known that an arbitrary linear continuous operator L satisfies the inequality
||Lz|| ≤ c||z||  ∀z,
where c is a positive constant. So we obtain
||C( y[u + σh] − y[u] )|| ≤ c||y[u + σh] − y[u]|| ≤ c|σ| ||A^{-1}Bh||.
Therefore
σ^{-1} ||C( y[u + σh] − y[u] )||^2 ≤ c^2 σ ||A^{-1}Bh||^2 .
Hence, after division of the equality (5.10) by σ and passage to the limit as σ → 0, we get
( I'(u), h ) = ( B*p, h )  ∀h.
Hence we find the derivative of the functional I at the point u:
I'(u) = B*p.
(5.11)
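The formula I'(u) = B*p can be tested against a difference quotient on a small hypothetical system (all matrices below are illustrative, not taken from the lecture):

```python
import numpy as np

# Check of (5.11): the gradient computed through the adjoint equation (5.9)
# agrees with the difference quotient (I(u + sigma*h) - I(u)) / sigma.
A = np.diag([2.0, 3.0, 4.0, 5.0])
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
f = np.array([1.0, 2.0, 3.0, 4.0])
z = np.array([1.0, 0.0, 0.0])

def I(v):
    y = np.linalg.solve(A, B @ v + f)                   # state equation (5.1)
    r = C @ y - z
    return float(r @ r)

def grad(v):
    y = np.linalg.solve(A, B @ v + f)
    p = np.linalg.solve(A.T, 2.0 * C.T @ (C @ y - z))   # adjoint equation (5.9)
    return B.T @ p                                      # I'(v) = B* p, (5.11)

u = np.array([0.3, -0.7])
h = np.array([1.0, 2.0])
sigma = 1e-6
fd = (I(u + sigma * h) - I(u)) / sigma
print(bool(np.isclose(fd, grad(u) @ h, rtol=1e-4)))  # True
```

Since the functional here is quadratic, the gap between the two quantities is exactly of order σ, which is why a small σ makes them agree closely.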
We know the gradient method for solving the problem of the minimization of a functional I
on a unitary space:
v_{k+1} = v_k − σ_k I'(v_k), k = 0, 1, ...,
(5.12)
where σ_k is a positive iterative parameter. Using the formula (5.11), we can determine the gradient
method for our problem. Let the k-th iteration v_k of the control be known. Then we can find the
corresponding value of the state function y_k from the state equation (5.1):
Ay_k = Bv_k + f.
(5.13)
The next step is the determination of the adjoint state p_k from the adjoint equation (5.9):
A*p_k = 2C*( Cy_k − z ).
(5.14)
The next iteration of the control is determined by the formula (5.12), using the
derivative of the functional from the equality (5.11). We get
v_{k+1} = v_k − σ_k B*p_k, k = 0, 1, ... .
(5.15)
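A minimal sketch of the iteration (5.13)-(5.15) on a toy finite-dimensional system; the matrices, the fixed step σ, and the iteration count are illustrative choices, not prescribed by the method:

```python
import numpy as np

# Gradient method (5.13)-(5.15) for a small linear system A y = B v + f.
A = np.diag([2.0, 3.0, 4.0, 5.0])
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
f = np.array([1.0, 1.0, 1.0, 1.0])

u_exact = np.array([1.0, -2.0])                 # hidden "true" control
z = C @ np.linalg.solve(A, B @ u_exact + f)     # simulated measurement

v = np.zeros(2)
sigma = 1.0                                     # fixed step sigma_k, tuned by hand
for k in range(300):
    y = np.linalg.solve(A, B @ v + f)                   # state equation (5.13)
    p = np.linalg.solve(A.T, 2.0 * C.T @ (C @ y - z))   # adjoint equation (5.14)
    v = v - sigma * (B.T @ p)                           # control update (5.15)

print(v)  # approaches u_exact, so the residual Cy[v] - z tends to zero
```

Because z was generated from u_exact, the minimum of I is zero and the iteration recovers the control; for a general z the iteration still drives I(v_k) toward its (possibly positive) minimum.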
We will use these results for the analysis of inverse problems for stationary systems.
[Note: add convexity of the functional (justification).]