GAUSS-SEIDEL
Das rechentechnische Problem des ökonometrischen Modells (The computational problem of the econometric model)
Gauß-Seidel-Iterationsverfahren (Gauss-Seidel iteration procedure)
Example 1 (Gauss-Seidel iteration: a linear system Ay = b of Girschick)
Example 2 (A nonlinear model of Kevorkian (1981), multiple solutions)
Example 3 (A nonlinear model of Gabay et al.)
Example 4 (Distinct normalizations of a linear model Ax = b of Bell, Gauss-Seidel iteration of a linear model)
Example 5 (Distinct normalizations of a two-equation system: the nonlinear model of Neumann, Porosinski and Szajowski)
Example 6 (Normalization and computation: a linear model Ay = b of Gabay)
(i) The iterative linear problem in general
The iterative solution of linear equations (the eigenvalue-criterion approach) is obviously a subcase of a more general iteration design, i.e. the general application of a Gauss-Seidel (= GS) procedure for solving a linear system of equations
Ax = b (A ∈ R^{n,n} nonsingular, b ∈ R^n, known coefficients).
It consists of three steps:
(i) deciding on the "splitting" of A, i.e. "adding" a nonsingular B ∈ R^{n,n}:
By + (A - B)y = b (x =: y),
(ii) "solving" for the left-hand side y:
y = B^{-1}(B - A)y + B^{-1}b ⇔ y = Cy + c, and
(iii) establishing an iteration sequence (t denoting the index of iteration), i.e.
(0)
y_t = C y_{t-1} + c.
The corresponding iteration converges if and only if ρ(C) < 1 (ρ(C) denoting the spectral radius of the matrix C).
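A minimal numerical sketch of this three-step design might read as follows; it assumes NumPy, and the function name splitting_iteration, the tolerance and the iteration cap are illustrative choices, not part of the original text.

```python
import numpy as np

def splitting_iteration(A, b, B, y0=None, tol=1e-10, max_iter=1000):
    """Solve Ax = b via the splitting iteration (0): y_t = C y_{t-1} + c,
    with C = B^{-1}(B - A) and c = B^{-1} b."""
    C = np.linalg.solve(B, B - A)           # C = B^{-1}(B - A)
    c = np.linalg.solve(B, b)               # c = B^{-1} b
    rho = max(abs(np.linalg.eigvals(C)))    # spectral radius rho(C)
    y = np.zeros(len(b)) if y0 is None else np.array(y0, dtype=float)
    for _ in range(max_iter):
        y_new = C @ y + c
        if np.linalg.norm(y_new - y) < tol:
            return y_new, rho
        y = y_new
    return y, rho                           # the iteration converges iff rho < 1
```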
The GS total step method
A frequent subcase and re-notation of (0) is given by the total step method
(1)
y_t = B y_{t-1} + b
The corresponding iteration converges if and only if ρ(B) < 1.
The splitting is straightforward:
A = A + I_n - I_n ⇔ B := C := I_n - A ⇔ A = I_n - C = I_n - B
lim_{t→∞} y_t = y and y = By + b ⇔ x = y.
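As an illustrative check, assuming NumPy and borrowing the 2x2 system used in the illustration further below, the total step splitting B = I_n - A can be iterated directly:

```python
import numpy as np

# Total step method (1): splitting matrix I_n, hence B = C = I_n - A.
A = np.array([[1.0, 0.5], [0.5, 1.0]])
b = np.array([1.5, 1.5])
B = np.eye(2) - A
print(max(abs(np.linalg.eigvals(B))))   # rho(B) = 0.5 < 1, so the iteration converges
y = np.zeros(2)
for _ in range(100):
    y = B @ y + b                        # y_t = B y_{t-1} + b
print(y)                                 # approaches the solution x = y = (1, 1)
```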
The GS single step method
A second subcase and re-notation of (0) is given by the single step method, a triple splitting, which is presented here for a 2x2 example (see e.g. Stoer/Bulirsch, chapter 8.1):
Ax = b,  A = [a_11  a_12; a_21  a_22],  b = [b_1; b_2]
U = [0  0; a_21  0],  D = [a_11  0; 0  a_22],  O = [0  a_12; 0  0]
U is defined by the elements below the diagonal, D by the (nonzero) diagonal, and O by the elements above the diagonal; therefore we have
Ax = b ⇔ Ux + Dx + Ox = b ⇒ U x_t + D x_t = -O x_{t-1} + b ⇔ (U+D) x_t = -O x_{t-1} + b
⇔ x_t = (U+D)^{-1} (-O x_{t-1} + b) ⇔ x_t = B x_{t-1} + r,  B := -(U+D)^{-1} O,  r := (U+D)^{-1} b
The iteration
(2)
x_t = B x_{t-1} + r
converges if and only if ρ(B) < 1.
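A short sketch of this triple splitting for a general n x n system, assuming NumPy (the helper name gauss_seidel_matrices is an illustrative choice, not from the original text):

```python
import numpy as np

def gauss_seidel_matrices(A, b):
    """Triple splitting A = U + D + O and the resulting single step
    iteration x_t = B x_{t-1} + r of equation (2)."""
    U = np.tril(A, k=-1)                   # elements below the diagonal
    D = np.diag(np.diag(A))                # the (nonzero) diagonal
    O = np.triu(A, k=1)                    # elements above the diagonal
    B = -np.linalg.solve(U + D, O)         # B = -(U+D)^{-1} O
    r = np.linalg.solve(U + D, b)          # r = (U+D)^{-1} b
    rho = max(abs(np.linalg.eigvals(B)))   # the iteration converges iff rho < 1
    return B, r, rho
```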
An illustration is given by the following system of two equations, the solution of which is obvious.
Illustration of the single step method
A = [1  0.5; 0.5  1],  b = [1.5; 1.5],  obviously x = [x_1; x_2] = [1; 1].
There are two iterations.
(i) U = [0  0; 0.5  0],  D = [1  0; 0  1],  O = [0  0.5; 0  0]
∴ -(U+D)^{-1} O = [0  -0.5; 0  0.25]
x_t = -[1  0; 0.5  1]^{-1} [0  0.5; 0  0] x_{t-1} + [1  0; 0.5  1]^{-1} [1.5; 1.5],  x = [x_1; x_2]
⇔ x_t = -[1  0; -0.5  1] [0  0.5; 0  0] x_{t-1} + [1  0; -0.5  1] [1.5; 1.5]
⇔ x_t = [0  -0.5; 0  0.25] x_{t-1} + [1.5; 0.75]
x_{1,t} = -0.5 x_{2,t-1} + 1.5
x_{2,t} = 0.25 x_{2,t-1} + 0.75,   ρ(B) = 0.25 (i.e. convergence)
(ii) With the unknowns reordered as (x_2, x_1), i.e. the first equation normalized on x_2 and the second on x_1:
U = [0  0; 1  0],  D = [0.5  0; 0  0.5],  O = [0  1; 0  0]
∴ -(U+D)^{-1} O = [0  -2; 0  4]
x_t = -[0.5  0; 1  0.5]^{-1} [0  1; 0  0] x_{t-1} + [0.5  0; 1  0.5]^{-1} [1.5; 1.5],  x = [x_2; x_1]
⇔ x_t = -[2  0; -4  2] [0  1; 0  0] x_{t-1} + [2  0; -4  2] [1.5; 1.5]
⇔ x_t = [0  -2; 0  4] x_{t-1} + [3; -3]
x_{2,t} = -2 x_{1,t-1} + 3
x_{1,t} = 4 x_{1,t-1} - 3,   ρ(B) = 4 (i.e. divergence)
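The two normalizations of this illustration can be reproduced numerically; in the following sketch (assuming NumPy) the re-normalization is implemented by reordering the unknowns, i.e. by permuting the columns of A:

```python
import numpy as np

A = np.array([[1.0, 0.5], [0.5, 1.0]])
b = np.array([1.5, 1.5])

for label, perm in (("(i)  order (x1, x2):", [0, 1]),
                    ("(ii) order (x2, x1):", [1, 0])):
    Ap = A[:, perm]                            # re-normalization = reordering the unknowns
    U, D, O = np.tril(Ap, -1), np.diag(np.diag(Ap)), np.triu(Ap, 1)
    B = -np.linalg.solve(U + D, O)             # single step iteration matrix
    r = np.linalg.solve(U + D, b)
    rho = max(abs(np.linalg.eigvals(B)))
    print(label, "rho(B) =", rho)              # 0.25 (convergence) vs. 4.0 (divergence)
```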
General formulation of the single step method
This is the typical recursive Gauss-Seidel iteration set-up of the above linear model
Ax = b ⇔ y_t = B y_{t-1} + β
(for the nonlinear extension see below), written out in detail:
y_1^k = b_{11} y_1^{k-1} + b_{12} y_2^{k-1} + ... + b_{1n} y_n^{k-1} + β_1
y_i^k = b_{i1} y_1^k + b_{i2} y_2^k + ... + b_{i,i-1} y_{i-1}^k + b_{i,i} y_i^{k-1} + b_{i,i+1} y_{i+1}^{k-1} + ... + b_{in} y_n^{k-1} + β_i ,   i = 2, 3, ..., n-1
y_n^k = b_{n1} y_1^k + b_{n2} y_2^k + ... + b_{n,n-1} y_{n-1}^k + b_{n,n} y_n^{k-1} + β_n
(k denoting the iteration index)
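A sketch of one such recursive sweep, assuming NumPy (the name gauss_seidel_sweep is an illustrative choice), makes explicit that component i already uses the current-iteration values of components 1, ..., i-1:

```python
import numpy as np

def gauss_seidel_sweep(B, beta, y):
    """One recursive sweep of y = By + beta as written out above: when row i
    is evaluated, y[0..i-1] already hold the new values y_1^k, ..., y_{i-1}^k,
    while y[i..n-1] still hold the previous values y_i^{k-1}, ..., y_n^{k-1}."""
    y = np.array(y, dtype=float)        # work on a copy of the previous iterate
    for i in range(len(y)):
        y[i] = B[i, :] @ y + beta[i]
    return y
```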
Relaxation
A related device is relaxation. Provided divergence occurs (lim_{t→∞} y_t → ∞), as often happens, a computational strategy (refinement) is to "relax" the iteration to
z_t := C y_{t-1} + c,  y_t := α z_t + (1-α) y_{t-1} ⇔
(3)
y_t = [(1-α)I + αC] y_{t-1} + αc
The finally accepted new iterate is thus a linear combination of the trial step and the previous iterate. Again, for convergence the well-known eigenvalue criterion applies:
0 < ρ((1-α)I + αC) < 1
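A minimal sketch of the relaxed iteration (3), assuming NumPy (the function name relaxed_iteration and the fixed iteration count are illustrative):

```python
import numpy as np

def relaxed_iteration(C, c, alpha, y0, n_iter=100):
    """Relaxed iteration (3): y_t = [(1-alpha) I + alpha C] y_{t-1} + alpha c.
    alpha = 1 reproduces the unrelaxed iteration (0)."""
    M = (1.0 - alpha) * np.eye(len(y0)) + alpha * C
    rho = max(abs(np.linalg.eigvals(M)))   # convergence criterion: rho < 1
    y = np.array(y0, dtype=float)
    for _ in range(n_iter):
        y = M @ y + alpha * c
    return y, rho
```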
However, computing eigenvalues is in general more difficult than solving the equation system itself, and the search for an appropriate α must be done numerically. Relaxation certainly often works, since it can force the spectrum into the unit interval; hence numerical codes recommend it. However, given a nonsingular A, convergence should be obtainable by some normalization in any case; a solution must exist. The problem, however, is to find that unrelaxed normalization, i.e. the appropriate ordering, which is a combinatorial problem of re-sequencing the equations and/or re-normalizing them.
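A brute-force sketch of that combinatorial search, assuming NumPy and restricting itself, for illustration, to re-normalizations expressed as column permutations (re-sequencing the equations could be searched analogously via row permutations):

```python
import itertools
import numpy as np

def find_convergent_normalization(A):
    """Try every ordering of the unknowns (column permutation) and return the
    first one whose single step iteration matrix has spectral radius < 1."""
    for perm in itertools.permutations(range(A.shape[0])):
        Ap = A[:, list(perm)]
        if np.any(np.diag(Ap) == 0):
            continue                                         # normalization needs a nonzero diagonal
        B = -np.linalg.solve(np.tril(Ap), np.triu(Ap, 1))    # -(U+D)^{-1} O
        if max(abs(np.linalg.eigvals(B))) < 1:
            return perm
    return None                                              # no convergent ordering found
```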
(NORMALIZATION, NEWTON)