Math323E - Numerical Methods with MatLab
Module 2: Systems of Linear Algebraic Equations
Eduardo E. Descalsota, Jr.

Subtopics
2.1 Introduction
2.2 Methods of Solution
2.3 Matrix Inversion Method
2.4 Gauss Elimination Method
2.5 LU Decomposition Methods
2.6 Gauss-Jordan Method
2.7 Jacobi's Iteration Method
2.8 Gauss-Seidel Iteration Method

Notations
Algebraic Notation:
Matrix Notation:
    or simply Ax = b
    where the coefficients Aij and constants bj are known, and the xi represent the unknowns
Augmented Matrix:
    obtained by adjoining the constant vector b to the coefficient matrix A
    a particularly useful representation of the equations for computational purposes
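As a small illustration (using the 2x2 system that is solved later in this module), the matrix form and the augmented matrix can be set up in MATLAB as:

    A  = [1 3; 4 -1];    % coefficient matrix (the Aij are known)
    b  = [5; 12];        % constant vector (the bj are known)
    Ab = [A b];          % augmented matrix: b adjoined to A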
Uniqueness of Solution
A system of n linear equations in n unknowns
has a unique solution, provided that:
the coefficient matrix is nonsingular, i.e., its determinant |A| ≠ 0
rows and columns of a nonsingular matrix
are linearly independent
no row (or column) is a linear combination of
other rows (or columns)
What if the coefficient matrix is
singular?
the system may have an infinite number of solutions, or no solutions at all, depending on the constant vector
Example 1:
2x + y = 3
4x + 2y = 6
Some of the solutions: (x, y) = (1, 1), (2, -1), (3, -3), (0, 3), (-5, 13), ...
There is an infinite number of combinations of x and y that satisfy both equations.
Example 2:
2x + y = 3
4x + 2y = 0
No Solution:
4x + 2y = 0 is equivalent to 2x + y = 0
hence, any solution that satisfies one
equation cannot satisfy the other one
2.2 Methods of Solution

Classes of methods for solving systems of linear algebraic equations:

Direct methods
    transform the original equations into equivalent equations that can be solved more easily
    the transformation is carried out by applying certain operations

Indirect or iterative methods
    start with a guess of the solution x
    then repeatedly refine the solution until a certain convergence criterion is reached
    less efficient than direct methods due to the large number of operations or iterations required

Direct Methods:
1. Matrix Inverse Method
2. Gauss Elimination Method
3. LU Decomposition Methods
4. Gauss-Jordan Method

Indirect or Iterative Methods:
1. Jacobi's Iteration Method
2. Gauss-Seidel Iteration Method
Advantages and Drawbacks (1/2)
Direct Method Solution
    does not contain any truncation errors
    round-off errors are introduced due to floating-point operations

Advantages and Drawbacks (2/2)
Iterative Method Solution
    more useful for solving a set of ill-conditioned equations
    round-off errors (or even arithmetic mistakes) in one iteration cycle are corrected in subsequent cycles
    the initial guess affects only the number of iterations that are required for convergence
    contains truncation error
    does not always converge to the solution
2.3 Matrix Inversion Method

The Inverse of a Matrix
If A and B are n×n matrices such that AB = BA = I, then B is said to be the inverse of A and is denoted by B = A^-1.
The inverse of a matrix is obtained by dividing its adjoint matrix by its determinant |A|.

Computing the Inverse of a Matrix
    A^-1 = Aadj / Adet
2×2 matrix:
    A = [a  b        Aadj = [ d  -b        Adet = ad – bc
         c  d]               -c   a]
    A^-1 = Aadj / Adet

n×n matrix (e.g., 3×3): start from A·I,
    a11 a12 a13     1 0 0
    a21 a22 a23  ×  0 1 0
    a31 a32 a33     0 0 1
Perform row operations on A such that A is reduced to the identity; applying the same operations to the identity produces A^-1:
    1 0 0      a'11 a'12 a'13
    0 1 0      a'21 a'22 a'23
    0 0 1      a'31 a'32 a'33
    A^-1 = [a'11 a'12 a'13; a'21 a'22 a'23; a'31 a'32 a'33]
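A small MATLAB sketch of this row-operation approach, using an invertible 3×3 matrix chosen here only for illustration (not from the slides):

    A    = [1 2 0; 2 5 1; 0 1 3];
    AI   = [A eye(3)];       % augmented matrix [A | I]
    R    = rref(AI);         % row operations until the left block is I
    Ainv = R(:, 4:6);        % the right block is then A^-1
    norm(Ainv - inv(A))      % check: should be (near) zero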
Requirements for obtaining a unique inverse of a matrix
1. The matrix is a square matrix.
2. The determinant of the matrix is not zero (the matrix is non-singular).
Note: if |A| = 0, the elements of A^-1 approach infinity.
The inverse of a matrix is also defined by the relationship:
    A^-1 A = I

Properties of an Inverted Matrix
1. The inverse of a matrix is unique.
2. The inverse of the product of two matrices is equal to the product of the inverses of the two matrices in reverse order:
    (AB)^-1 = B^-1 A^-1
3. The inverse of a triangular matrix is itself a triangular matrix of the same type.
4. The inverse of a symmetrical matrix is itself a symmetrical matrix.
5. The negative powers of a non-singular matrix are obtained by raising the inverse of the matrix to positive powers: A^-n = (A^-1)^n.
6. The inverse of the transpose of A is equal to the transpose of the inverse of A:
    (A^T)^-1 = (A^-1)^T
Example:
Find the inverse of the matrix

Solving simultaneous linear algebraic equations
Consider a set of three simultaneous linear algebraic equations:
    a11x1 + a12x2 + a13x3 = b1
    a21x1 + a22x2 + a23x3 = b2
    a31x1 + a32x2 + a33x3 = b3
These can be expressed in the matrix form Ax = b, and we obtain the solution for x as:
    x = A^-1 b
Example: Solve the following simultaneous linear equations:
    x + 3y = 5      (1)
    4x – y = 12     (2)

Solving by the elimination method (multiply equation (2) by 3 and add it to (1)):
    x + 3y = 5
    12x – 3y = 36
    13x = 41, so x = 41/13
From eq. (2):
    y = 4x – 12 = 4(41/13) – 12 = 8/13

Solving by the matrix inversion method (MATLAB):
    A = [1 3; 4 -1]
    b = [5 12]'
Using the formula x = A^-1 b:
    x = inv(A) * b
    x =
        3.1538
        0.6154

Problem: Solve
    x – y + 3z = 5
    4x + 2y – z = 0
    x + 3y + z = 5
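Side note (consistent with the x = A\b entry in the MATLAB functions list at the end of this module, not part of the slide above): the backslash operator solves the same system directly and is generally preferred to forming the inverse explicitly:

    A = [1 3; 4 -1];
    b = [5; 12];
    x = A\b        % approximately [3.1538; 0.6154], matching inv(A)*b above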
2.4 Gauss Elimination Method

a popular technique for solving simultaneous linear algebraic equations (Ax = b)
reduces the coefficient matrix into an upper triangular matrix (Ux = c)
consists of two parts: the elimination phase and the solution phase
    Initial form: Ax = b
    Final form: Ux = c
Gauss Elimination Operations
1. Multiplication of one equation by a non-zero constant.
2. Addition of a multiple of one equation to another equation.
3. Interchange of two equations.
Ax = b and Ux = c are equivalent if the sequence of operations produces the new system Ux = c; A is invertible if U is invertible.

Gauss Elimination Process
1. Eliminate x1 from the second and third equations, assuming a11 ≠ 0.
2. Eliminate x2 from the third row, assuming a'22 ≠ 0.
3. Apply back substitution:
    x3 from a''33x3 = b''3
    x2 from a'22x2 + a'23x3 = b'2
    x1 from a11x1 + a12x2 + a13x3 = b1

Pivoting
The Gauss elimination method fails if any one of the pivots becomes zero.
What if a pivot is zero? Solution: interchange the equation with one of its lower equations such that the pivots are not zero.
Example:
Solve the following equations by the Gauss elimination method:
    2x + 4y – 6z = -4     (1)
    x + 5y + 3z = 10      (2)
    x + 3y + 2z = 5       (3)

To eliminate x from equations (2) and (3), multiply each of them by (-2/1) and add equation (1):
    (2) × (-2/1): -2x – 10y – 6z = -20;  adding (1) gives  -6y – 12z = -24
    (3) × (-2/1): -2x – 6y – 4z = -10;   adding (1) gives  -2y – 10z = -14

To eliminate y from the new equation (3), multiply it by (6/-2) and add the new equation (2):
    (-2y – 10z = -14) × (6/-2): 6y + 30z = 42;  adding (-6y – 12z = -24) gives  18z = 18

Using back substitution:
    18z = 18, so z = 1
    -6y – 12z = -24, so y = (24 – 12z)/6 = 4 – 2(1) = 2
    2x + 4y – 6z = -4, so x = 3z – 2y – 2 = 3(1) – 2(2) – 2 = -3
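The elimination and back-substitution phases can be sketched in MATLAB as below; this is a minimal illustration applied to the example above (no pivoting, nonzero pivots assumed), not the lecture's own code:

    A = [2 4 -6; 1 5 3; 1 3 2];
    b = [-4; 10; 5];
    n = length(b);
    for k = 1:n-1                      % elimination phase
        for i = k+1:n
            m = A(i,k)/A(k,k);         % multiplier
            A(i,k:n) = A(i,k:n) - m*A(k,k:n);
            b(i) = b(i) - m*b(k);
        end
    end
    x = zeros(n,1);
    for i = n:-1:1                     % back-substitution phase
        x(i) = (b(i) - A(i,i+1:n)*x(i+1:n)) / A(i,i);
    end
    x   % expect approximately [-3; 2; 1]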
Problem 1:
Use the method of Gaussian elimination to solve the following system of linear equations:
    x1 + x2 + x3 – x4 = 2
    4x1 + 4x2 + x3 + x4 = 11
    x1 – x2 – x3 + 2x4 = 0
    2x1 + x2 + 2x3 – 2x4 = 2

Problem 2:
Using the Gaussian elimination method, solve the system of equations [A]{x} = {b} where
2.5 LU Decomposition Methods
(aka LU Factorization)

LU decomposition is the process of computing L and U for a given A, so that A is expressed as a product of a lower triangular matrix L and an upper triangular matrix U:
    A = LU
Methods: Doolittle's Method, Crout's Method, Cholesky's Method
Constraints: LU decomposition is not unique unless certain constraints are placed on L or U.

General LU process: Ax = b becomes LUx = b.
LU advantage over Gauss elimination: once A is decomposed, Ax = b can be solved for as many constant vectors b as required.

Doolittle's Decomposition Method (1/2)
transforms Ax = b to LUx = b, and then to Ux = y
Consider a 3×3 matrix A and assume that there exist triangular matrices L and U such that, after multiplication, A = LU.

Doolittle's Decomposition Method (2/2)
Setting Ux = y gives Ly = b.
The factors are obtained by applying Gauss elimination:
1st pass: choosing the first row as the pivot row and applying the elementary operations
    row 2 ← row 2 − L21 × row 1 (eliminates A21)
    row 3 ← row 3 − L31 × row 1 (eliminates A31)
2nd pass: choosing the second row as the pivot row
    row 3 ← row 3 − L32 × row 2 (eliminates A32)
Example:
Use Doolittle’s decomposition method
to solve the equations Ax = b, where
DECOMPOSITION PHASE

1st pass: pivot row = row 1
    row2 ← row2 − 1 × row1   (eliminates A21)
    row3 ← row3 − 2 × row1   (eliminates A31)
giving
    1  4   1
    0  2  -2
    0 -9   0
Replacing the eliminated terms with the multipliers:
    A' = [1  4   1
          1  2  -2
          2 -9   0]

2nd pass: pivot row = row 2
    row3 ← row3 − (−4.5) × row2   (eliminates A32)
giving
    1  4   1
    0  2  -2
    0  0  -9
Replacing the eliminated term with the multiplier:
    A'' = [1   4     1
           1   2    -2
           2  -4.5  -9]
SOLUTION PHASE

Forward substitution: Ly = b
    y1 = 7
    y1 + y2 = 13
    2y1 – 4.5y2 + y3 = 5
Solving for y2:
    y2 = 13 – y1 = 13 – 7 = 6
Solving for y3:
    y3 = 5 – 2y1 + 4.5y2 = 5 – 2(7) + 4.5(6) = 18

Backward substitution: Ux = y
    x1 + 4x2 + x3 = 7
    2x2 – 2x3 = 6
    -9x3 = 18
Solving for x3:
    x3 = -2
Solving for x2:
    2x2 = 6 + 2x3 = 6 + 2(-2), so x2 = 2/2 = 1
Solving for x1:
    x1 = 7 – 4x2 – x3 = 7 – 4(1) + 2 = 5
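A compact MATLAB sketch of the same procedure, with A and b reconstructed from the multipliers and the forward-substitution equations above; it assumes no row interchanges are needed and is an illustration only, not the lecture's code:

    A = [1 4 1; 1 6 -1; 2 -1 2];
    b = [7; 13; 5];
    n = length(b);
    for k = 1:n-1                          % decomposition phase
        for i = k+1:n
            A(i,k) = A(i,k)/A(k,k);        % multiplier L(i,k), stored in place
            A(i,k+1:n) = A(i,k+1:n) - A(i,k)*A(k,k+1:n);
        end
    end
    y = zeros(n,1);                        % forward substitution: Ly = b
    for i = 1:n
        y(i) = b(i) - A(i,1:i-1)*y(1:i-1);
    end
    x = zeros(n,1);                        % back substitution: Ux = y
    for i = n:-1:1
        x(i) = (y(i) - A(i,i+1:n)*x(i+1:n)) / A(i,i);
    end
    x   % expect approximately [5; 1; -2]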
Problem:
Solve AX = B with Doolittle's decomposition and compute |A|, where
Crout's Decomposition Method
A = LU, where U has a unit diagonal (in contrast to Doolittle's method, where L has the unit diagonal).

From the equation A = LU, we get:
    l11 = a11        u12 = a12/l11          u13 = a13/l11
    l21 = a21        l22 = a22 – l21u12     u23 = (a23 – l21u13)/l22
    l31 = a31        l32 = a32 – l31u12     l33 = a33 – l31u13 – l32u23

Example: Solve the following set of equations by Crout's method:
    2x + y + 4z = 12
    8x – 3y + 2z = 20
    4x + 11y – z = 33

Decomposition (A = LU):
    l11 = 2,  l21 = 8,  l31 = 4
    l11u12 = 1,  so u12 = 1/2
    l11u13 = 4,  so u13 = 4/2 = 2
    l21u12 + l22 = -3,  so l22 = -3 – 8(1/2) = -7
    l21u13 + l22u23 = 2,  so u23 = (2 – 8(2))/(-7) = 2
    l31u12 + l32 = 11,  so l32 = 11 – 4(1/2) = 9
    l31u13 + l32u23 + l33 = -1,  so l33 = -1 – 4(2) – 9(2) = -27

so
    L = [2   0    0        U = [1  1/2  2
         8  -7    0             0   1   2
         4   9  -27]            0   0   1]

Ly = b (forward substitution):
    2y1 = 12,  so y1 = 6
    8y1 – 7y2 = 20,  so y2 = [20 – 8(6)]/(-7) = 4
    4y1 + 9y2 – 27y3 = 33,  so y3 = [33 – 4(6) – 9(4)]/(-27) = 1

Ux = y (backward substitution):
    z = y3 = 1
    y + 2z = 4,  so y = 4 – 2(1) = 2
    x + (1/2)y + 2z = 6,  so x = 6 – (1/2)(2) – 2(1) = 3

Alternative Crout's Solution (Column Operations): the intermediate tableaus from the slides are not reproduced here; the column operations lead to the same factors L and U and the same solution x = 3, y = 2, z = 1.
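A quick MATLAB check (illustrative, not part of the slides) that the Crout factors found above reproduce A and the solution:

    L = [2 0 0; 8 -7 0; 4 9 -27];
    U = [1 1/2 2; 0 1 2; 0 0 1];
    A = L*U                     % should return [2 1 4; 8 -3 2; 4 11 -1]
    x = U\(L\[12; 20; 33])      % forward then back substitution: expect [3; 2; 1]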
7
Math323E - Numerical Methods with MatLab
Problem:
Solve the following set of equations using Crout's method:
2x1 + x2 + x3 = 7
x1 + 2x2 + x3 = 8
x1 + x2 + 2x3 = 9
Cholesky's Decomposition
A = LL^T, where U = L^T
not a particularly popular means of solving simultaneous equations
but invaluable in certain other applications (e.g., in the transformation of eigenvalue problems)
Limitations:
    requires A to be symmetric, since the matrix product LL^T is symmetric
    involves taking square roots of certain combinations of the elements of A; square roots of negative numbers can be avoided only if A is positive definite

Looking at Cholesky's A = LL^T
Consider a 3×3 matrix:
Example:
Compute Cholesky's decomposition of the matrix.
Note that A is symmetric, so Cholesky's method is applicable.
Equating the given matrix A to LL^T, multiplying the matrices on the right-hand side, and then equating the elements of A = LL^T, we obtain six equations for the six unknown elements of L.
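A minimal MATLAB sketch of Cholesky decomposition and solution; the matrix and right-hand side here are an illustrative symmetric positive definite system, not the one from the slides:

    A = [4 -2 2; -2 2 -4; 2 -4 11];
    b = [6; -10; 27];
    L = chol(A, 'lower');     % lower triangular L with A = L*L'
    y = L\b;                  % forward substitution: Ly = b
    x = (L')\y                % back substitution: L'x = y; expect [1; 2; 3]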
Problems:
1) Solve the equations Ax = b by Cholesky's decomposition method, where:
2) Given the LU decomposition A = LU, determine A and |A|, where:
2.6 Gauss-Jordan Method

an extension of the Gauss elimination method
Ax = b is reduced to a diagonal set Ix = b', where:
    I = a unit (identity) matrix
    Ix = b' is equivalent to x = b', where b' is the solution vector
implements the same series of operations as implemented by the Gauss elimination process
the main difference is that it applies row operations below as well as above the main diagonal

Gauss-Jordan Process
    all off-diagonal elements are reduced to zero
    all main diagonal elements become 1

Example: Solve the following equations by the Gauss-Jordan method.
    x + 3y + 2z = 17     (1)
    x + 2y + 3z = 16     (2)
    2x – y + 4z = 13     (3)

Augmented matrix:
    A|b = [1  3  2 | 17
           1  2  3 | 16
           2 -1  4 | 13]

Eliminating x in rows 2 & 3:
    1  3  2 | 17
    0 -1  1 | -1
    0 -7  0 | -21

Normalizing row 2:
    1  3  2 | 17
    0  1 -1 |  1
    0 -7  0 | -21

Eliminating y in rows 1 & 3:
    1  0  5 | 14
    0  1 -1 |  1
    0  0 -7 | -14

Normalizing row 3:
    1  0  5 | 14
    0  1 -1 |  1
    0  0  1 |  2

Eliminating z in rows 1 & 2:
    1  0  0 | 4
    0  1  0 | 3
    0  0  1 | 2

so x = 4, y = 3, z = 2.
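MATLAB's rref reproduces this Gauss-Jordan reduction of the augmented matrix; a small sketch (not the lecture's code) for the example above:

    Ab = [1 3 2 17; 1 2 3 16; 2 -1 4 13];
    R  = rref(Ab);      % reduced row echelon form: [I | x]
    x  = R(:, end)      % expect [4; 3; 2]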
Problem:
Solve the following systems of equations using the Gauss-Jordan method.
1) x – 2y = -4
   5y + z = -9
   4x – 3z = -10
2) 2x1 + x2 – 3x3 = 11
   4x1 – 2x2 + 3x3 = 8
   -2x1 + 2x2 – x3 = -6

Problem:
Solve the following system of equations
   2x + 6y + z = 7
   x + 2y – z = -1
   5x + 7y – 4z = 9
using (a) Gaussian elimination and (b) Gauss-Jordan elimination.
Iterative (or Indirect) Methods
start with an initial guess of the solution x
then repeatedly improve the solution until the change in x becomes negligible
Known methods: Jacobi, Gauss-Seidel

Pros and Cons of Iterative Methods
+ it is feasible to store only the nonzero elements of the coefficient matrix, making it possible to deal with very large matrices that are sparse
+ iterative methods are self-correcting, meaning that round-off errors (or even arithmetic mistakes) in one iterative cycle are corrected in subsequent cycles
- slower than their direct counterparts, since the required number of iterations can be very large
- do not always converge to the solution; convergence is guaranteed only if the coefficient matrix is diagonally dominant

2.7 Jacobi's Iteration Method

Jacobi Iteration: a scalar illustration
Consider the equation
    3x + 1 = 0
which can be cast into an iterative scheme as
    2x = -x – 1,   i.e.   x = -(x + 1)/2
which can be expressed as
    x(k+1) = -(1/2)x(k) – 1/2
Iterations (x0 is the initial guess):
    x1 = -2^-1 – 2^-1 x0
    x2 = -2^-1 – 2^-1 x1
    x3 = -2^-1 – 2^-1 x2
Will it always converge? Another iterative scheme for the same equation is
    x = -2x – 1,   i.e.   x(k+1) = -2x(k) – 1
Will this converge?

Jacobi's Iteration Method (aka the method of simultaneous displacements)
Approximations and Iterations
Consider a system of three linear equations and assume that a11, a22, and a33 are the largest coefficients, so that each equation can be solved for the unknown multiplying its largest coefficient:
    x1 = (b1 – a12x2 – a13x3)/a11
    x2 = (b2 – a21x1 – a23x3)/a22
    x3 = (b3 – a31x1 – a32x2)/a33
Let the initial approximations be x1(0), x2(0), and x3(0) respectively (it is general practice to assume x1(0) = x2(0) = x3(0) = 0).
Iteration 1: substitute the initial approximations into the right-hand sides to obtain x1(1), x2(1), x3(1).
Iteration 2: substitute the iteration-1 values to obtain x1(2), x2(2), x3(2).
The iteration process is continued until the values of x1, x2, and x3 are found to a pre-assigned degree of accuracy.
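As a quick illustration (not from the slides), the convergent scalar scheme above can be run in MATLAB; the starting value 0 and the 20 iterations are arbitrary choices:

    x = 0;                   % initial guess x0
    for k = 1:20
        x = -0.5*x - 0.5;    % x(k+1) = -(1/2)x(k) - 1/2
    end
    x                        % approaches -1/3, the root of 3x + 1 = 0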
Example:
Solve the following equations by Jacobi's method.
    15x + 3y – 2z = 85
    2x + 10y + z = 51
    x – 2y + 8z = 5

Iteration formulas:
    x(k+1) = (b1 – a12·y(k) – a13·z(k))/a11
    y(k+1) = (b2 – a21·x(k) – a23·z(k))/a22
    z(k+1) = (b3 – a31·x(k) – a32·y(k))/a33

Let x0 = y0 = z0 = 0:
    x1 = b1/a11 = 85/15 = 17/3
    y1 = b2/a22 = 51/10
    z1 = b3/a33 = 5/8
------------------------------------------------------------------
    x2 = [85 – 3(51/10) – (-2)(5/8)]/15 = 4.730
    y2 = [51 – 2(17/3) – 5/8]/10 = 3.904
    z2 = [5 – 17/3 – (-2)(51/10)]/8 = 1.192
------------------------------------------------------------------
    x3 = [85 – 3(3.904) + 2(1.192)]/15 = 5.045
    y3 = [51 – 2(4.730) – 1(1.192)]/10 = 4.035
    z3 = [5 – 1(4.730) + 2(3.904)]/8 = 1.010
------------------------------------------------------------------
    x4 = [85 – 3(4.035) + 2(1.010)]/15 = 4.994
    y4 = [51 – 2(5.045) – 1(1.010)]/10 = 3.990
    z4 = [5 – 1(5.045) + 2(4.035)]/8 = 1.003

Continuing the whole process of iteration:

    Iteration    x        y        z
    1            5.667    5.100    0.625
    2            4.730    3.904    1.192
    3            5.045    4.035    1.010
    4            4.994    3.990    1.003
    5            5.002    4.001    0.998
    6            5.000    4.000    1.000
    7            5.000    4.000    1.000
    8            5.000    4.000    1.000
    9            5.000    4.000    1.000

To check (x = 5, y = 4, z = 1):
    15(5) + 3(4) – 2(1) = 75 + 12 – 2 = 85
    2(5) + 10(4) + 1(1) = 10 + 40 + 1 = 51
    5 – 2(4) + 8(1) = 5 – 8 + 8 = 5

Problem:
Use the Jacobi iterative scheme to obtain the solutions of the system of equations correct to three decimal places.
    x + 2y + z = 0
    3x + y – z = 0
    x – y + 4z = 3
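A short MATLAB sketch of Jacobi's iterations for the worked example above (15x + 3y – 2z = 85, ...); the tolerance and iteration cap are arbitrary choices, not from the slides:

    A = [15 3 -2; 2 10 1; 1 -2 8];
    b = [85; 51; 5];
    x = zeros(3,1);                       % x0 = y0 = z0 = 0
    D = diag(diag(A));                    % diagonal part of A
    R = A - D;                            % off-diagonal part of A
    for k = 1:50
        xnew = D \ (b - R*x);             % all unknowns updated simultaneously
        if norm(xnew - x, inf) < 1e-4, x = xnew; break; end
        x = xnew;
    end
    x   % expect approximately [5; 4; 1]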
Problem:
Use Jacobi iterative scheme to obtain the
solution of the system of equations correct to
two decimal places.
Generalization to an N×N system (the m-th equation):
    am1x1 + am2x2 + … + ammxm + … + amNxN = bm

2.8 Gauss-Seidel Iteration Method
Gauss-Seidel method
aka the method of successive approximations
applicable to predominantly diagonal systems (PDS)
a PDS has large diagonal elements: the absolute value of the diagonal element in each row is larger than the sum of the absolute values of the other elements in that row
Gauss-Seidel generalization formula
Gauss-Seidel vs. Jacobi
Each iteration of Jacobi method updates the
whole set of N variables at a time
GS can speed up the convergence by using
all the most recent values of variables for
updating each variable even in the same
iteration
Example: Solve the following equations by the Gauss-Seidel method.
    8x + 2y – 2z = 8
    x – 8y + 3z = -4
    2x + y + 9z = 12

Let x0 = y0 = z0 = 0.

Iteration 1:
    x1 = b1/a11 = 8/8 = 1
    y1 = (b2 – a21x1)/a22 = [-4 – 1(1)]/(-8) = 5/8 = 0.625
    z1 = (b3 – a31x1 – a32y1)/a33 = [12 – 2(1) – 1(5/8)]/9 = 1.042

Iteration 2:
    x2 = (b1 – a12y1 – a13z1)/a11 = [8 – 2(5/8) + 2(1.042)]/8 = 1.104
    y2 = (b2 – a21x2 – a23z1)/a22 = [-4 – 1(1.104) – 3(1.042)]/(-8) = 1.029
    z2 = (b3 – a31x2 – a32y2)/a33 = [12 – 2(1.104) – 1(1.029)]/9 = 0.974

Iteration 3:
    x3 = (b1 – a12y2 – a13z2)/a11 = [8 – 2(1.029) + 2(0.974)]/8 = 0.986
    y3 = (b2 – a21x3 – a23z2)/a22 = [-4 – 1(0.986) – 3(0.974)]/(-8) = 0.988
    z3 = (b3 – a31x3 – a32y3)/a33 = [12 – 2(0.986) – 1(0.988)]/9 = 1.004

Continuing the iterations:

    Iteration    1       2       3       4       5       6       7
    x            1.000   1.104   0.986   1.004   0.999   1.000   1.000
    y            0.625   1.029   0.988   1.002   0.999   1.000   1.000
    z            1.042   0.974   1.004   0.999   1.000   1.000   1.000
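A short MATLAB sketch of the Gauss-Seidel iterations for the example above (not the lecture's code; the tolerance and iteration cap are arbitrary):

    A = [8 2 -2; 1 -8 3; 2 1 9];
    b = [8; -4; 12];
    n = length(b);
    x = zeros(n,1);                        % x0 = y0 = z0 = 0
    for k = 1:50
        xold = x;
        for i = 1:n
            % use the most recent values of all the other unknowns
            x(i) = (b(i) - A(i,[1:i-1 i+1:n])*x([1:i-1 i+1:n])) / A(i,i);
        end
        if norm(x - xold, inf) < 1e-4, break; end
    end
    x   % expect approximately [1; 1; 1]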
Problem: Solve the following equations
by the Gauss-Seidel method.
1) 4x – y + z = 12
-x + 4y – 2z = -1
x – 2y + 4z = 5
2) 2x – y + 3z = 4
x + 9y – 2z = -8
4x – 8y + 11z = 15
Gauss-Seidel Method
The equations Ax = b in scalar notation:
    Σj Aij xj = bi,   i = 1, 2, …, n
Extracting the term containing xi from the summation sign yields:
    Aii xi + Σ(j≠i) Aij xj = bi
Solving for xi, we get:
    xi = (bi – Σ(j≠i) Aij xj) / Aii
Iterative Scheme
start by choosing the starting vector x
if a good guess is not available, x can be chosen randomly
recompute each element of x, always using the latest available values of xj
the procedure is repeated until the changes in x between successive iteration cycles become sufficiently small

Relaxation
a technique used to improve the convergence of the Gauss-Seidel method
take the new value of xi as a weighted average of its previous value and the value predicted by the Gauss-Seidel formula:
    xi ← ω·xi(Gauss-Seidel) + (1 – ω)·xi(old)
where ω is called the relaxation factor
    ω < 1: interpolation between the old xi and the Gauss-Seidel value (underrelaxation)
    ω > 1: extrapolation (overrelaxation)

The essential elements of a Gauss-Seidel algorithm with relaxation:
1. Carry out k iterations with ω = 1 (k = 10 is reasonable). After the kth iteration, record ∆x(k).
2. Perform an additional p iterations (p ≥ 1) and record ∆x(k+p) after the last iteration.
3. Perform all subsequent iterations with ω = ωopt, where

Convergence Considerations
Convergence of the iterative schemes is ensured if, in each row of the coefficient matrix A, the absolute value of the diagonal element is greater than the sum of the absolute values of the other elements. This is a sufficient, but not a necessary, condition. Using the relaxation technique may be helpful in accelerating the convergence of Gauss-Seidel iteration.
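A minimal sketch of a Gauss-Seidel sweep with relaxation, assuming a user-chosen relaxation factor (here 1.1, an arbitrary value) and the illustrative system from the Gauss-Seidel example; not the lecture's own code:

    A = [8 2 -2; 1 -8 3; 2 1 9];
    b = [8; -4; 12];
    n = length(b); x = zeros(n,1);
    omega = 1.1;                   % omega = 1 reduces to plain Gauss-Seidel
    for k = 1:50
        xold = x;
        for i = 1:n
            gs = (b(i) - A(i,[1:i-1 i+1:n])*x([1:i-1 i+1:n])) / A(i,i);
            x(i) = omega*gs + (1-omega)*x(i);   % weighted average of new and old
        end
        if norm(x - xold, inf) < 1e-4, break; end
    end
    x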
MatLab Functions

x = A\b               returns the solution x of Ax = b, obtained by Gauss elimination
B = inv(A)            returns B as the inverse of A
c = cond(A)           returns the condition number of the matrix A
L = chol(A)           Choleski's decomposition A = LL^T (note: MATLAB's chol returns an upper triangular factor R with A = R'R by default; use chol(A,'lower') for the lower triangular L)
[L,U] = lu(A)         Doolittle's decomposition A = LU
x = lsqr(A,b)         iterative, conjugate-gradient-type solver for Ax = b
S = sparse(A)         converts the full matrix A into a sparse matrix S
A = full(S)           converts the sparse matrix S into a full matrix A
A = spdiags(B,d,n,n)  creates an n×n sparse matrix from the columns of matrix B by placing the columns along the diagonals specified by d
spy(S)                draws a map of the nonzero elements of S
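A brief illustrative use of a few of these functions on an arbitrary symmetric positive definite system (values chosen here, not from the slides):

    A = [4 -2 1; -2 4 -2; 1 -2 4];
    b = [11; -16; 17];
    x = A\b;         % Gauss-elimination-based solve
    c = cond(A);     % condition number of A
    R = chol(A);     % upper triangular factor with A = R'*R
    S = sparse(A);   % sparse storage of A
    spy(S)           % map of the nonzero elements of S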
Exercises: Set 1
1) Determine the inverse of the following matrices:
   a)
   b)
2) Solve the following set of simultaneous linear equations by the matrix inverse method:
   a) 2x + 3y – z = -10
      -x + 4y + 2z = -4
      2x – 2y + 5z = 35
   b) 10x + 3y + 10z = 5
      8x – 2y + 9z = 2
      8x + y – 10z = 35

Exercises: Set 2
1) Solve using Gaussian elimination:
   a) 2x + y – 3z = 11
      4x – 2y + 3z = 8
      -2x + 2y – z = -6
   b) 6x + 3y + 6z = 30
      2x + 3y + 3z = 17
      x + 2y + 2z = 11
2) Solve using the Gauss-Jordan method:
   a) 4x – 3y + 5z = 34
      2x – y – z = 6
      x + y + 4z = 15
   b) 2x – y + z = -1
      3x + 3y + 9z = 0
      3x + 3y + 5z = 4

Exercises: Set 3
1) Solve using Cholesky's method:
   a) 2x – y = 3
      -x + 2y – z = -3
      -y + z = 2
   b) x + y + z = 7
      3x + 3y + 4z = 23
      2x + y + z = 10
2) Solve using Crout's method:
   a) 3x + 2y + 7z = 4
      2x + 3y + z = 5
      3x – 4y + z = 7
   b) x + y + z = 9
      2x – 3y + 4z = 13
      3x + y + 5z = 40

Exercises: Set 4
1) Solve using Jacobi's iteration method:
   a) 2x – y + 5z = 15
      2x + y + z = 7
      x + 3y + z = 10
   b) 20x + y – 2z = 17
      3x + 20y – z = -18
      2x – 3y + 20z = 25
2) Solve using the Gauss-Seidel iteration method:
   a) 4x – 3y + 5z = 34
      2x – y – z = 6
      x + y + 4z = 15
   b) 2x – y + 5z = 15
      2x + y + z = 7
      x + 3y + z = 10

Exercises: Set 5
The electrical network shown can be viewed as consisting of three loops. Applying Kirchhoff's voltage law (Σ voltage drops = Σ voltage sources) to each loop yields the following equations for the loop currents i1, i2 and i3:
   5i1 + 15(i1 − i3) = 220 V
   R(i2 − i3) + 5i2 + 10i2 = 0
   20i3 + R(i3 − i2) + 15(i3 − i1) = 0
Compute the three loop currents for R = 5, 10 and 20.
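One possible MATLAB setup for Set 5 (a sketch only, not part of the exercise statement): collect the loop-current equations above into Ax = b for each value of R and solve with the backslash operator:

    for R = [5 10 20]
        A = [20      0     -15;
             0       R+15  -R;
             -15     -R     35+R];
        b = [220; 0; 0];
        iloop = A\b     % loop currents [i1; i2; i3] for this value of R
    end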