Linear Algebra & Matrices
MfD 2004
María Asunción Fernández Seara
[email protected]
January 21st, 2004
“The beginnings of matrices and determinants go back to the second century BC although traces can be seen back to the fourth century BC”
Scalars, Vectors and Matrices
• Scalar: variable described by a single number (magnitude)
  – Temperature = 20 °C
  – Density = 1 g·cm⁻³
  – Image intensity (pixel value) = 2546 a.u.
• Vector: variable described by magnitude and direction
  – Example: $\mathbf{v} = \begin{pmatrix} v_n \\ v_e \end{pmatrix}$
  – Column vector: $\mathbf{b} = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}$
  – Row vector: $\mathbf{d} = \begin{pmatrix} 3 & 4 & 9 \end{pmatrix}$
• Matrix: rectangular array of scalars
  $A = \begin{pmatrix} 1 & 2 & 3 \\ 5 & 4 & 1 \\ 6 & 7 & 4 \end{pmatrix}$  Square (3 x 3)
  $C = \begin{pmatrix} 1 & 4 \\ 2 & 7 \\ 3 & 8 \end{pmatrix}$  Rectangular (3 x 2)
  $D = \begin{pmatrix} d_{11} & d_{12} & d_{13} \\ d_{21} & d_{22} & d_{23} \\ d_{31} & d_{32} & d_{33} \end{pmatrix}$,  $d_{ij}$: ith row, jth column
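To make these definitions concrete, here is a minimal NumPy sketch; NumPy and the variable names are illustrative assumptions, not part of the original slides:

    import numpy as np

    temperature = 20.0                               # scalar: a single number
    b = np.array([[1], [1], [2]])                    # column vector (3 x 1)
    d = np.array([[3, 4, 9]])                        # row vector (1 x 3)
    A = np.array([[1, 2, 3], [5, 4, 1], [6, 7, 4]])  # square matrix (3 x 3)
    C = np.array([[1, 4], [2, 7], [3, 8]])           # rectangular matrix (3 x 2)
    print(A.shape, C.shape)                          # (3, 3) (3, 2)
    print(A[0, 1])                                   # element a_12 = 2 (NumPy indices start at 0)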
Vector Operations
• Transpose operator
1 
b  1 
2
column
bT  1 1 2
→
1 2 3
A  5 4 1
6 7 4
3 
dT  4
9 
d  3 4 9
row
→
row
1 5 6 
AT  2 4 7
3 1 4
• Outer product = matrix
  $\mathbf{x}\mathbf{y}^T = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \begin{pmatrix} y_1 & y_2 & y_3 \end{pmatrix} = \begin{pmatrix} x_1 y_1 & x_1 y_2 & x_1 y_3 \\ x_2 y_1 & x_2 y_2 & x_2 y_3 \\ x_3 y_1 & x_3 y_2 & x_3 y_3 \end{pmatrix}$
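A short NumPy check of the transpose and the outer product, reusing the vectors and matrix from the slides (a sketch for illustration only):

    import numpy as np

    b = np.array([[1], [1], [2]])                    # column vector
    d = np.array([[3, 4, 9]])                        # row vector
    A = np.array([[1, 2, 3], [5, 4, 1], [6, 7, 4]])

    print(b.T)                                       # [[1 1 2]]  column -> row
    print(d.T)                                       # column vector (3, 4, 9)^T  row -> column
    print(A.T)                                       # [[1 5 6], [2 4 7], [3 1 4]]

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])
    print(np.outer(x, y))                            # 3 x 3 matrix with entries x_i * y_j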
Vector Operations
• Inner product = scalar
  $\mathbf{x}^T\mathbf{y} = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = x_1 y_1 + x_2 y_2 + x_3 y_3 = \sum_{i=1}^{3} x_i y_i$
  In general, $\mathbf{x}^T\mathbf{y} = \begin{pmatrix} x_1 & x_2 & \dots & x_n \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \sum_{i=1}^{n} x_i y_i$
• Length of a vector
  Right-angle triangle, Pythagoras’ theorem:
  $\|\mathbf{x}\| = (x_1^2 + x_2^2)^{1/2}$ in 2D,  $\|\mathbf{x}\| = (x_1^2 + x_2^2 + x_3^2)^{1/2}$ in 3D
  Inner product of a vector with itself = (vector length)²:  $\mathbf{x}^T\mathbf{x} = x_1^2 + x_2^2 + x_3^2 = \|\mathbf{x}\|^2$
  [Figure: right-angle triangle with sides x1 and x2 and hypotenuse ||x||]
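The inner product and the vector length can be checked numerically; this sketch uses arbitrary example vectors (an assumption for illustration):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])

    inner = np.dot(x, y)                         # x^T y = 1*4 + 2*5 + 3*6 = 32
    length = np.linalg.norm(x)                   # ||x|| = sqrt(x1^2 + x2^2 + x3^2)
    print(inner, length)
    print(np.isclose(np.dot(x, x), length**2))   # x^T x = ||x||^2 -> True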
Vector Operations
• Angle between two vectors
  [Figure: vectors x and y in the plane, at angles α and β to the horizontal axis, with θ the angle between them]
  $\cos\alpha = x_1 / \|\mathbf{x}\|$,  $\sin\alpha = x_2 / \|\mathbf{x}\|$,  $\cos\beta = y_1 / \|\mathbf{y}\|$,  $\sin\beta = y_2 / \|\mathbf{y}\|$
  $\cos\theta = \cos(\beta - \alpha) = \cos\beta \cos\alpha + \sin\beta \sin\alpha = \frac{y_1 x_1 + y_2 x_2}{\|\mathbf{x}\| \, \|\mathbf{y}\|} = \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \, \|\mathbf{y}\|}$
  $\mathbf{x}^T\mathbf{y} = \|\mathbf{x}\| \, \|\mathbf{y}\| \cos\theta$
  Orthogonal vectors: $\theta = \pi/2$, so $\mathbf{x}^T\mathbf{y} = 0$
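A numerical sketch of the angle formula and of orthogonality (the example vectors are assumptions chosen for illustration):

    import numpy as np

    x = np.array([1.0, 0.0])
    y = np.array([1.0, 1.0])

    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    print(np.degrees(np.arccos(cos_theta)))      # ~45 degrees between x and y

    u = np.array([1.0, 0.0])
    v = np.array([0.0, 3.0])
    print(np.dot(u, v))                          # 0.0 -> u and v are orthogonal (theta = pi/2)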
Matrix Operations
• Addition (matrices must be of the same size)
– Commutative: A+B=B+A
– Associative: (A+B)+C=A+(B+C)
2 2 1 1 3 3
AB  





2
2
1
1
3
3

 
 

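The addition example and its two properties, checked with NumPy (a sketch; the third matrix used for associativity is an arbitrary assumption):

    import numpy as np

    A = np.array([[2, 2], [2, 2]])
    B = np.array([[1, 1], [1, 1]])
    C = np.array([[5, 0], [0, 5]])                     # arbitrary extra matrix for the associativity check

    print(A + B)                                       # [[3 3], [3 3]]
    print(np.array_equal(A + B, B + A))                # commutative -> True
    print(np.array_equal((A + B) + C, A + (B + C)))    # associative -> True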
Matrix Operations
• Multiplication (number of columns in first matrix = number of rows in second)
  C = A B,  (m x p) = (m x n)(n x p)
  $c_{ij}$ = inner product between ith row in A and jth column in B
  $C = A\,B = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \begin{pmatrix} 7 & 1 \\ 8 & 2 \\ 9 & 3 \end{pmatrix} = \begin{pmatrix} 1 \cdot 7 + 2 \cdot 8 + 3 \cdot 9 & 1 \cdot 1 + 2 \cdot 2 + 3 \cdot 3 \\ 4 \cdot 7 + 5 \cdot 8 + 6 \cdot 9 & 4 \cdot 1 + 5 \cdot 2 + 6 \cdot 3 \end{pmatrix} = \begin{pmatrix} 50 & 14 \\ 122 & 32 \end{pmatrix}$
  (2 x 3) times (3 x 2) gives (2 x 2)
  – Associative: (A B) C = A (B C)
  – Distributive: A (B + C) = A B + A C
  – Not commutative: A B ≠ B A !!!
  – (A B)^T = B^T A^T
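The multiplication example above, reproduced with NumPy, plus a quick check of two of the listed properties (the square matrices P and Q are arbitrary assumptions used to show non-commutativity):

    import numpy as np

    A = np.array([[1, 2, 3], [4, 5, 6]])          # 2 x 3
    B = np.array([[7, 1], [8, 2], [9, 3]])        # 3 x 2

    C = A @ B                                     # 2 x 2
    print(C)                                      # [[ 50  14], [122  32]]
    print(np.array_equal((A @ B).T, B.T @ A.T))   # (A B)^T = B^T A^T -> True

    P = np.array([[1, 2], [3, 4]])
    Q = np.array([[0, 1], [1, 0]])
    print(np.array_equal(P @ Q, Q @ P))           # False: matrix multiplication is not commutative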
Some Definitions …
• Identity Matrix
  $I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$,  I A = A I = A
• Diagonal Matrix
  $D = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 7 \end{pmatrix}$
• Symmetric Matrix
  $B = \begin{pmatrix} 3 & 1 & 0 \\ 1 & 5 & 2 \\ 0 & 2 & 7 \end{pmatrix}$,  B = B^T, i.e. $b_{ij} = b_{ji}$
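These special matrices in NumPy, reusing the slide's examples (a short illustrative sketch):

    import numpy as np

    I = np.eye(3)                                        # identity matrix
    D = np.diag([3, 5, 7])                               # diagonal matrix
    B = np.array([[3, 1, 0], [1, 5, 2], [0, 2, 7]])      # symmetric matrix
    A = np.array([[1, 2, 3], [5, 4, 1], [6, 7, 4]])

    print(np.array_equal(I @ A, A), np.array_equal(A @ I, A))   # I A = A I = A -> True True
    print(np.array_equal(B, B.T))                               # B = B^T -> True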
Matrix Inverse
A^-1 A = A A^-1 = I
3 0 0 
D D 1  0 5 0
0 0 7
1
3

0

0

0
1
5
0

0
1 0 0
 
0   0 1 0  I

1  0 0 1
7 
Properties
A^-1 only exists if A is square (n x n)
If A^-1 exists, then A is non-singular (invertible)
(A B)^-1 = B^-1 A^-1;  B^-1 A^-1 A B = B^-1 B = I
(A^T)^-1 = (A^-1)^T;  (A^-1)^T A^T = (A A^-1)^T = I
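The diagonal example and the two inverse properties, checked numerically (the 2 x 2 matrices used for the property checks are arbitrary assumptions):

    import numpy as np

    D = np.diag([3.0, 5.0, 7.0])
    D_inv = np.linalg.inv(D)                      # diag(1/3, 1/5, 1/7)
    print(np.allclose(D_inv @ D, np.eye(3)))      # D^-1 D = I -> True

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[2.0, 0.0], [1.0, 2.0]])
    print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))  # (A B)^-1 = B^-1 A^-1
    print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))                     # (A^T)^-1 = (A^-1)^T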
Matrix Determinant
$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$,  det(A) = ad − bc
$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$,  $\det(A) = a \det\begin{pmatrix} e & f \\ h & i \end{pmatrix} - b \det\begin{pmatrix} d & f \\ g & i \end{pmatrix} + c \det\begin{pmatrix} d & e \\ g & h \end{pmatrix}$
In general, for A (n x n) = $[a_{ij}]$:  $\det(A) = \sum_{j=1}^{n} a_{1j} (-1)^{(1+j)} M_{1j}$, where $M_{1j}$ is the minor obtained by deleting row 1 and column j of A
Properties
Determinants are defined only for square matrices
If det(A) = 0, A is singular and A^-1 does not exist
If det(A) ≠ 0, A is non-singular and A^-1 exists
http://mathworld.wolfram.com/Determinant.html
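A sketch of the determinant formulas in NumPy, including the cofactor expansion along the first row (the example matrices reuse values from earlier slides; the expansion code is an illustration of the formula above, not the library's internal algorithm):

    import numpy as np

    A2 = np.array([[1.0, 2.0], [3.0, 4.0]])        # [[a, b], [c, d]]
    print(np.linalg.det(A2))                       # ad - bc = 1*4 - 2*3 = -2

    A3 = np.array([[1.0, 2.0, 3.0],
                   [5.0, 4.0, 1.0],
                   [6.0, 7.0, 4.0]])
    # Cofactor expansion along the first row: sum over j of a_1j * (-1)^(1+j) * M_1j
    # (-1)**j is used because Python's j starts at 0 while the formula's j starts at 1
    minors = [np.delete(np.delete(A3, 0, axis=0), j, axis=1) for j in range(3)]
    expansion = sum(A3[0, j] * (-1) ** j * np.linalg.det(minors[j]) for j in range(3))
    print(np.isclose(expansion, np.linalg.det(A3)))    # True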
Matrix Inverse - Calculations
$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$,  $A^{-1} = \begin{pmatrix} x_1 & x_2 \\ x_3 & x_4 \end{pmatrix}$
$A^{-1} A = \begin{pmatrix} x_1 & x_2 \\ x_3 & x_4 \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$
which gives the four equations  $a x_1 + c x_2 = 1$,  $b x_1 + d x_2 = 0$,  $a x_3 + c x_4 = 0$,  $b x_3 + d x_4 = 1$
From the first equation $x_1 = \frac{1 - c x_2}{a}$; substituting into the second, $b \, \frac{1 - c x_2}{a} + d x_2 = 0 \;\Rightarrow\; x_2 = \frac{b}{bc - ad} = \frac{-b}{\det(A)}$, and solving the remaining equations in the same way gives
$A^{-1} = \frac{1}{\det(A)} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$
A general matrix can be inverted using methods such as Gauss-Jordan elimination, Gaussian elimination or LU decomposition
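The closed-form 2 x 2 inverse compared against NumPy's general-purpose routine (a sketch; the numeric matrix is an arbitrary assumption):

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])                  # [[a, b], [c, d]]
    a, b, c, d = A.ravel()

    det_A = a * d - b * c
    A_inv = (1.0 / det_A) * np.array([[d, -b], [-c, a]])    # A^-1 = (1/det A) [[d, -b], [-c, a]]

    print(np.allclose(A_inv, np.linalg.inv(A)))             # matches the general routine -> True
    print(np.allclose(A_inv @ A, np.eye(2)))                # A^-1 A = I -> True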
Another Way of Looking at Matrices…
• Matrix: linear transformation between two vector spaces
A x = y,  A^-1 y = x
$A \mathbf{x} = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 5 \\ 10 \end{pmatrix} = \mathbf{y}$
$A \mathbf{z} = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} \begin{pmatrix} 0 \\ 2.5 \end{pmatrix} = \begin{pmatrix} 5 \\ 10 \end{pmatrix} = \mathbf{y}$
det(A) = 1 x 4 − 2 x 2 = 0
In this case, A is singular and A^-1 does not exist: two different vectors x and z are mapped to the same y, so the transformation cannot be undone.
[Figure: A maps both x and z onto the same vector y]
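The singular example above, reproduced numerically (a sketch): two different inputs give the same output, and asking for the inverse fails.

    import numpy as np

    A = np.array([[1.0, 2.0], [2.0, 4.0]])
    x = np.array([1.0, 2.0])
    z = np.array([0.0, 2.5])

    print(A @ x)                          # [ 5. 10.]
    print(A @ z)                          # [ 5. 10.]  same y from two different vectors
    print(np.linalg.det(A))               # ~0 -> A is singular

    try:
        np.linalg.inv(A)                  # raises LinAlgError for a singular matrix
    except np.linalg.LinAlgError as err:
        print("A is not invertible:", err)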
Other matrix definitions
• Orthogonal matrix
  A = [q1 | q2 | … | qj | … | qn]
  $\mathbf{q}_j^T \mathbf{q}_k = 0$ (if j ≠ k)  and  $\mathbf{q}_j^T \mathbf{q}_j = d_{jj}$
  A^T A = D
• Orthonormal matrix
  A = [q1 | q2 | … | qj | … | qn]
  $\mathbf{q}_j^T \mathbf{q}_k = 0$ (if j ≠ k)  and  $\mathbf{q}_j^T \mathbf{q}_j = 1$
  A^T A = I,  so  A^-1 = A^T
• Matrix rank: number of linearly independent columns or rows
  If rank of A (n x n) = n, then A is non-singular
  [Figure: example sets of columns that are linearly independent vs. linearly dependent]
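A final sketch of rank and of an orthonormal matrix (the 2 x 2 rotation matrix is an illustrative assumption):

    import numpy as np

    A = np.array([[1.0, 2.0], [2.0, 4.0]])         # second column = 2 * first column
    B = np.array([[1.0, 2.0], [3.0, 4.0]])

    print(np.linalg.matrix_rank(A))                # 1 < n -> columns linearly dependent, A singular
    print(np.linalg.matrix_rank(B))                # 2 = n -> columns linearly independent, B non-singular

    theta = np.pi / 4                              # a rotation matrix has orthonormal columns
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(np.allclose(Q.T @ Q, np.eye(2)))         # Q^T Q = I -> True
    print(np.allclose(np.linalg.inv(Q), Q.T))      # Q^-1 = Q^T -> True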