Linear Algebra
A. The Basics:
1. Definitions:
i. Matrix: A matrix is a two-dimensional array (sometimes called a 2-D array). It has
elements, each labeled by its row and column number, i.e., its position in the
array. The matrix A, below, is designated by its elements aij, where the first
subscript indicates the row and the second the column. Singular: matrix;
plural: matrices.
    [ a11 a12 a13 a14 ]
A = [ a21 a22 a23 a24 ]
    [ a31 a32 a33 a34 ]
    [ a41 a42 a43 a44 ]
They are written within brackets, braces, or parentheses; they are NOT written
between bars or angle brackets. Thus one can write:
[ 1 2 ]   { 1 2 }   ( 1 2 )
[ 3 4 ] = { 3 4 } = ( 3 4 )
But NOT
| 1 2 |   < 1 2 >
| 3 4 |   < 3 4 >
A matrix is described by its dimension, which is its number of rows by its number
of columns. The matrix A above is 4 x 4, and the matrices immediately above
are 2 x 2.
ii. Vector: An array in which either the number of rows or columns is 1 (sometimes
called a 1-D array). A column vector can have multiple rows, but only one column:
    [ b1 ]
b = [ b2 ]
    [ b3 ]
    [ b4 ]
A row vector has multiple columns but only one row:
b = [ b1 b2 b3 b4 ]
iii. Scalar: Scalars can be considered 0-dimensional arrays, with only one row and
one column. Thus, a scalar is just a number.
2. Types of Matrices
i. Rectangular: All matrices are rectangular. There is one special rectangular matrix
to note:
a. Zero: A zero matrix can be of any dimension, but is rectangular and has
nothing but zeros in it.
0 0 0 0 0
0 0 0 0 0


0 0 0 0 0
ii. Square: Square matrices have the same number of rows as columns; they are a
particular kind of rectangular matrix. Square matrices have a main diagonal.
This is made up of the elements with the same row and column subscript, for
example a22, b11, d77.
There are some special square matrices:
a. Upper/Lower Triangular: All the elements below/above the main diagonal
are zero.
1 2 3 4 
0 2 3 4 


0 0 3 4 


0 0 0 4 
is an upper triangular matrix
b. Diagonal: All elements EXCEPT those on the main diagonal are zero.
1 0 0 0 
0 2 0 0 


0 0 3 0 


0 0 0 4 
c. Banded: These are matrices where all the non-zero elements lie adjacent to
the main diagonal along diagonal lines. The matrix below is banded and its
bandwidth is 3.
[ 1 4 0 0 0 0 0 ]
[ 1 1 4 0 0 0 0 ]
[ 0 2 1 4 0 0 0 ]
[ 0 0 3 1 4 0 0 ]
[ 0 0 0 4 1 4 0 ]
[ 0 0 0 0 5 1 4 ]
[ 0 0 0 0 0 6 1 ]
d. Identity: A type of diagonal matrix, it has all 1’s on the main diagonal. They
can be of any dimension, but are square.
1 0 0 0 
0 1 0 0 


0 0 1 0 


0 0 0 1 
e. Symmetric: In a symmetric matrix the i-j th component is the same as the j-i th
component. (As we will see later, this means that the matrix is its own
transpose.) In the matrix below, a23 = a32 = 4.
    [ 1 2 3 ]
A = [ 2 6 4 ]
    [ 3 4 1 ]
3. Matrix Arithmetic: Matrices with the same dimensions can be added (+/-) and the
operation is performed term-by-term.
i. Examples:
a. [ 1 2 3 ]   [ -1  2  9 ]   [ 0  4 12 ]
   [ 4 5 6 ] + [  5 -8  0 ] = [ 9 -3  6 ]
   [ 7 8 9 ]   [ -2  6  2 ]   [ 5 14 11 ]
b. [  1 -1 ]   [ 5 5 ]   [ 6 4 ]
   [  2  0 ] + [ 5 5 ] = [ 7 5 ]
   [ -2  1 ]   [ 5 5 ]   [ 3 6 ]
   [  1 -1 ]   [ 5 5 ]   [ 6 4 ]
c. [ 1 2 3 ]   [ -1  2  9 ]   [  2  0 -6 ]
   [ 4 5 6 ] - [  5 -8  0 ] = [ -1 13  6 ]
   [ 7 8 9 ]   [ -2  6  2 ]   [  9  2  7 ]
d. [ 1 2 ]   [ 1 ]
   [ 3 4 ] + [ 1 ]  is NOT defined since the matrices don't have the same dimension.
             [ 0 ]
ii. NOTE: Since vectors are matrices, vectors can also be added (+/-) term-by-term
if they have the same dimension.
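The term-by-term rules above are easy to check numerically. Here is a minimal sketch in Python, assuming the NumPy library is available (it is not part of these notes); the arrays reuse the matrices from examples a and c:

```python
import numpy as np

# The matrices from the addition/subtraction examples above
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
B = np.array([[-1, 2, 9], [5, -8, 0], [-2, 6, 2]])

print(A + B)  # term-by-term sum
print(A - B)  # term-by-term difference
```

Adding arrays of mismatched dimensions (example d) raises an error, just as the rule says.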
4. Scalar multiplication: A matrix multiplied or divided by a scalar is also done term-by-term.
i. Examples:
a. 6 [ 1 2 ]   [  6 12 ]
     [ 3 5 ] = [ 18 30 ]
b. [ 1 2 1 ]       [ 1/2  1  1/2 ]
   [ 2 1 2 ] / 2 = [  1  1/2  1  ]
   [ 1 2 1 ]       [ 1/2  1  1/2 ]
ii. NOTE: Scalar multiplication of a vector is also done term-by-term.
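Scalar multiplication can be checked the same way; a minimal Python sketch, assuming NumPy, using the matrix from example a:

```python
import numpy as np

M = np.array([[1, 2], [3, 5]])
print(6 * M)    # multiplies every element by 6
print(M / 2)    # divides every element by 2
```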
5. Matrix-matrix multiplication: It is based on the connection between systems of
equations and matrices. To form an entry of the product, multiply the terms of a row
of the 1st matrix by the corresponding terms of a column of the 2nd matrix and add:
the first-row, first-column entry of the product comes from row 1 of the 1st matrix
and column 1 of the 2nd, and so on. The i-j th component of the product, AB = C, is
given by
c_ij = sum over k = 1 to n of a_ik b_kj
The dimensions of two matrices multiplied DO NOT have to be the same. HOWEVER, the
number of columns of the first must equal the number of rows of the second. Several
examples can make this summation easier:
i. Examples:
a. [ 1 6 -2 ] [  1 3 ]   [ -15  7 ]
   [ 4 5  1 ] [ -1 2 ] = [   4 26 ]
              [  5 4 ]
b. [ 1 6 2 ] [ 1 ]   [ 19 ]
   [ 4 5 1 ] [ 2 ] = [ 17 ]
             [ 3 ]
 
ii. NOTES:
a. If A is n x p and B is q x r, then they can be multiplied AB, if p = q. The
product will have dimension n x r.
b. This rule also tells us that matrix multiplication is not commutative.
Commutative means that you can do it in any order and the answer doesn’t
change. In matrix multiplication you CANNOT ASSUME that AB= BA.
You cannot make this assumption even with square matrices, unless one is
the identity matrix or the zero matrix. Examples:
a. [ -1 0 ] [ 0  1 ]   [ 0 -1 ]        [ 0  1 ] [ -1 0 ]   [  1  2 ]
   [  1 2 ] [ 3 -2 ] = [ 6 -3 ]  but   [ 3 -2 ] [  1 2 ] = [ -5 -4 ]
b. [ 0  1 ] [ -1 ]
   [ 3 -2 ] [  2 ]  is not defined.
            [  1 ]
c. [  1 2 ] [  4 2 ]   [  2 12 ]        [  4 2 ] [  1 2 ]   [  2  8 ]
   [ -1 0 ] [ -1 5 ] = [ -4 -2 ]  and   [ -1 5 ] [ -1 0 ] = [ -6 -2 ]
d. For the identity matrix it works:
   [  1 2 ] [ 1 0 ]   [  1 2 ]        [ 1 0 ] [  1 2 ]   [  1 2 ]
   [ -1 0 ] [ 0 1 ] = [ -1 0 ]  and   [ 0 1 ] [ -1 0 ] = [ -1 0 ]
e. For the zero matrix it works as well:
   [  1 2 ] [ 0 0 ]   [ 0 0 ]        [ 0 0 ] [  1 2 ]   [ 0 0 ]
   [ -1 0 ] [ 0 0 ] = [ 0 0 ]  and   [ 0 0 ] [ -1 0 ] = [ 0 0 ]
c. Vector-vector multiplication is somewhat special. There are two kinds.
First, if they are considered as matrices, then you can multiply a row vector
(1 x n) times a column vector (n x 1) in the same way that you multiply
matrices when their dimensions are appropriate. Notice that, by the rule
above, a (1 x n) times an (n x 1) has dimension (1 x 1), and thus is a scalar.
This should look familiar; it is the definition of the dot product (scalar
product) from vector analysis. Recall:
a . b = |a||b| cos(θ)
This is not true multiplication in the sense that we usually define
multiplication. We usually want to multiply two things together and get the
same type of thing back. Two scalars multiplied together give a scalar; two
matrices (2-D) multiplied together also give a matrix (2-D). Two vectors (1-D)
multiplied together in this way don't give a 1-D vector; they give a
scalar (0-D). Example:
            [ 0 ]
[ 1 2 3 ]   [ 4 ] = 1(0) + 2(4) + 3(2) = 14
            [ 2 ]
The reverse, multiplying a column vector by a row vector, is rarely done,
but is mathematically feasible.
There is a second "multiplication" for vectors, the cross product. As you
know:
a x b = [ |a||b| sin(θ) ] n
where n is a unit vector in the direction perpendicular to the plane
determined by the two vectors a and b and follows the right-hand rule. This
multiplication can also be performed using the following technique: if
1 
a  2,
3
0 
b  4 then a  b 
2
i j k
1 2 3   i 2 3  j 1 3  k 1 2


4 2
0 2
0 4
0 4 2 
where
each of the 2x2 blocks are formed by removing the ith column and ith row, for example,
and retaining the numbers in the 2 x 2 matrix tha t are left and where
p q
r
s
 ps  qr
so
a  b  (4  12)i  (2  0) j  4  0)k  8i  2 j  4k
Note that this is only defined for three dimensional vectors. Another way is
to write out the vector in ijk components and use the fundamental laws:
i  i  0, j  j  0, k  k  0
i  j  k , j  k  i, i  k  j
j  i  k , k  j  i, k  i   j
Thus for the example above:
a  i  2 j  3k , b  4 j  2k
a  b  i  2 j  3k   4 j  2k 
 4k  2 j  0  4i  12i  0
 8i  2 j  4k
  8
  2
 4 
6. Matrix division: There is no such thing. Anyone who writes, even as a note on the
side, something like
A              [ 1 2 ]        [ 10 0 ]
-   where A =  [ 0 3 ],  B =  [  2 1 ]
B
will have the WHOLE PROBLEM MARKED WRONG.
7. Matrix Inverse: There is a way to get around this division problem. In arithmetic there are
three fundamental concepts that we need to have defined. We need an additive identity, a
multiplicative identity and an inverse.
In scalar arithmetic we have all three. Zero is an additive identity. You add zero to any
other number and you get that number back. The number 1 is a multiplicative identity. 1
times any other number gives you the number back. An inverse, in scalar arithmetic is the
reciprocal of a number. What defines an inverse? A number times its inverse is the
multiplicative identity (1). Thus:
0 + q = q
1 * q = q
q * (1/q) = 1
We have the equivalents in matrix arithmetic.
The zero matrix is the additive identity. The identity matrix is the multiplicative identity.
A matrix can have an inverse, A^-1. Not all matrices have inverses, just as zero doesn't have
a reciprocal.
Finding matrix inverses is the biggest challenge in linear algebra.
i. Some rules.
a. Not all matrices have inverses.
b. Only square matrices can have inverses.
ii. The inverse of a 2 x 2 matrix is easily defined. If
        [ a b ]                    1   [  d -b ]
   A =  [ c d ]    then   A^-1 =  ---  [ -c  a ]
                                  |A|
where |A| is referred to as the determinant of the matrix A and defined for a 2 x 2
matrix as ad - bc. Thus, for example, if
        [  1 3 ]                      1      [ 4 -3 ]   [ 0.4 -0.3 ]
   A =  [ -2 4 ]    then   A^-1 =  --------  [ 2  1 ] = [ 0.2  0.1 ]
                                   4 - (-6)
How can we prove that this is the correct inverse? By definition A * A^-1 = I, so it
can be easily checked that
   [  1 3 ] [ 0.4 -0.3 ]   [  0.4 + 0.6   -0.3 + 0.3 ]   [ 1 0 ]
   [ -2 4 ] [ 0.2  0.1 ] = [ -0.8 + 0.8    0.6 + 0.4 ] = [ 0 1 ]

It is harder, and often computationally expensive, to find the inverse of higher
dimension matrices. We will discuss methods in a future lecture.
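For hand checks, NumPy provides a general inverse routine. A minimal sketch, assuming NumPy, with the 2 x 2 matrix from the example above:

```python
import numpy as np

A = np.array([[1.0, 3.0], [-2.0, 4.0]])
A_inv = np.linalg.inv(A)
print(A_inv)        # the [0.4 -0.3; 0.2 0.1] matrix found above
print(A @ A_inv)    # the identity matrix, up to round-off
```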
8. Vector Basics:
i. Norms: The norm is the magnitude of a vector (i.e. how big it is) and can be
defined in many ways. For most engineering applications, the Euclidean norm (aka
the 2 norm) is used. It is defined as the square root of dot product of the vector
with itself and denoted by double bars:
||a|| = sqrt(a . a)
This is the same thing as squaring all of the components, adding them together, and
taking the square root.
ii. Components: A vector can be represented in array form or also as a magnitude
times a direction. Sometimes this is represented graphically and we need to
compute the array form. This is simply done by using directional cosines. In the 2
dimensional case:
[Figure: a force vector F of magnitude Fmag in the x-y plane, at angle θ from the
x-axis and angle φ from the y-axis]
            [ cos θ ]          [ cos θ ]
F  =  Fmag  [ cos φ ]  =  Fmag [ sin θ ]
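A minimal Python sketch of both ideas, assuming NumPy; the magnitude and angle below are made-up values for illustration:

```python
import numpy as np

a = np.array([3.0, 4.0])
print(np.linalg.norm(a))      # Euclidean (2-) norm: sqrt(9 + 16) = 5
print(np.sqrt(np.dot(a, a)))  # same value, straight from the definition

# Array form from a magnitude and an angle (directional cosines)
Fmag, theta = 10.0, np.deg2rad(30.0)
F = Fmag * np.array([np.cos(theta), np.sin(theta)])
print(F)
```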
B. Advanced Topics: There are some special things that one can define with matrices that aren’t
available with traditional scalars. They are:
1. Transpose: All matrices have a transpose; this is the matrix that results when the rows and
columns of the first one are interchanged. It is denoted by a superscript letter T.
1 5 9 
1 2 3 4 
2 6 10


T

A  5 6 7 8  A  
3 7 11
9 10 11 12


4 8 2 
We can now put this together with our previous stuff to get some more succinct
definitions.
i. Symmetric Matrices: A symmetric matrix is one whose transpose is equal to itself.
This is easily proven by a simple example.
ii. Dot Product: The dot product between two column vectors can be defined with a
transpose and matrix multiplication. Thus:
a . b = a^T b
iii. Norms: We can define the Euclidean norm of a vector with matrix multiplication
and a transpose:
||a|| = sqrt(a^T a)
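These transpose-based definitions carry over directly. A minimal sketch in Python, assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(A.T)                     # rows and columns interchanged

S = np.array([[1, 2, 3], [2, 6, 4], [3, 4, 1]])
print((S == S.T).all())        # True: symmetric means S equals its transpose

a = np.array([[1], [2], [3]])  # column vector
print(a.T @ a)                 # 1 x 1 matrix holding the dot product a.a
```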
2. Properties:
i. Matrix addition has properties:
a. Associative: A + (B + C) = (A + B) + C
b. Commutative: A + B = B + A
ii. Matrix multiplication has some interesting properties.
a. Associative: A(BC) = (AB)C
b. Distributive: A(B + C) = AB + AC and (B + C)D = BD + CD
c. NOT Commutative: FE ≠ EF
3. Determinant: The determinant of a matrix is very useful in finding inverses. It shows
whether or not an inverse exists and helps us find it. A determinant is only defined for
square matrices. It is denoted as "det" or bars. Thus:
det(A) = det A = |A|
The simplest determinant is the 2 x 2. It is defined as:
          [ a b ]
  if A =  [ c d ]   then   det(A) = ad - bc
Determinants of larger matrices can be found by using cofactors and breaking the problem
into smaller subproblems. The determinant of A can be found by:
det A = a_i1 A_i1 + a_i2 A_i2 + ... + a_in A_in
where the cofactor A_ij is the determinant of M_ij with the correct sign:
A_ij = (-1)^(i+j) det M_ij
where M_ij is formed by deleting row i and column j of A. This shows that you can expand
along any row to get the determinant. This procedure is best understood by example.
i. Examples:
       [ 1 2 3 ]
a. det [ 4 5 6 ] = 1 det | 5 6 | - 2 det | 4 6 | + 3 det | 4 5 |
       [ 7 8 9 ]         | 8 9 |         | 7 9 |         | 7 8 |
                 = -3 + 12 - 9 = 0
       [ 1 2 3 ]
b. det [ 4 5 6 ] = -(4) det | 2 3 | + 5 det | 1 3 | - (6) det | 1 2 |
       [ 7 8 9 ]            | 8 9 |         | 7 9 |           | 7 8 |
                 = 24 - 60 + 36 = 0
       [ 1 2 3 ]
c. det [ 4 5 6 ] = 7 det | 2 3 | - 0 + 0 = -21
       [ 7 0 0 ]         | 5 6 |
ii. The determinant has some interesting properties:
a. det(AB) = det(A) det(B)
b. det(A^-1) = 1 / det(A)
c. det(A^T) = det(A)
d. For an inverse of a matrix to exist, the determinant of the matrix must not
be zero. If the determinant is zero, the matrix is called "singular."
e. The solution to Ax = b can be computed via a ratio of two determinants,
with the denominator being the determinant of A and the numerator
obtained by taking the determinant of the matrix formed by replacing the
column you want to solve for with b. This is known as Cramer's rule. Thus
for a 3 x 3 system:
     | b1 a12 a13 |        | a11 b1 a13 |        | a11 a12 b1 |
     | b2 a22 a23 |        | a21 b2 a23 |        | a21 a22 b2 |
     | b3 a32 a33 |        | a31 b3 a33 |        | a31 a32 b3 |
x1 = --------------,  x2 = --------------,  x3 = --------------
         det A                 det A                 det A
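Cramer's rule translates almost line-for-line into code. A minimal sketch, assuming NumPy; the function name cramer_solve and the 3 x 3 system below are made up for illustration:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b               # replace column i with the right-hand side
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 0.0, 1.0], [0.0, 3.0, -1.0], [1.0, 1.0, 1.0]])
b = np.array([3.0, 2.0, 3.0])
print(cramer_solve(A, b))           # agrees with np.linalg.solve(A, b)
```

For large systems this is far more expensive than Gaussian elimination, which is why it is mostly of theoretical interest.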
f. The cross product can be written as a determinant with ijk components:
a^T = [ ax ay az ],  b^T = [ bx by bz ]
              | i  j  k  |
a x b  =  det | ax ay az |
              | bx by bz |
4. Eigenvalues: There are times in engineering where the following problem arises:
Ax = λx
where A is a square matrix, x is a column vector, and λ is a scalar. It happens so much that
mathematicians have given it a special name: the eigenvalue/eigenvector problem. This
problem can be solved as follows:
Ax = λIx
Ax - λIx = 0
(A - λI)x = 0
For this problem to have a nontrivial solution, the determinant of the matrix must be zero.
Why? Let B = (A - λI). Then, if an inverse exists:
B^-1 Bx = B^-1 0
Ix = 0
x = 0
Therefore, we are looking for solutions where an inverse doesn't exist. This happens when
the determinant of the matrix is zero. Thus:
det(A - λI) = 0
The λ's that make this happen are called the eigenvalues. Thus, to find the eigenvalues of
a matrix, we simply have to solve a polynomial that is formed from the determinant. The
eigenvalues also have some interesting properties that will help us later on.
i. Examples: What are the eigenvalues of the following matrices?
        [ -4 -5 ]
a. A =  [  2  3 ]
                                ( [ -4 -5 ]       [ 1 0 ] )
   answer: det(A - λI)  =  det  ( [  2  3 ]  - λ  [ 0 1 ] )
                                | -4-λ   -5  |
                        =  det  |   2   3-λ  |
                        =  (-4 - λ)(3 - λ) + 10
                        =  λ^2 + λ - 2
   λ1 = 1,  λ2 = -2
        [ 1 0 ]
b. A =  [ 0 1 ]
                                ( [ 1 0 ]       [ 1 0 ] )
   answer: det(A - λI)  =  det  ( [ 0 1 ]  - λ  [ 0 1 ] )
                                | 1-λ    0  |
                        =  det  |  0    1-λ |
                        =  (1 - λ)(1 - λ)
   λ1 = 1,  λ2 = 1
You should note that the number of eigenvalues is equal to the size of the
matrix. There can be repeated eigenvalues, just as an equation can have
repeated roots. The problem is that when the matrix is bigger, it is
difficult to solve for the roots without using a numerical routine. We will
address this later in the class.
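For larger matrices, that numerical routine already exists in standard libraries. A minimal Python sketch, assuming NumPy, checked against example a above:

```python
import numpy as np

A = np.array([[-4.0, -5.0], [2.0, 3.0]])
lams = np.linalg.eigvals(A)     # roots of det(A - lambda*I) = 0
print(sorted(lams.real))        # [-2.0, 1.0], matching example a
```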