Lecture 1
Linear algebra
Vectors, matrices
Linear algebra
Encyclopedia Britannica: “a branch of mathematics that is concerned with mathematical structures closed under the operations of addition and scalar multiplication and that includes the theory of systems of linear equations, matrices, determinants, vector spaces, and linear transformations”
Deals with:
– vector
– vector space
– linear transformation of vectors
– matrix
– system of linear equations – solution (Lecture 2)
Vectors in Two Dimensions
• Two-dimensional vectors can be defined as ordered pairs of real numbers (a, b).
• a, b … components
• A vector is graphically represented by a directed line from the origin of a coordinate system to the point (a, b); its length represents the magnitude and its orientation in space represents the direction.
Basic operations
u = (u1, u2); v = (v1, v2)
Equality: u = v if and only if u1 = v1 and u2 = v2
Multiplication by scalar: cv = (cv1, cv2); if c = 0 we get the zero vector 0 = (0, 0)
Addition: u + v = (u1 + v1, u2 + v2)
Subtraction: u − v = (u1 − v1, u2 − v2)
Unit vectors: i = (1, 0); j = (0, 1)
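For illustration, a minimal NumPy sketch of these component-wise operations (the vector values are arbitrary examples):

import numpy as np

u = np.array([2.0, 1.0])                 # u = (u1, u2)
v = np.array([-1.0, 3.0])                # v = (v1, v2)

print(u + v)                             # addition: (u1 + v1, u2 + v2) -> [1. 4.]
print(u - v)                             # subtraction: (u1 - v1, u2 - v2) -> [3. -2.]
print(3 * v)                             # multiplication by scalar c = 3 -> [-3. 9.]
print(0 * v)                             # c = 0 gives the zero vector -> [0. 0.]
i, j = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # unit vectors i, j
print(2 * i + 1 * j)                     # u written as a combination of unit vectors -> [2. 1.]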
Illustration of addition and subtraction
[Figure: vectors u and v, their sum u + v and their difference u − v]
Addition: place the two vectors tail-to-head and draw the resultant vector.
The addition of vectors is commutative (parallelogram law).
The parallelepiped is the analogue in space of the parallelogram in the plane.
Pythagorean theorem expressed in
terms of vectors
c² = a² + b², i.e. |a + b|² = |a|² + |b|² when the vectors a and b are perpendicular and c = a + b.
[Figure: right triangle with perpendicular sides a, b and hypotenuse c = a + b]
Length, dot product
Length: |u| = (u · u)^(1/2) = √(u1² + u2²)
Dot product: u · v = u1v1 + u2v2
(inner product = scalar product)
u · v = v · u
Perpendicular vectors: u · v = 0
Vectors v and w that are perpendicular to each other are orthogonal: their scalar product is equal to zero. The zero vector is perpendicular to all vectors. Example: pp. 194.
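A small NumPy check of length, dot product and perpendicularity, using arbitrary example vectors:

import numpy as np

u = np.array([3.0, 4.0])
v = np.array([-4.0, 3.0])

print(np.sqrt(np.dot(u, u)))             # |u| = sqrt(u1^2 + u2^2) = 5.0
print(np.dot(u, v))                      # u . v = u1*v1 + u2*v2 = -12 + 12 = 0.0
print(np.dot(u, v) == np.dot(v, u))      # u . v = v . u -> True
# since u . v = 0, the vectors u and v are perpendicular (orthogonal)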
Multi-dimensional vectors
A k-dimensional vector y is an ordered collection
of k real numbers y1, y2, . . . , yk , and is
written as y = (y1, y2, . . . , yk).
The numbers y j ( j = 1, 2, . . . , k) are called the
components of the vector y.
Example:
u=(1, −3, 0, 5) is a four-dimensional vector.
Its first component is 1, its second component is −3,
and its third and fourth components are 0 and 5.
Transposition
• Row vector
v = (v1, v2, …, vn)
• Column vector
vT (v written as a column):
[ v1 ]
[ v2 ]
[ ⋮  ]
[ vn ]
Scalar multiplication, addition
Scalar multiplication:
4(3, 0,−1, 8) = (12, 0,−4, 32),
vector addition:
(3, 4, 1,−3) + (1, 3,−2, 5) = (4, 7,−1, 2).
Using both operations, we can make the following type
of calculation:
(1, 0)x1 + (0, 1)x2 + (−3, −8)x3 = (x1, 0) + (0, x2) + (−3x3, −8x3)
= (x1 − 3x3, x2 − 8x3).
The dimension of vectors must be the same (2D, 3D, …).
Linear combination of vectors
• v1, v2, …, vn vectors
• a1, a2, …, an scalars
Linear combination of vectors
• v = a1.v1+ a2.v2+ … + an.vn
• Result of this operation is a vector
• Sum of vectors multiplied by scalars
Linearly independent vectors
• Vectors v1, v2, …, vn are linearly dependent if there
exist a1, a2, …, an not all equal to zero, such that
0 = a1.v1+ a2.v2+ … + an.vn
• Otherwise, these vectors are linearly independent
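An illustrative sketch, assuming NumPy: a linear combination is computed directly, and dependence can be detected by comparing the rank of the matrix whose columns are the vectors with the number of vectors (the vectors below are arbitrary examples, with v3 chosen as 2·v1 + 3·v2):

import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([2.0, 3.0, 7.0])           # v3 = 2*v1 + 3*v2, so the set is dependent

w = 2 * v1 + 3 * v2                      # linear combination with a1 = 2, a2 = 3
print(w)                                 # [2. 3. 7.]

M = np.column_stack([v1, v2, v3])        # vectors as columns of a matrix
print(np.linalg.matrix_rank(M))          # 2 < 3 -> v1, v2, v3 are linearly dependent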
Matrices
An m x n matrix is a rectangular array of numbers in m
rows and n columns whose dimension is m by n.
A is called square if m = n. The numbers aij are referred to as the elements of A.

    [ a11  a12  …  a1n ]
A = [ a21  a22  …  a2n ]
    [  ⋮    ⋮         ⋮ ]
    [ am1  am2  …  amn ]
Row and column vectors
A k-by-1 matrix is called a column vector and a 1-by-k matrix is called a row vector.
The coefficients in row i of the matrix A determine a row vector
Ai = (ai1, ai2, …, ain),
and the coefficients of column j of A determine a column vector
Aj = (a1j, a2j, …, amj).
Column vectors are frequently written horizontally
in angular brackets.
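A brief NumPy illustration of the row/column distinction, using an arbitrary 3-component vector; transposing a 1-by-n row vector gives an n-by-1 column vector:

import numpy as np

v = np.array([[1.0, 2.0, 3.0]])          # row vector, a 1-by-3 matrix
print(v.shape)                           # (1, 3)
print(v.T)                               # column vector vT, a 3-by-1 matrix
print(v.T.shape)                         # (3, 1)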
Matrices
• Main diagonal of matrix
– Entries a11, a22, …, arr, for r = min(m, n)
• Diagonal matrix
– Square matrix whose elements outside the main diagonal are all zero
• Triangular matrix
– Lower (left) and upper (right) triangular matrix
• Identity or unit matrix
    [ 1  0  …  0 ]
E = [ 0  1  …  0 ]
    [ ⋮  ⋮       ⋮ ]
    [ 0  0  …  1 ]
The identity matrix of order m,
written Im (or simply I ) is a square
m-by-m matrix with ones along the diagonal and zeros elsewhere.
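A short NumPy sketch of these special matrices, built from an arbitrary 3-by-3 example:

import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # 3-by-3 matrix with elements 1..9
print(np.diag(A))                        # main diagonal a11, a22, a33 -> [1. 5. 9.]
print(np.diag([1.0, 5.0, 9.0]))          # diagonal matrix built from those entries
print(np.tril(A))                        # lower (left) triangular part of A
print(np.triu(A))                        # upper (right) triangular part of A
print(np.eye(3))                         # identity matrix I of order 3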
Matrix operations
• Transpose
• Addition
• Scalar multiplication
• Matrix multiplication (product)
• Inversion
• Elementary row operations
Transpose
The transpose of a matrix A, denoted AT , is
formed by interchanging the rows and
columns of A; that is,
(AT)ij = aji.
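A minimal NumPy example of this rule, using an arbitrary 2-by-3 matrix:

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # 2-by-3 matrix
print(A.T)                               # AT is 3-by-2
print(A.T[2, 0] == A[0, 2])              # (AT)ij = aji -> True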
Addition
Addition of two matrices A and B, both with
dimension m by n, is defined as
a new matrix C, written C = A + B,
whose elements cij are given by
cij = aij + bij.
The usual matrix addition is defined only for two matrices of the same dimensions.
Scalar multiplication
Scalar multiplication of a matrix A and
a real number a
is defined to be a new matrix B=a.A,
whose elements bij are given by:
bij = a·aij
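A small NumPy sketch of element-wise addition and scalar multiplication, with arbitrary 2-by-2 matrices:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

print(A + B)                             # cij = aij + bij -> [[ 6.  8.] [10. 12.]]
print(2 * A)                             # bij = 2*aij -> [[2. 4.] [6. 8.]]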
Matrix multiplication (product)
The product of an m-by-p matrix A and
a p-by-n matrix B is defined to be
a new m-by-n matrix C,
written C = A.B, whose elements cij are given by:
      p
cij = Σ aik bkj
     k=1
If the number of columns of A does not equal
the number of rows of B, then AB is undefined.
Matrix multiplication
Each element of the matrix product is a scalar product of vectors:
row vector of A and column vector of B
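An illustrative NumPy sketch with arbitrary matrices of compatible dimensions (m = 2, p = 3, n = 2):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # m-by-p, m = 2, p = 3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])               # p-by-n, p = 3, n = 2

C = A @ B                                # m-by-n product, cij = sum over k of aik*bkj
print(C)                                 # [[ 4.  5.] [10. 11.]]
print(np.dot(A[0, :], B[:, 0]))          # c11 as row 1 of A dotted with column 1 of B -> 4.0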
Algebraic rules for matrices
A, B...matrices
a, b...scalars
A + B = B + A (Commutative law)
A + (B + C) = (A + B) + C (Associative law)
A(BC) = (AB)C (Associative law)
A(B + C) = AB + AC (Distributive law)
a(A + B) = aA + aB
(a + b)A = aA + bA
a(bA) = (ab)A
Matrix multiplication is not commutative in general:
AB ≠ BA
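A quick numerical check of non-commutativity and of one distributive law, with arbitrary 2-by-2 matrices:

import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])

print(A @ B)                             # [[2. 1.] [1. 1.]]
print(B @ A)                             # [[1. 1.] [1. 2.]] -> AB != BA
print(np.allclose(A @ (B + B), A @ B + A @ B))   # A(B + C) = AB + AC with C = B -> True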
Inversion of a regular matrix
Given a square m-by-m matrix B,
if there is an m-by-m matrix D such that
D.B = B.D = I,
then D is called the inverse of B and is denoted B−1.
properties of inverses:
• The inverse of a matrix B is unique if it exists.
• I−1 = I since I.I = I
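A minimal NumPy sketch, assuming an arbitrary regular 2-by-2 matrix B:

import numpy as np

B = np.array([[2.0, 1.0],
              [1.0, 1.0]])               # regular (invertible) square matrix

D = np.linalg.inv(B)                     # D = B^-1
print(D)                                 # [[ 1. -1.] [-1.  2.]]
print(np.allclose(D @ B, np.eye(2)))     # D.B = I -> True
print(np.allclose(B @ D, np.eye(2)))     # B.D = I -> True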
Gauss–Jordan elimination
System of linear equations:
Bx = Iy = y
If B has an inverse, then multiplying on the left by B−1 yields:
Ix = B−1y, i.e. x = B−1y.
Goal of the elimination: one variable is isolated in each row.
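A compact Gauss–Jordan sketch in plain Python/NumPy; the function name gauss_jordan, the example system and the simple pivot choice are illustrative assumptions, not the lecture's own procedure:

import numpy as np

def gauss_jordan(B, y):
    """Reduce the augmented matrix [B | y] until each row isolates one variable."""
    m = B.shape[0]
    M = np.hstack([B.astype(float), y.reshape(-1, 1).astype(float)])
    for col in range(m):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # pick a non-zero pivot row
        M[[col, pivot]] = M[[pivot, col]]              # exchange rows
        M[col] = M[col] / M[col, col]                  # scale the row so the pivot is 1
        for row in range(m):
            if row != col:
                M[row] -= M[row, col] * M[col]         # eliminate the variable from other rows
    return M[:, -1]                                    # the last column is the solution x

B = np.array([[2.0, 1.0], [1.0, 1.0]])
y = np.array([3.0, 2.0])
print(gauss_jordan(B, y))                              # [1. 1.]
print(np.linalg.solve(B, y))                           # same result computed by NumPy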
Elementary row (column) operation
• Adding one row to another row
• Multiplying a row by a (non-zero) scalar
• Exchanging two rows
• Releasing a row
• The same operations can be performed on the columns
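A short NumPy illustration of these row operations on an arbitrary 2-by-2 matrix:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

A[1] = A[1] + A[0]                       # add one row to another row
A[0] = 2 * A[0]                          # multiply a row by a scalar
A[[0, 1]] = A[[1, 0]]                    # exchange two rows
print(A)                                 # [[4. 6.] [2. 4.]]
# the same operations can be applied to columns, e.g. A[:, [0, 1]] = A[:, [1, 0]]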
Example