MA 262: Linear Algebra and Differential Equations
Fall 2011, Purdue
Review
Part I: Linear Algebra
1 Core concepts in linear algebra

1.1 Linear dependence and linear independence
Suppose we have a finite nonempty set S = {v1, v2, · · · , vk} in some vector space, constants c1, c2, · · · , ck, and the equation
c1 v1 + c2 v2 + · · · + ck vk = 0.    (1.1)
1. If there exists at least one cj ≠ 0 such that (1.1) holds, then we say the vectors {v1, v2, · · · , vk} are linearly dependent.
2. If the only possible values are c1 = c2 = · · · = ck = 0, then we say the vectors {v1, v2, · · · , vk} are linearly independent.
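To check a concrete set of vectors, one can put them into the columns of a matrix and compare the rank with the number of vectors; the set is linearly independent exactly when the two agree. A small Python sketch with made-up vectors:

```python
import numpy as np

# Candidate vectors v1, v2, v3 in R^3 (made-up examples).
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])   # v3 = v1 + v2, so the set is dependent

# Put the vectors into the columns of a matrix.
V = np.column_stack([v1, v2, v3])

# The set is linearly independent iff rank(V) equals the number of vectors.
rank = np.linalg.matrix_rank(V)
print("rank =", rank)                      # 2
print("independent?", rank == V.shape[1])  # False
```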
Remark 1.1. See problem 2 of Quiz 7, problems 6, 7 and 10 of Midterm Exam Two.
Remark 1.2. Here, the elements v1, · · · , vk can be either vectors (see Definitions 4.5.3 and 4.5.4 on Page 269, Theorem 4.5.14 and Corollary 4.5.15 on Page 274) or functions (see Definitions 4.5.17 and 4.5.18 on Page 275, Theorem 7.2.4 on Page 542).
Remark 1.3. Here are some important concepts related to linear dependence and linear
independence:
1. An expression of the form c1 v1 + c2 v2 + · · · + ck vk is called a linear combination
of {v1 , v2 , · · · , vk }. (See problem 1 of Quiz 7, problem 1 of Quiz 8 and
problem 1 of Midterm Exam One).
2. A set of vectors S = {v1, v2, · · · , vk} is called a basis for a vector space V if
(a) S is linearly independent;
(b) V = span{S} (i.e. every vector in V can be written as a linear combination of vectors in S).
(See problem 2 of Quiz 8.)
3. The number of nonzero rows in any row-echelon form of a matrix A is called the rank of A, denoted rank(A). The so-called rank-nullity theorem is very useful:
rank(A) + nullity(A) = n,    (1.2)
where A is an m × n matrix and the nullity of A is the dimension of nullspace(A) = {x ∈ Rn : Ax = 0}.
See problem 2 of Quiz 4, problem 1 of Midterm Exam One and
problem 5 of Midterm Exam Two.
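A short SymPy sketch of (1.2) on a made-up 3 × 4 matrix:

```python
import sympy as sp

# An example 3 x 4 matrix (m = 3, n = 4); the values are arbitrary.
A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 2],
               [1, 3, 1, 3]])   # row 3 = row 1 + row 2, so the rank is 2

m, n = A.shape
rank = A.rank()
nullity = len(A.nullspace())    # dimension of {x : Ax = 0}

print("rank =", rank, " nullity =", nullity, " n =", n)
assert rank + nullity == n      # the rank-nullity theorem (1.2)
```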
1.2 Singular or nonsingular matrix
Suppose A is an n × n matrix. The following statements are equivalent to each other:
1. A is nonsingular;
2. A is invertible (i.e. ∃ A^{−1} such that AA^{−1} = A^{−1}A = In);
3. det(A) ≠ 0;
4. There are no “zero rows” after Gaussian elimination;
5. A is row equivalent to In;
6. The nonhomogeneous equation Ax = b has a unique solution;
7. The homogeneous equation Ax = 0 has only the trivial solution (nullspace(A) = {0});
8. The column (row) vectors of A are linearly independent (i.e. they form a basis for Rn);
9. rank(A) = n;
10. A^T is nonsingular (invertible).
Similarly, the following statements are also equivalent to each other:
1. A is singular ;
2. A is non-invertible;
3. det(A) = 0;
4. There is at least one “zero row” after Gaussian elimination;
5. A is not row equivalent to In;
6. The nonhomogeneous equation Ax = b has infinitely many solutions or no solution;
7. The homogeneous equation Ax = 0 has nontrivial solutions;
8. The column (row) vectors of A are linearly dependent;
9. rank(A) < n;
10. A^T is singular (non-invertible).
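These equivalences can be spot-checked numerically; below is a minimal sketch with a made-up singular matrix, comparing the determinant, rank, and column-dependence conditions:

```python
import numpy as np

# A made-up 3 x 3 matrix whose third row is the sum of the first two,
# so it should come out singular.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])

det = np.linalg.det(A)
rank = np.linalg.matrix_rank(A)

print("det(A) =", det)        # approximately 0
print("rank(A) =", rank)      # 2 < 3, so the columns are dependent
print("singular?", rank < A.shape[0])
```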
1.3 Kernel and range
Let T : V → W be a linear transformation. Then the kernel of T is defined as
ker(T) = {v ∈ V | T(v) = 0}.
Similar to (1.2), the general rank-nullity theorem is very useful:
dim[ker(T)] + dim[Rng(T)] = dim(V).    (1.3)
Remark 1.4. See problem 2 of Quiz 9, problem 1 of Quiz 10, problems 3,4
of Midterm Exam Two.
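A brief SymPy sketch of (1.3), for the map T(v) = Av with a made-up 3 × 4 matrix A (so T : R4 → R3):

```python
import sympy as sp

# T : R^4 -> R^3 given by T(v) = A v, with A a made-up 3 x 4 matrix.
A = sp.Matrix([[1, 0, 2, 1],
               [0, 1, 1, 1],
               [1, 1, 3, 2]])   # row 3 = row 1 + row 2

kernel = A.nullspace()        # basis of ker(T)
image  = A.columnspace()      # basis of Rng(T)

dim_ker = len(kernel)
dim_rng = len(image)
dim_V   = A.shape[1]          # dimension of the domain R^4

print("dim ker(T) =", dim_ker)      # 2
print("dim Rng(T) =", dim_rng)      # 2
assert dim_ker + dim_rng == dim_V   # the rank-nullity identity (1.3)
```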
2 Useful skills in linear algebra

2.1 Find the determinant of a matrix
We have learned two basic methods to find the determinant of a matrix:
1. Use elementary row (column) operations to simplify the determinant (i.e. to reduce the matrix to a lower (upper) triangular matrix) (see Section 3.2 on page 200 of the textbook);
2. Do cofactor expansion along one row (column) (see Section 3.3 on page 212 of the textbook).
Remark 2.1. There are some important properties of the determinant:
det(A^T) = det(A),
det(A^p B^q) = (det(A))^p (det(B))^q,
det(A^{−1}) = 1/det(A),
det(αA) = α^n det(A).
Remark 2.2. See Homework 6, problem 2 of Quiz 5 and problem 8 of
Midterm Exam Two.
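For illustration, the cofactor expansion can be coded directly (a teaching sketch, not an efficient algorithm) and compared against numpy.linalg.det:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

A = [[2.0, 1.0, 2.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 2.0]]

print(det_cofactor(A))             # 3.0
print(np.linalg.det(np.array(A)))  # approximately 3.0
```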
2.2 Find the inverse of a matrix
We have already learned two methods to compute the inverse of a matrix A:
1. Gauss-Jordan technique (see Section 2.6 on page 166). Note that
(A | I) → (B | P) → (I | A^{−1}),
⇒ PA = B,
⇒ A = P^{−1}B.
See problem 4 of Midterm Exam One.
2. Adjoint method (see Section 3.3 on page 217). The main result is
A^{−1} = (1/det(A)) adj(A),    (2.1)
adj(A) = M_C^T,    (2.2)
where M_C is the matrix of cofactors of A.
See problem 1 of Quiz 6, problem 6 of Midterm Exam One and problem
11 of Midterm Exam Two.
Remark 2.3. No matter which method you use, please verify your result by checking that AA^{−1} = A^{−1}A = I.
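A SymPy sketch of the adjoint method (2.1)–(2.2), followed by the check recommended in Remark 2.3; the matrix is the t = 2 matrix from Part III:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 2],
               [0, 1, 1],
               [1, 0, 2]])

# adj(A) = (matrix of cofactors)^T; SymPy calls this the adjugate.
adjA = A.adjugate()
A_inv = adjA / A.det()          # formula (2.1)

print(A_inv)
# Check the result, as suggested in Remark 2.3.
assert A * A_inv == sp.eye(3)
assert A_inv * A == sp.eye(3)
```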
Remark 2.4. There are some important properties of the inverse:
(A^{−1})^{−1} = A,
(AB)^{−1} = B^{−1}A^{−1},
(A^T)^{−1} = (A^{−1})^T.
2.3 Solve the linear system Ax = b
We have already learned two methods to solve the linear system Ax = b:
1. Do Gaussian elimination on the augmented matrix A# = (A | b) (see Section 2.5 on page 150);
2. Cramer’s rule (see Section 3.3 on page 220).
Remark 2.5. No matter which method you use, please verify your result by checking that Ax = b.
Remark 2.6. Let us summarize the relation between the coefficients of a system of equations and its solutions. Suppose that after elementary row operations we get an augmented matrix of the form

       ( 1  ∗  ∗ | ∗ )
A# =   ( 0  1  ∗ | ∗ ) .
       ( 0  0  p | q )

Then we can conclude that
1. it has no solution if p = 0, q ≠ 0;
2. it has infinitely many solutions if p = 0, q = 0;
3. it has a unique solution if p ≠ 0, for any q.
See problem 1 of Quiz 4 and Problem 3 of Midterm Exam One.
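A minimal sketch of both methods on a made-up nonsingular system, together with the check from Remark 2.5:

```python
import numpy as np

# A made-up nonsingular system A x = b.
A = np.array([[2.0, 1.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])
b = np.array([5.0, 2.0, 3.0])

# Method 1: library solver (Gaussian elimination with pivoting).
x_solve = np.linalg.solve(A, b)

# Method 2: Cramer's rule, x_i = det(A_i) / det(A),
# where A_i is A with column i replaced by b.
detA = np.linalg.det(A)
x_cramer = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b
    x_cramer[i] = np.linalg.det(Ai) / detA

print(x_solve, x_cramer)
# Check the result, as suggested in Remark 2.5.
assert np.allclose(A @ x_solve, b)
assert np.allclose(x_solve, x_cramer)
```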
2.4 Eigenvalues and eigenvectors
There are two steps to find the eigenvalues and eigenvectors of a matrix A.
1. Step One: find the roots of the characteristic polynomial
p(λ) = det(λIn − A) = 0.    (2.3)
The roots λ1, · · · , λn are called the eigenvalues of A.
2. Step Two: solve the linear system
(λj I − A)x = 0.    (2.4)
The nonzero solutions vj of (2.4) are called the eigenvectors of A.
The relation between the eigenvalues and eigenvectors is
Avj = λj vj.
Remark 2.7. See Problem 1 of Quiz 9 and Problem 12 of Midterm Exam
Two.
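A quick numerical sketch with numpy on a made-up 2 × 2 matrix, checking Avj = λj vj for each computed pair:

```python
import numpy as np

# A made-up matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Step One: eigenvalues; Step Two: eigenvectors (columns of V).
eigvals, V = np.linalg.eig(A)

for j in range(len(eigvals)):
    v = V[:, j]
    # Check the defining relation A v_j = lambda_j v_j.
    assert np.allclose(A @ v, eigvals[j] * v)

print("eigenvalues:", eigvals)   # 3 and 1
```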
Part II: Differential Equations
3 First order differential equations
Generally speaking, there are 5 types of first order differential equations.
1. Separable equations. See Quiz 1.
p(y)y′ = q(x),
⇒ p(y) dy/dx = q(x),
⇒ p(y)dy = q(x)dx,
⇒ ∫ p(y)dy = ∫ q(x)dx + C,
where C is a constant.
2. Standard form of first order linear differential equation. See problem 1 of Quiz
2 and problem 8 of Midterm Exam One.
dy/dx + p(x)y = q(x),    (3.1)
⇒ y(x) = e^{−∫p(x)dx} ( ∫ q(x) e^{∫p(x)dx} dx + C ),    (3.2)
where C is a constant.
3. Homogeneous equation. See problem 1.2 of Quiz 3 and problem 8 of
Midterm Exam One.
The homogeneous equation looks like y′ = f(x, y) where f(tx, ty) = f(x, y). The technique here is to make the change of variables y = xv(x) and to reduce the problem to a separable equation. Note that
y = xv(x),
⇒ dy = v dx + x dv,
⇒ dy/dx = v + x dv/dx.    (by the product rule)
4. Bernoulli equation. See problem 1.1 of Quiz 3.
The general form of the Bernoulli equation is
dy/dx + p(x)y = q(x)y^n.    (3.3)
By the change of variables
u(x) = y(x)^{1−n},
Eq. (3.3) becomes
du/dx + (1 − n)p(x)u = (1 − n)q(x).    (3.4)
We can use formulas (3.1)–(3.2) to solve Eq. (3.4).
5. Exact equation. See problem 2 of Quiz 2 and problem 7 of Midterm
Exam One.
The exact equation is in this form:
M(x, y)dx + N(x, y)dy = 0,    (3.5)
where M(x, y) and N(x, y) satisfy
My = Nx .
For the details, see section 1.9 on Page 79.
Remark 3.1. For the clarification of these 5 types of equations, see my solution to
Homework 3.
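To illustrate the integrating-factor formula (3.1)–(3.2) for type 2 above, here is a SymPy sketch on the made-up equation y′ + 2y = e^x, cross-checked against dsolve:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# A made-up first order linear equation: y' + 2 y = exp(x),
# i.e. p(x) = 2 and q(x) = exp(x) in the standard form (3.1).
p = sp.Integer(2)
q = sp.exp(x)

# Formula (3.2): y = e^{-∫p dx} ( ∫ q e^{∫p dx} dx + C ).
C = sp.symbols('C')
mu = sp.exp(sp.integrate(p, x))                   # integrating factor
y_formula = (sp.integrate(q * mu, x) + C) / mu
print(sp.simplify(y_formula))                     # exp(x)/3 + C*exp(-2*x)

# Cross-check with SymPy's ODE solver.
sol = sp.dsolve(sp.Eq(y(x).diff(x) + 2*y(x), sp.exp(x)), y(x))
print(sol)                                        # y(x) = C1*exp(-2*x) + exp(x)/3
```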
4 Higher order differential equations

4.1 General theory
Define the differential operator with constant coefficients:
L = D^n + a1 D^{n−1} + · · · + an−1 D + an,    (4.1)
where a1, · · · , an are constants.
The general solution to the equation
Ly = y^{(n)} + a1 y^{(n−1)} + · · · + an−1 y′ + an y = F(x)    (4.2)
has this form:
y(x) = yc(x) + yp(x),    (4.3)
where the complementary solution yc (x) satisfies the homogeneous equation
Lyc = 0,    (4.4)
and the particular solution yp (x) satisfies the nonhomogeneous equation
Lyp = F.    (4.5)

4.2 How to find the yc?
From Eq. (4.4) we get the auxiliary polynomial
P(r) = r^n + a1 r^{n−1} + · · · + an−1 r + an.    (4.6)
yc (x) is completely determined by the roots of P (r) = 0:
1. For a real root r of multiplicity m, the corresponding linearly independent solutions are
e^{rx}, xe^{rx}, · · · , x^{m−1}e^{rx}.
2. For a pair of complex roots a ± bi of multiplicity m, the corresponding linearly independent solutions are
e^{ax}cos(bx), xe^{ax}cos(bx), · · · , x^{m−1}e^{ax}cos(bx),
e^{ax}sin(bx), xe^{ax}sin(bx), · · · , x^{m−1}e^{ax}sin(bx).
Finally, the general solution yc to the homogeneous equation is
yc = c1 y1(x) + c2 y2(x) + · · · + cn yn(x).    (4.7)
Remark 4.1. See problem 2 of Quiz 10.
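A small sketch of this recipe: factor a made-up auxiliary polynomial with SymPy and read off the corresponding linearly independent solutions.

```python
import sympy as sp

r, x = sp.symbols('r x')

# A made-up auxiliary polynomial: P(r) = (r - 2)^2 (r^2 + 2r + 5),
# i.e. a real double root r = 2 and a complex pair r = -1 ± 2i.
P = sp.expand((r - 2)**2 * (r**2 + 2*r + 5))
print("P(r) =", P)

basis = []
for root, mult in sp.roots(P, r).items():
    a, b = sp.re(root), sp.im(root)
    for k in range(mult):
        if b == 0:                       # real root of multiplicity mult
            basis.append(x**k * sp.exp(a * x))
        elif b > 0:                      # count each complex pair once
            basis.append(x**k * sp.exp(a * x) * sp.cos(b * x))
            basis.append(x**k * sp.exp(a * x) * sp.sin(b * x))

print("linearly independent solutions:", basis)
# Expected (in some order): exp(2x), x*exp(2x), exp(-x)*cos(2x), exp(-x)*sin(2x)
```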
Remark 4.2. At the very least, you should remember the general solutions to second order homogeneous differential equations with constant coefficients. Suppose we have the equation
y′′ + αy′ + βy = 0,    (4.8)
⇒ [D^2 + αD + β]y = 0,    (4.9)
⇒ r^2 + αr + β = 0.    (4.10)
Suppose the roots of Eq.(4.10) are r1 , r2 , then the general solution to Eq.(4.8), y(x),
should be in the following forms:
1. If r1, r2 ∈ R and r1 ≠ r2, then
y(x) = c1 e^{r1 x} + c2 e^{r2 x};
2. If r1, r2 ∈ R and r1 = r2 = r, then
y(x) = c1 e^{rx} + c2 xe^{rx};
3. If r1 = a + bi, r2 = a − bi, then
y(x) = c1 e^{ax} sin(bx) + c2 e^{ax} cos(bx).
Here, c1 and c2 are constants.
4.3 How to find the yp?
There are two methods to find the particular solution to the nonhomogeneous equation (4.5):
1. Annihilators. The method depends on the right-hand side function F(x); see Table 1.
If we find A(D) such that A(D)F = 0, then
P(D)y = F  ⇒  A(D)P(D)y = 0,
which is a homogeneous equation.
Table 1: Annihilators of functions

F(x)                        Annihilator
c x^k e^{ax}                (D − a)^{k+1}
c x^k e^{ax} cos(bx)        (D^2 − 2aD + a^2 + b^2)^{k+1}
c x^k e^{ax} sin(bx)        (D^2 − 2aD + a^2 + b^2)^{k+1}
2. Variation-of-parameters method. Suppose
yp = u1(x)y1(x) + u2(x)y2(x) + · · · + un(x)yn(x),
then u′1, · · · , u′n satisfy the linear system

( y1           y2           · · ·   yn          ) ( u′1  )     (  0   )
( y′1          y′2          · · ·   y′n         ) ( u′2  )  =  (  0   )
( · · ·        · · ·        · · ·   · · ·       ) ( · · ·)     ( · · ·)
( y1^{(n−1)}   y2^{(n−1)}   · · ·   yn^{(n−1)}  ) ( u′n  )     ( F(x) )

For the case n = 2, we have
u′1 = −F(x)y2 / W[y1, y2],
u′2 = F(x)y1 / W[y1, y2].
Remark 4.3. See Quiz 11 and problem 2 of Quiz 12.
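To make the n = 2 formulas concrete, here is a SymPy sketch on the made-up equation y′′ − y = e^x (so y1 = e^x, y2 = e^{−x}):

```python
import sympy as sp

x = sp.symbols('x')

# Made-up example: y'' - y = exp(x), with complementary solutions
# y1 = exp(x) and y2 = exp(-x) (roots r = 1, -1 of the auxiliary polynomial).
y1, y2 = sp.exp(x), sp.exp(-x)
F = sp.exp(x)

# Wronskian W[y1, y2] = y1*y2' - y1'*y2.
W = y1 * y2.diff(x) - y1.diff(x) * y2

# The n = 2 formulas from above.
u1 = sp.integrate(-F * y2 / W, x)
u2 = sp.integrate( F * y1 / W, x)
yp = sp.simplify(u1 * y1 + u2 * y2)
print("yp =", yp)

# Check that yp really satisfies y'' - y = F.
assert sp.simplify(yp.diff(x, 2) - yp - F) == 0
```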
4.4 Reduction of order
Suppose y1(x) solves
y′′ + a(x)y′ + b(x)y = 0,    (4.11)
then the general solution to the problem
y′′ + a(x)y′ + b(x)y = F(x)    (4.12)
has the form
y = y1(x)u(x).    (4.13)
Plugging (4.13) into (4.12), we obtain a differential equation for u. Solving that equation for u(x), we get a particular solution to Eq. (4.12).
Remark 4.4. See problem 1 of Quiz 13.
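A sketch of this procedure in SymPy, on the made-up equation y′′ − (2/x)y′ + (2/x^2)y = x, which has y1(x) = x as a solution of the homogeneous equation:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.Function('u')

# Made-up example: y'' - (2/x) y' + (2/x^2) y = x,
# where y1 = x solves the homogeneous equation (4.11).
a = -2 / x
b = 2 / x**2
F = x
y1 = x

# Substitute y = y1 * u(x), as in (4.13).
y = y1 * u(x)
lhs = sp.expand(y.diff(x, 2) + a * y.diff(x) + b * y)   # reduces to x*u''(x)
sol_u = sp.dsolve(sp.Eq(lhs, F), u(x))                  # u(x) = C1 + C2*x + x**2/2
print(sol_u)

# Drop the arbitrary constants to read off one particular solution.
u_part = sol_u.rhs.subs({sp.Symbol('C1'): 0, sp.Symbol('C2'): 0})
yp = sp.expand(y1 * u_part)                             # x**3/2
print("yp =", yp)
assert sp.simplify(yp.diff(x, 2) + a * yp.diff(x) + b * yp - F) == 0
```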
5 System of differential equations
5.1 General theory
A first order linear system of differential equations can be written in this form:
x′ = Ax + b,    (5.1)
where x is an unknown vector function, b is a known vector function, and A is a constant
matrix.
Likewise, the general solution to Eq. (5.1) has this form:
x(t) = xc(t) + xp(t),    (5.2)
where the complementary solution xc (t) satisfies the homogeneous equation
x′c = Axc,    (5.3)
and the particular solution xp (t) satisfies the nonhomogeneous equation
x′p = Axp + b.    (5.4)

5.2 How to find the xc(t)?
The eigenvalues and eigenvectors of A play a crucial role in finding a fundamental
solution set (i.e. linearly independent solutions) for Eq.(5.3).
Suppose v1, · · · , vn are linearly independent eigenvectors corresponding to the eigenvalues λ1, · · · , λn of A. Then the linearly independent solutions are
xj(t) = e^{λj t} vj,    j = 1, 2, · · · , n.
So the fundamental solution set is
{x1, x2, · · · , xn},    (5.5)
and the general solution for Eq. (5.3) is in this form:
xc(t) = Xc,
where X = [x1, x2, · · · , xn] is the fundamental matrix and c is a constant vector.
Remark 5.1. See problem 2 of Quiz 13.
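A numerical sketch with numpy: compute the eigenpairs of a made-up constant matrix and check that xj(t) = e^{λj t} vj indeed satisfies x′ = Ax at a sample time:

```python
import numpy as np

# A made-up constant coefficient matrix.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2

eigvals, V = np.linalg.eig(A)

t = 0.7   # an arbitrary sample time
for j in range(len(eigvals)):
    lam, v = eigvals[j], V[:, j]
    xj = np.exp(lam * t) * v          # candidate solution x_j(t)
    dxj = lam * np.exp(lam * t) * v   # its derivative x_j'(t)
    # Check that x_j'(t) = A x_j(t).
    assert np.allclose(dxj, A @ xj)

print("eigenvalues:", eigvals)
# The general solution is x_c(t) = c1*e^{lam1 t} v1 + c2*e^{lam2 t} v2.
```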
5.3 How to find the xp(t)?
By a variation-of-parameters technique for linear systems, we can get a particular solution in this form:
xp(t) = X(t) ∫^t X^{−1}(s) b(s) ds,    (5.6)
where X(t) is the fundamental matrix from Section 5.2.
Remark 5.2. See Quiz 14.
6 Applications of differential equations
1. Mixing problem. See section 1.7 on page 57;
2. Electric circuits. See section 1.7 on page 60;
3. Pendulum problem. See problem 19 on page 104 (HW 3);
4. Oscillation of a mechanical system. See section 6.5 on page 485.
Part III: Practice problems
7 Popular problems in final exams
Let
      ( t  1  2 )
A =   ( 0  1  1 ) .    (7.1)
      ( 1  0  t )
Questions:
1. Find det(A);
2. For what values of t is A nonsingular;
3. For what values of t does Ax = 0 have infinitely many solutions;
4. For what values of t do the columns of A form a basis for R3;
5. Let t = 1 and T : R3 → R3 be a linear transformation given by T x = Ax. Find
ker(T ) and Rng(T );
6. For t = 2, find A^{−1} using the adjoint method;
7. For t = 0, solve the system Ax = b, where b = [3, 2, 1]T ;
8. For t = 3.5, find the eigenvalues and eigenvectors;
9. For t = 3.5, find the general solution to the problem x′ = Ax.
Answers:
1. det(A) = t^2 − 1;
2. If t ≠ ±1, then A is nonsingular;
3. If t = 1 or t = −1, then Ax = 0 has infinitely many solutions;
4. If t ≠ ±1, then the columns of A form a basis for R3;
5. For t = 1, with T : R3 → R3 given by T x = Ax, we have
ker(T) = span{[1, 1, −1]T} and Rng(T) = span{[1, 0, 1]T, [1, 1, 0]T};
6. Suppose t = 2, then
           (  2/3  −2/3  −1/3 )
A^{−1} =   (  1/3   2/3  −2/3 ) .
           ( −1/3   1/3   2/3 )
7. For t = 0, the solution to the system Ax = b, where b = [3, 2, 1]T , is x = [1, 1, 1]T ;
8. For t = 3.5, the characteristic polynomial is p(λ) = (λ − 5)(λ − 1.5)^2. So the eigenvalues are 5 and 1.5 (the latter with multiplicity 2);
9. Omitted.
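The answers above can be double-checked with a short SymPy sketch (verifying the determinant, the t = 2 inverse, and the t = 3.5 eigenvalues):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, 1, 2],
               [0, 1, 1],
               [1, 0, t]])

# Answer 1: det(A) = t^2 - 1.
assert sp.simplify(A.det() - (t**2 - 1)) == 0

# Answer 6: the inverse at t = 2.
A2 = A.subs(t, 2)
print(A2.inv())         # matches the matrix given above

# Answer 8: eigenvalues at t = 3.5.
A35 = A.subs(t, sp.Rational(7, 2))
print(A35.eigenvals())  # {5: 1, 3/2: 2}
```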
Copyright reserved by Yingwei Wang