Topic 13 Notes
Jeremy Orloff
13 Vector Spaces, matrices and linearity
13.1 Matlab
We will use Matlab for computation and visualization. It will allow us to work with
large matrices. There is a student edition available for free from MIT. See
http://ist.mit.edu/matlab/all. We will only use a tiny subset of Matlab's enormous
set of functions. I'll post some simple (and short) tutorials on its use.
A free substitute for Matlab is Octave. It has the advantage that it loads much faster
and doesn’t spread digital rights management files all around your computer. The
disadvantage is that it can be a little harder to install, especially on the Mac. Look
at http://www.gnu.org/software/octave/download.html. I can help you get it
installed if you want to try.
13.2 Linearity and vector spaces
We've seen before the importance of linearity when solving differential equations
\(P(D)x = f(t)\). To remind you: linearity means that
\[
P(D)(c_1 f + c_2 g) = c_1 P(D)f + c_2 P(D)g
\]
for functions \(f\), \(g\) and constants \(c_1\), \(c_2\).
Matrix multiplication is also linear. If \(A\) is a matrix and \(v_1\), \(v_2\) are vectors then
\[
A(c_1 v_1 + c_2 v_2) = c_1 A v_1 + c_2 A v_2.
\]
Example 13.1. In this example we write out the calculation to emphasize the
linearity.
\[
\begin{pmatrix} 6 & 5 \\ 1 & 2 \end{pmatrix}
\begin{pmatrix} 3+4 \\ 7+8 \end{pmatrix}
= \begin{pmatrix} 6(3+4) + 5(7+8) \\ (3+4) + 2(7+8) \end{pmatrix}
= \begin{pmatrix} 6\cdot 3 + 5\cdot 7 + 6\cdot 4 + 5\cdot 8 \\ 1\cdot 3 + 2\cdot 7 + 1\cdot 4 + 2\cdot 8 \end{pmatrix}
= \begin{pmatrix} 6 & 5 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 3 \\ 7 \end{pmatrix}
+ \begin{pmatrix} 6 & 5 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 4 \\ 8 \end{pmatrix}
\]
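You can verify the linearity identity numerically. Below is a minimal sketch in plain Python (the course itself uses Matlab, where the check is analogous; the helper function names here are our own) using the matrix and vectors from Example 13.1:

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector (list)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def add(v, w):
    return [a + b for a, b in zip(v, w)]

def scale(c, v):
    return [c * x for x in v]

A = [[6, 5], [1, 2]]
v1, v2 = [3, 7], [4, 8]
c1, c2 = 1, 1  # Example 13.1 uses c1 = c2 = 1

# A(c1 v1 + c2 v2) ...
lhs = matvec(A, add(scale(c1, v1), scale(c2, v2)))
# ... equals c1 A v1 + c2 A v2
rhs = add(scale(c1, matvec(A, v1)), scale(c2, matvec(A, v2)))
print(lhs, rhs)  # both sides give [117, 37]
```

Changing c1 and c2 to any other constants leaves the two sides equal, which is exactly what linearity asserts.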
Superposition
Exactly like solving linear differential equations, solving linear systems of algebraic
equations involves finding a particular solution and superposing it with the homogeneous solution.

 

Example 13.2. Solve
\[
\begin{pmatrix} 1 & 3 \\ 4 & 12 \\ 3 & 9 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix} 2 \\ 8 \\ 6 \end{pmatrix}
\]
answer: For this example we will use ad hoc methods to find particular and homogeneous
solutions. Later we will learn systematic methods. The main point here is that the
solutions can be superimposed.
By inspection we can see one solution is \(x_p = \begin{pmatrix} 2 \\ 0 \end{pmatrix}\). Just as valid would be to take
\(x_p = \begin{pmatrix} -1 \\ 1 \end{pmatrix}\) or \(x_p = \begin{pmatrix} 5 \\ -1 \end{pmatrix}\).
Next we have to solve the associated homogeneous equation:
\[
\begin{pmatrix} 1 & 3 \\ 4 & 12 \\ 3 & 9 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\]
This expands to three equations in two unknowns. You can easily check that the
general solution is \(x_h = c\begin{pmatrix} 3 \\ -1 \end{pmatrix}\).
By superposition, the solution to the original equation is
\[
x = x_p + x_h = \begin{pmatrix} -1 \\ 1 \end{pmatrix} + c\begin{pmatrix} 3 \\ -1 \end{pmatrix}.
\]
If this is unclear you should check the solution by substitution.
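Checking by substitution is easy to automate. The short sketch below, in plain Python with our own helper names (the course uses Matlab, where this is a one-liner), verifies that \(x_p + c\,x_h\) solves the system for several values of c:

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector (list)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 3], [4, 12], [3, 9]]
b = [2, 8, 6]
xp = [-1, 1]   # a particular solution
xh = [3, -1]   # generates the homogeneous solutions

for c in [-2, 0, 1, 5]:
    x = [p + c * h for p, h in zip(xp, xh)]
    assert matvec(A, x) == b   # every value of c gives a solution
```

The assertion passes for every c, confirming that adding any homogeneous solution to a particular solution again solves the original equation.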
Vector spaces
The key properties of vectors are that they can be added and scaled. A vector space
is any set with the following properties.
1. Closure under addition: We can add any two elements in the set and get another
member.
2. Closure under scalar multiplication: We can scale any element in the set and get
another member.
The formal definition requires some more technical properties but this definition will
suffice for 18.03.
If the scalars are required to be real numbers we say we have a real vector space. If
we allow them to be complex numbers then we have a complex vector space.
In 18.02 you learned about vectors in the plane and vectors in space:
• We call the plane R2 . It is the set of all pairs (x, y).
• We call space R3 . It is the set of all triples (x, y, z).
• The powers indicate the dimension of each space. Likewise we can work with
high dimensional vector spaces like R1000 which consists of all lists of 1000
numbers.
• In 18.03 we have used the fact that functions can be added and scaled. That
is, the set of functions of t forms a vector space. (It happens to be infinite
dimensional, but we can talk about that later.)
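Closure under addition and scaling for functions can be seen concretely. The sketch below is our own illustration in plain Python (the course uses Matlab), treating functions of t as callables; adding or scaling them yields another function of t:

```python
import math

def add(f, g):
    """The sum of two functions of t is again a function of t."""
    return lambda t: f(t) + g(t)

def scale(c, f):
    """A scalar multiple of a function of t is again a function of t."""
    return lambda t: c * f(t)

# h(t) = 3 sin(t) + cos(t) stays inside the set of functions of t
h = add(scale(3, math.sin), math.cos)
print(h(0))  # 3 sin(0) + cos(0) = 1.0
```

The closure properties are what let us form general solutions like \(c_1 f + c_2 g\) without leaving the space.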
13.3 Connection to DEs
We will give two examples showing directly how matrices arise in differential equations.
The companion matrix
Here we are going to convert a higher order differential equation into a system of first
order equations. Later we will see how this technique allows us to understand DEs
in a new way and also how it allows us to use numerical techniques on higher order
equations.
Consider the second order linear differential equation
\[
x'' + 8x' + 7x = 0.
\]
We've worked this example many times. The general solution is \(x = c_1 e^{-t} + c_2 e^{-7t}\).
To convert it to a matrix system we introduce a new variable: \(y = x'\). The original
equation is equivalent to \(y' + 8y + 7x = 0\). Altogether we have the system of two first
order linear DEs:
\[
\begin{aligned}
x' &= y \\
y' &= -7x - 8y
\end{aligned}
\]
This can be rewritten in matrix form:
\[
\begin{pmatrix} x' \\ y' \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -7 & -8 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
\]
Notice two things:
1. If we write this abstractly with \(\mathbf{x} = \begin{pmatrix} x \\ y \end{pmatrix}\) and \(A = \begin{pmatrix} 0 & 1 \\ -7 & -8 \end{pmatrix}\) it looks like
\(\mathbf{x}' = A\mathbf{x}\). Ignoring the fact that \(\mathbf{x}\) is a vector and \(A\) is a matrix, this looks like our
most important DE: \(x' = ax\).
2. If we solve the original equation we'll have found \(x\) and hence \(y = x'\). Conversely,
if we solve the matrix system we'll have found both \(x\) and \(y\). Since we already know
the solution to the DE, the solution to the system is
\[
\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} x \\ x' \end{pmatrix}
= \begin{pmatrix} c_1 e^{-t} + c_2 e^{-7t} \\ -c_1 e^{-t} - 7c_2 e^{-7t} \end{pmatrix}
= c_1 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}
+ c_2 e^{-7t} \begin{pmatrix} 1 \\ -7 \end{pmatrix}
\]
Notice that the modal solutions are of the form \(e^{rt} v\) where \(v\) is a constant vector.
Later we will use the method of optimism to guess solutions of this form.
The matrix A of coefficients that arises from this technique will be called the companion matrix to the original DE.
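We can check the modal solutions directly: substituting \(x = e^{rt} v\) into \(x' = Ax\) gives \(r e^{rt} v = e^{rt} A v\), so it suffices to verify \(Av = rv\). A quick sanity check in plain Python (helper name ours; in Matlab the check is analogous):

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector (list)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[0, 1], [-7, -8]]   # companion matrix of x'' + 8x' + 7x = 0

# For x = e^{rt} v, the equation x' = Ax reduces to A v = r v.
for r, v in [(-1, [1, -1]), (-7, [1, -7])]:
    assert matvec(A, v) == [r * x for x in v]
```

Both pairs pass, matching the two modal solutions \(e^{-t}(1,-1)\) and \(e^{-7t}(1,-7)\) found above.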
Example 13.3. Heat Flow. In this example we will set up a model for heat flow.
We won’t solve it for a few days.
Suppose we have a metal rod where different parts are at different temperatures. We
divide it into 3 regions and imagine that each region exchanges heat with the adjacent
regions. The regions on either end also exchange heat with the environment. We
assume that the top and bottom of the rod are insulated so that heat can only flow
out of the bar at the ends. We assume that the heat transfer follows Newton’s law
and the rate constant is k at each interface. The figure below shows the metal bar
divided into 3 regions and insulated above and below. The temperature of each region
and the temperature of the environment on the left and right ends are indicated in
the figure.
[Figure: the bar divided into three regions with temperatures T1, T2, T3; the environment at the left end has temperature EL and at the right end ER.]
Using Newton's law we can write a DE for the temperature of each region.
\[
\begin{aligned}
T_1' &= -k(T_1 - E_L) - k(T_1 - T_2) &&= -2kT_1 + kT_2 + kE_L \\
T_2' &= -k(T_2 - T_1) - k(T_2 - T_3) &&= kT_1 - 2kT_2 + kT_3 \\
T_3' &= -k(T_3 - T_2) - k(T_3 - E_R) &&= kT_2 - 2kT_3 + kE_R
\end{aligned}
\]
We can write this in matrix form
\[
\begin{pmatrix} T_1' \\ T_2' \\ T_3' \end{pmatrix}
= \begin{pmatrix} -2k & k & 0 \\ k & -2k & k \\ 0 & k & -2k \end{pmatrix}
\begin{pmatrix} T_1 \\ T_2 \\ T_3 \end{pmatrix}
+ \begin{pmatrix} kE_L \\ 0 \\ kE_R \end{pmatrix}
\]
Remark: This particular coefficient matrix occurs quite often in applications. You
should make sure you know how to modify the equation if we use n divisions of the
rod instead of 3.
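Generalizing to n divisions is a good exercise; the diagonal entries stay -2k, the sub- and super-diagonal entries are k, and only the first and last entries of the constant vector are nonzero. Here is one way to sketch it in plain Python (the function name is our own; in Matlab you would build the same matrix with `diag`):

```python
def heat_system(n, k, EL, ER):
    """Coefficient matrix and constant vector for a rod cut into n regions."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = -2 * k          # each region loses heat at both of its interfaces
        if i > 0:
            A[i][i - 1] = k       # heat gained from the region on the left
        if i < n - 1:
            A[i][i + 1] = k       # heat gained from the region on the right
    b = [0.0] * n
    b[0] = k * EL                 # left end exchanges heat with the environment
    b[-1] = k * ER                # right end exchanges heat with the environment
    return A, b

A, b = heat_system(3, 1.0, EL=10.0, ER=20.0)
# A == [[-2, 1, 0], [1, -2, 1], [0, 1, -2]] and b == [10, 0, 20],
# matching the 3-region system above with k = 1
```

For n = 3 this reproduces exactly the matrix and constant vector written out above.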
13.4 Matrix Multiplication
Combination of columns
We can view the result of multiplying a matrix times a vector as a linear combination
of the columns of the matrix. We will use this again and again, so you should
internalize it now! We illustrate with an example:
Example 13.4. Consider the following product
\[
\begin{pmatrix} 6 & 5 \\ 1 & 2 \end{pmatrix}
\begin{pmatrix} 3 \\ 4 \end{pmatrix}
= \begin{pmatrix} 6\cdot 3 + 5\cdot 4 \\ 1\cdot 3 + 2\cdot 4 \end{pmatrix}
= 3\begin{pmatrix} 6 \\ 1 \end{pmatrix}
+ 4\begin{pmatrix} 5 \\ 2 \end{pmatrix}
\]
Notice that the result is a linear combination of the columns of the matrix.
To express this abstractly we write a matrix as
\[
A = \begin{pmatrix}
| & | & | & | & | \\
v_1 & v_2 & v_3 & v_4 & v_5 \\
| & | & | & | & |
\end{pmatrix}
\]
Here each \(v_j\) is a vector representing a column of \(A\). We then have
\[
\begin{pmatrix} v_1 & v_2 & v_3 & v_4 & v_5 \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix}
= c_1 v_1 + c_2 v_2 + c_3 v_3 + c_4 v_4 + c_5 v_5
\]
That is, the product is a linear combination of the columns of A.
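This combination-of-columns view translates directly into code. A minimal sketch in plain Python (function name ours; the loop literally accumulates \(c_j\) times column \(j\)):

```python
def columns_combination(A, c):
    """Compute A·c as c1·(column 1) + c2·(column 2) + ... of A."""
    n_rows, n_cols = len(A), len(A[0])
    result = [0] * n_rows
    for j in range(n_cols):
        for i in range(n_rows):
            result[i] += c[j] * A[i][j]   # add c_j times column j
    return result

A = [[6, 5], [1, 2]]
print(columns_combination(A, [3, 4]))   # [38, 11], matching Example 13.4
```

Note that no row-by-row dot products appear: the product is built purely by scaling and adding columns, which is the point of this section.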
Block matrices and multiplication

Consider the following matrix
\[
A = \begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
6 & 5 & 0 & 0 \\
1 & 2 & 0 & 0
\end{pmatrix}.
\]
We can divide this into blocks:
\[
A = \left(\begin{array}{cc|cc}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\ \hline
6 & 5 & 0 & 0 \\
1 & 2 & 0 & 0
\end{array}\right)
= \begin{pmatrix} 0 & I \\ B & 0 \end{pmatrix}
\]
where \(I\) is the \(2 \times 2\) identity matrix and \(B = \begin{pmatrix} 6 & 5 \\ 1 & 2 \end{pmatrix}\).
0
As long as the sizes of the blocks are compatible block matrices multiply just like
matrices:
A B
E
AE + BF
·
=
C D
F
CE + DF
To convince yourself of this, look at the following product and see that the blocks in
the first column on the left only touch the top block on the right, etc.
\[
\begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
6 & 5 & 0 & 0 \\
1 & 2 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
a & b \\
c & d \\
e & f \\
g & h
\end{pmatrix}
\]
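You can confirm the block formula against ordinary multiplication. The sketch below, in plain Python with a hand-rolled `matmul` (names ours), multiplies the 4×4 matrix above by a right factor stacked from blocks E and F, and checks that the top block of the product is \(0\cdot E + I\cdot F = F\) and the bottom block is \(B\cdot E + 0\cdot F = B\cdot E\):

```python
def matmul(A, B):
    """Ordinary matrix product of two lists-of-rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [6, 5, 0, 0],
     [1, 2, 0, 0]]
E = [[1, 0], [0, 1]]   # top block of the right factor (sample values)
F = [[3, 4], [5, 6]]   # bottom block of the right factor (sample values)
right = E + F          # stack E over F into a 4x2 matrix

full = matmul(A, right)

# Block formula prediction: [0·E + I·F ; B·E + 0·F] = [F ; B·E]
B = [[6, 5], [1, 2]]
assert full[:2] == F
assert full[2:] == matmul(B, E)
```

Any other numeric blocks E and F of compatible sizes give the same agreement, which is the content of the block-multiplication rule.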