ES 111 Mathematical Methods in the Earth Sciences
Lecture Outline 13 - Thurs 3rd Nov 2016
Strain Ellipse and Eigenvectors
Matrices and Deformation
One way of thinking about a matrix is that it operates on a vector - the vector ends up pointing
somewhere else. In general, the vector will have been both stretched and rotated from its initial
position.
For instance, if our general matrix A operates on the vector [x, y], we have

    [ a  b ] [ x ]   [ ax + by ]
    [ c  d ] [ y ] = [ cx + dy ]

The matrix has operated on our original vector [x, y] and produced a new vector [ax+by, cx+dy].
In particular, the x-vector [1,0] has moved to [a, c] and the y-vector [0,1] has moved to [b, d]. So
a good way of thinking about a 2x2 matrix is that the first column describes what happens to an
x-vector and the second column describes what happens to a y-vector.
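This column-by-column reading is easy to check numerically. A quick numpy sketch (the matrix entries here are arbitrary illustrative values, not from the notes):

```python
import numpy as np

# An illustrative 2x2 deformation matrix (values chosen arbitrarily).
A = np.array([[2.0, 0.5],
              [1.0, 3.0]])

# The first column of A is where the x-vector [1, 0] ends up;
# the second column is where the y-vector [0, 1] ends up.
x_vec = np.array([1.0, 0.0])
y_vec = np.array([0.0, 1.0])

print(A @ x_vec)  # first column of A:  [2. 1.]
print(A @ y_vec)  # second column of A: [0.5 3. ]
```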
If you think about the vector as representing some pre-existing fabric in a rock, then it is obvious
that the matrix is describing deformation of that pre-existing fabric. Also recall from last time that
the determinant of a matrix tells us its area or volume, and so is a measure of the volumetric strain
it represents. There is thus a very close link between matrix algebra and structural geology.
One way of describing deformation is to use a strain ellipse. If you start out with a perfectly circular feature, after it has been deformed it will form an ellipse (see figure). In general a pre-existing line will have been both stretched and rotated. However, for any strain ellipse there are two perpendicular directions in which lines undergo stretching but no rotation (see figure). These two directions are called the principal strain axes and are a fundamental concept in structural geology.
Figure 1
So what? It turns out that we can use matrix algebra to find these principal strain axes. To do
so we have to talk about eigenvectors and eigenvalues.
Eigenvectors and eigenvalues
Square matrices have so-called eigenvectors associated with them. An eigenvector is a vector
which gets squeezed or stretched, but not rotated, when operated on by the matrix. The amount
of squeezing or stretching (the strain) is called the eigenvalue.
There is a single, fundamental equation in eigenanalysis:
A x = λx
where A is a square matrix, x is an eigenvector of A and λ is the associated eigenvalue. λ is a
scalar, so the equation tells us that applying A to its eigenvectors does not alter their directions,
but only scales their lengths. In structural geology, the eigenvectors of a deformation matrix are
the principal strain axes associated with that deformation, and the eigenvalues are the associated
principal strains. In general, an n by n matrix A may have up to n eigenvectors, each with an
associated eigenvalue.
Example The matrix

    A = [ −5   2 ]
        [  2  −2 ]

has eigenvectors

    x1 = [ 1 ]   and   x2 = [  2 ]
         [ 2 ]              [ −1 ]

and the associated eigenvalues are

λ1 = −1 and λ2 = −6
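These eigenpairs can be verified directly from the defining equation A x = λx; a minimal numpy check using the matrix from the example:

```python
import numpy as np

# The example matrix from the notes.
A = np.array([[-5.0, 2.0],
              [2.0, -2.0]])

# Check A x = lambda * x directly for each stated eigenpair.
x1, lam1 = np.array([1.0, 2.0]), -1.0
x2, lam2 = np.array([2.0, -1.0]), -6.0

print(A @ x1, lam1 * x1)  # both sides: [-1. -2.]
print(A @ x2, lam2 * x2)  # both sides: [-12.  6.]
```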
How do we find the eigenvectors and eigenvalues of a matrix?
First we rewrite the above equation as
A x − λx = 0
or
(A − λI)x = 0

Since we assume x is non-zero, it can be shown that for the above equation to be true, we must
have
det(A − λI) = 0
Note that we have moved from a matrix equation to a scalar equation (determinants are scalars).
This makes things simpler.
Expanding this determinant gives a polynomial in λ (the characteristic polynomial) that can be solved to obtain the eigenvalues of A. For an n × n matrix, the polynomial has terms up to λ^n and will have from 1 to n distinct roots (the eigenvalues).
Example
We’ll use the same matrix as we had before.

    det(A − λI) = det [ −5 − λ     2    ] = 0
                      [    2    −2 − λ  ]
This implies that
(5 + λ)(2 + λ) − 4 = 0
which is a quadratic equation in λ and gives us two roots (eigenvalues): λ1 = −1 and λ2 = −6.
Now, how do we find the eigenvectors? Simply plug an eigenvalue into the equation
Ax = λx
and solve for the elements of x. The length of the vector is not determined, so we typically normalize
each x to unit length.
Example Carrying on with the same example, let’s take λ1 = −1 and insert it into the above equation:

    [ −5   2 ] [ x ]        [ x ]
    [  2  −2 ] [ y ]  = −1  [ y ]
which gives us two equations, identical except for a scaling factor, telling us that y = 2x. One way
of writing this down as an (un-normalized) vector is x = [1, 2]. Careful - a common mistake is to
reverse the order!
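numpy performs this whole computation at once, returning each eigenvector normalized to unit length (one per column). A sketch using the running example matrix:

```python
import numpy as np

A = np.array([[-5.0, 2.0],
              [2.0, -2.0]])

# numpy solves the whole eigenproblem at once; the eigenvectors come
# back normalized to unit length, one per column of `vecs`.
vals, vecs = np.linalg.eig(A)
print(vals)

# The eigenvector for lambda = -1 should satisfy y = 2x, i.e. be
# parallel to [1, 2]. Rescale so the first entry is 1 to see this.
v = vecs[:, np.argmin(np.abs(vals - (-1.0)))]
print(v / v[0])  # [1. 2.]
```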
Example with repeated roots. Find the eigenvectors and eigenvalues of

    [ 1  0  0 ]
    [ 0  2  1 ]
    [ 0  1  2 ]

[Answer: λ = 1, 1, 3. For λ = 3, eigenvector = [0, 1, 1]. For λ = 1, eigenvector = [x, −1, 1] i.e. it is not uniquely determined]
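The repeated-root example can also be handed to numpy; a sketch (note that numpy returns the eigenvalues in no guaranteed order, so we sort before comparing):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
print(np.sort(vals))  # [1. 1. 3.] - the root lambda = 1 is repeated

# The eigenvector for lambda = 3 is proportional to [0, 1, 1];
# rescale by its second entry to see this.
v3 = vecs[:, np.argmax(vals)]
print(v3 / v3[1])  # [0. 1. 1.]
```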
Note that if you get repeated roots, then you don’t have enough information to uniquely define
some of the eigenvectors.
Eigenvalues and eigenvectors have some useful properties.
1. The sum of elements along the main diagonal of A (a quantity known as the trace of A) equals the sum of the eigenvalues (counting repeated roots):

    Trace A = Σ_{i=1}^{n} aii = Σ_{i=1}^{n} λi

2. The determinant of A equals the product of the eigenvalues:

    det A = Π_{i=1}^{n} λi
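Both properties are easy to spot-check numerically; a sketch on a random matrix (the seed and size are arbitrary):

```python
import numpy as np

# Any matrix will do to illustrate both properties; use a random one.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
vals = np.linalg.eigvals(A)

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
# (A real matrix can have complex eigenvalue pairs, but the sum and
# product come out real up to rounding.)
print(np.trace(A), vals.sum().real)
print(np.linalg.det(A), vals.prod().real)
```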
2D Strain
Back to our general matrix A, where

    A = [ a  b ]
        [ c  d ]

This matrix A is the strain matrix which describes the deformation to which the medium is being
subjected. Remember the way to think about what this matrix is doing is that the first column
describes where the [1,0] vector ends up, and the second column describes where the [0,1] vector
goes.
There are three important forms of the strain matrix:
1. Rigid (counterclockwise) rotation

    A = [ cos θ  −sin θ ]
        [ sin θ   cos θ ]
What are the eigenvectors of this matrix? Why? What is the determinant?
[Answer: λ = cos θ ± √(cos²θ − 1), which has no real solution except when sin θ = 0, i.e. θ = 0 (no rotation) or θ = 180° (every vector is simply flipped). For any other rigid rotation, all vectors are rotated and so there are no real eigenvectors. The determinant is 1 (rotation involves no change in area).]
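numpy makes the "no real eigenvectors" point concrete: for a nonzero rotation angle the eigenvalues come back complex. A sketch (the 30° angle is an arbitrary choice):

```python
import numpy as np

theta = np.deg2rad(30.0)  # any rotation angle with sin(theta) != 0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vals = np.linalg.eigvals(R)
print(vals)              # cos(theta) +/- i*sin(theta): complex, so no real eigenvectors
print(np.linalg.det(R))  # determinant is 1: no change in area
```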
2. Simple shear

    A = [ 1  γ ]
        [ 0  1 ]
This represents shear parallel to the x-axis. γ is referred to as the shear strain and γ = tan φ defines
φ which is called the angular shear strain. What are the eigenvectors of this matrix? What
about the determinant?
[Answer: λ = 1 (a repeated root). There is only one eigenvector, [x, 0], i.e. the x-vector. Shear parallel to the x-axis will rotate all vectors except the x-vector. Determinant is 1 (simple shear involves no change in area).]
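A numpy sketch of the simple-shear case (with the illustrative choice γ = 1): the eigenvalue 1 is repeated, and every eigenvector numpy can find lies along the x-axis.

```python
import numpy as np

gamma = 1.0  # illustrative shear strain
A = np.array([[1.0, gamma],
              [0.0, 1.0]])

vals, vecs = np.linalg.eig(A)
print(vals)  # [1. 1.] - a repeated root

# Both returned eigenvector columns point (to machine precision)
# along the x-axis: the only direction simple shear leaves unrotated.
print(vecs)
print(np.linalg.det(A))  # determinant is 1: no change in area
```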
3. Pure shear

    A = [ a  0 ]
        [ 0  d ]
This implies expansion (a or d > 1) or contraction (a or d < 1).
What use are these matrices?
They tell us how any initially oriented line will be transformed under a particular strain field.
Successive application of different strain fields is obtained by simply multiplying the relevant matrices together. Note that the order of multiplication is very important - matrix multiplication is not commutative! The result of a series of deformations D1, D2, · · ·, DN is found by doing the matrix multiplication in reverse order: (DN)(DN−1) · · · (D2)(D1).
Example Examine the effects of a 45° counterclockwise rotation and 100% shear parallel to the x-axis on a line with the initial orientation (1, 1). Does the order of application matter?

[Answers: Rotation followed by shear gives

    [  √2     0   ]
    [ 1/√2  1/√2  ]

Shear followed by rotation gives

    [ 1/√2   0  ]
    [ 1/√2   √2 ]

Operating these two matrices on (1, 1) yields (√2, √2) and (1/√2, 3/√2), respectively. So the order does matter.]
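The same calculation in numpy, where the reverse-order rule shows up as which matrix goes on the left:

```python
import numpy as np

theta = np.deg2rad(45.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # 45 deg CCW rotation
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])                        # 100% shear (gamma = 1)

v = np.array([1.0, 1.0])

# Rotation first, then shear: the shear matrix goes on the left.
print(S @ R @ v)  # (sqrt(2), sqrt(2))
# Shear first, then rotation: the rotation matrix goes on the left.
print(R @ S @ v)  # (1/sqrt(2), 3/sqrt(2))
```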
Example What does this strain matrix do?

    [ 5/4  3/4 ]
    [ 3/4  5/4 ]
[Answer It stretches by a factor of 2 in the (1,1) direction and by a factor of 1/2 in the (1,-1)
direction.]
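This answer is exactly an eigenanalysis of the matrix; a numpy check (eigenvalue order is not guaranteed, so we sort before comparing):

```python
import numpy as np

A = np.array([[1.25, 0.75],
              [0.75, 1.25]])

vals, vecs = np.linalg.eig(A)
print(np.sort(vals))  # [0.5 2. ] - the principal stretches

# The eigenvector for lambda = 2 lies along (1, 1): both components
# have equal magnitude.
v2 = vecs[:, np.argmax(vals)]
print(v2)
```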
Optional (if time allows): General Strain Matrix
Each of the elements in the matrix A actually has a physical meaning. For the case when the object being strained undergoes no rotation, another way of writing the general strain matrix A is

    A = [ 1 + εxx     εxy    ]
        [   εxy     1 + εyy  ]

Here the components εxx, εyy and εxy are the normal strains in the x and y directions and the shear strain, respectively. The determinant of this matrix tells you the total volume change. The two shear strain entries are the same because otherwise the object being strained would undergo rotation. The identity matrix involves zero normal and zero shear strain - the object doesn't change shape at all.
Example Show that the general strain matrix results in the following principal strains:

    (εxx + εyy)/2 ± [ (εxx − εyy)²/4 + εxy² ]^(1/2)

Does this make sense?
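A numeric spot-check of this result: build the general strain matrix for some illustrative strain components (the values below are arbitrary), and compare its eigenvalues (minus 1, to convert stretches back to strains) with the closed-form expression.

```python
import numpy as np

# Arbitrary illustrative strain components.
exx, eyy, exy = 0.10, -0.04, 0.03

A = np.array([[1 + exx, exy],
              [exy, 1 + eyy]])

# Principal strains from the closed-form expression...
mean = (exx + eyy) / 2
radius = np.sqrt((exx - eyy) ** 2 / 4 + exy ** 2)
formula = np.sort([mean - radius, mean + radius])

# ...and from the eigenvalues of A (subtract 1 to get strain
# rather than stretch).
numeric = np.sort(np.linalg.eigvals(A)) - 1

print(formula)
print(numeric)  # the two agree
```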