Proposition 7.3 If α : V → V is self-adjoint, then
1) Every eigenvalue of α is real.
2) Eigenvectors with distinct eigenvalues are orthogonal.
Proof 1) Suppose that α(v) = λv with v ≠ 0. Then
⟨α(v), v⟩ = ⟨v, α(v)⟩
⇓
⟨λv, v⟩ = ⟨v, λv⟩
⇓
λ̄⟨v, v⟩ = λ⟨v, v⟩.
As ⟨v, v⟩ ≠ 0 it follows that λ̄ = λ and thus λ ∈ R.
2) If α(vᵢ) = λᵢvᵢ where λ₁ ≠ λ₂ then
λ₁⟨v₁, v₂⟩ = ⟨α(v₁), v₂⟩ = ⟨v₁, α(v₂)⟩ = λ₂⟨v₁, v₂⟩
(notice that we have used the fact that λ₁ ∈ R). It follows that ⟨v₁, v₂⟩ = 0 as λ₁ ≠ λ₂. □
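A quick numerical illustration of Proposition 7.3 (an added sketch, not part of the original notes; the matrix A below is an arbitrary hermitian example and NumPy is assumed):

```python
import numpy as np

# An arbitrary hermitian matrix: A equals its conjugate transpose.
A = np.array([[2.0, 1j],
              [-1j, 3.0]])
assert np.allclose(A, A.conj().T)

# np.linalg.eigh is NumPy's eigensolver for hermitian/symmetric matrices.
eigenvalues, eigenvectors = np.linalg.eigh(A)

# 1) Every eigenvalue is real (eigh returns them as real floats).
print(eigenvalues)                    # approx. [1.382, 3.618]

# 2) Eigenvectors with distinct eigenvalues are orthogonal;
#    np.vdot conjugates its first argument, matching <v1, v2>.
v1, v2 = eigenvectors[:, 0], eigenvectors[:, 1]
print(abs(np.vdot(v1, v2)))           # ~0 up to floating-point error
```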
Corollary 7.4 If A is a hermitian n × n matrix then
∆A(t) = (λ₁ − t) · · · (λₙ − t)
with λ₁, …, λₙ ∈ R.
Proof The corresponding linear map α : Cⁿ → Cⁿ, given by α(v) = Av, is self-adjoint. Over C the characteristic polynomial ∆A(t) factors into linear factors by the fundamental theorem of algebra, and it follows from the last proposition that all the eigenvalues are real. □
Remark. (Version for self-adjoint maps). If we state the last corollary in terms of linear maps, we get the following: if α : V → V is self-adjoint then
∆α(t) = (λ₁ − t) · · · (λₙ − t)
with λ₁, …, λₙ ∈ R.
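As an added numerical check of Corollary 7.4 (not in the original notes; NumPy assumed, A an arbitrary hermitian example), the characteristic polynomial of a hermitian matrix has only real roots:

```python
import numpy as np

# An arbitrary hermitian matrix.
A = np.array([[2.0, 1j],
              [-1j, 3.0]])

coeffs = np.poly(A)       # coefficients of the characteristic polynomial of A
roots = np.roots(coeffs)  # its roots, i.e. the eigenvalues of A

print(np.allclose(roots.imag, 0.0))   # True: every root is real
```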
If V is an inner product space, then any subspace W is also an inner product space with respect
to the same inner product. Furthermore, if α : V → V is self-adjoint and W is α-invariant, then
α|W is self-adjoint as well.
Lemma 7.5 If α : V → V is self-adjoint and W is α-invariant, then
W⊥ = {v ∈ V : ⟨v, w⟩ = 0 ∀ w ∈ W}
is also α-invariant.
Proof Let v ∈ W⊥. We want to show that α(v) ∈ W⊥. For any w ∈ W we have
⟨α(v), w⟩ = ⟨v, α(w)⟩,
which is zero as α(w) ∈ W and v ∈ W⊥. This shows that α(v) ∈ W⊥ as required. □
The next result is the main result of this chapter and tells us that a self-adjoint operator α : V → V on a finite-dimensional inner product space is always diagonalisable. In fact more is true: as we will see, one can choose the basis of eigenvectors to be orthonormal.
Theorem 7.6 (Spectral Theorem) Let α : V → V be a self-adjoint linear operator on an n-dimensional inner product space. Then there is an orthonormal basis v₁, …, vₙ of eigenvectors of α.
Proof (Not examinable). We first prove by induction on n that we can find an orthonormal basis of eigenvectors. For the case n = 1 pick a unit vector (which is then of course an eigenvector).
Now suppose that n > 1 and the result is true for all V with dim V < n. We know from Corollary 7.4 (in the version for self-adjoint maps) that α has a real eigenvalue λ. Let W₁ = Eα(λ) be the corresponding eigenspace and choose any orthonormal basis for W₁. By Lemma 7.5 we know that W₂ = W₁⊥ is α-invariant. The restriction α|W₂ is then self-adjoint and, by the induction hypothesis, we can choose an orthonormal basis of W₂ consisting of eigenvectors of α|W₂, i.e. of α. Finally, V = W₁ ⊕ W₂ is an orthogonal direct sum decomposition, so we can combine the two bases above to obtain an orthonormal basis of V. □
Corollary 7.7 If λ₁, …, λₙ are the corresponding eigenvalues with respect to the ON basis v₁, …, vₙ, then α can be written explicitly by the formula
α(w) = ∑_{k=1}^n λₖ⟨vₖ, w⟩vₖ.
Proof The explicit formula for α follows from the fact that
w = ∑_{k=1}^n ⟨vₖ, w⟩vₖ
(the coordinates of w with respect to an orthonormal basis are given by these inner products). It then follows that
α(w) = ∑_{k=1}^n ⟨vₖ, w⟩α(vₖ) = ∑_{k=1}^n λₖ⟨vₖ, w⟩vₖ. □
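The formula in Corollary 7.7 can be checked numerically. The sketch below (added, not from the notes; A and w are arbitrary test data) implements α(w) = ∑_{k=1}^n λₖ⟨vₖ, w⟩vₖ and compares it with the direct computation Aw:

```python
import numpy as np

A = np.array([[2.0, 1j],
              [-1j, 3.0]])               # arbitrary hermitian test matrix
lam, V = np.linalg.eigh(A)               # columns of V: orthonormal eigenvectors

def alpha(w):
    """Apply A to w via the spectral formula of Corollary 7.7."""
    # np.vdot conjugates its first argument, matching an inner product
    # that is conjugate-linear in the first slot.
    return sum(lam[k] * np.vdot(V[:, k], w) * V[:, k] for k in range(len(lam)))

w = np.array([1.0 + 2j, -1.0])
print(np.allclose(alpha(w), A @ w))      # True
```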
Theorem 7.8 (The matrix version of the Spectral Theorem)
(1) If A is a symmetric n × n matrix over R then there is an orthogonal real matrix P (i.e. P⁻¹ = Pᵗ) such that P⁻¹AP is diagonal.
(2) If A is a hermitian n × n matrix over C then there is a unitary complex matrix P (i.e. P⁻¹ = P̄ᵗ) such that P⁻¹AP is diagonal.
Proof (Not examinable) In both cases the linear map α : Fⁿ → Fⁿ mapping v to Av is self-adjoint. By the Spectral Theorem we have an orthonormal basis v₁, …, vₙ of eigenvectors. Let P be the base change matrix from the orthonormal basis to the standard basis e₁, …, eₙ. The columns of P are the coordinates of the eigenvectors v₁, …, vₙ expressed in e₁, …, eₙ, so P⁻¹AP is diagonal. The orthonormality of these columns translates into PᵗP = I when P is real and P̄ᵗP = I when P is complex. □
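Part (1) of Theorem 7.8 can likewise be illustrated numerically (an added sketch, NumPy assumed, A an arbitrary symmetric example): np.linalg.eigh returns the orthonormal eigenvectors as the columns of P, so P is orthogonal and PᵗAP is diagonal.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])               # arbitrary real symmetric matrix
lam, P = np.linalg.eigh(A)

print(np.allclose(P.T @ P, np.eye(3)))        # P^t P = I, i.e. P^-1 = P^t
print(np.allclose(P.T @ A @ P, np.diag(lam))) # P^-1 A P is diagonal
```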
Example. Consider the hermitian matrix

A = [ 1   i ]
    [ −i  1 ].
We have ∆A(t) = (t − 1)² − 1 = t(t − 2) and the eigenvalues are 0 and 2. We next determine the eigenspaces. Firstly we solve Av = 0. That is

[ 1   i ] [ x ]   [ 0 ]
[ −i  1 ] [ y ] = [ 0 ],

that has the solution C(−i, 1)ᵗ.
Then we solve (A − 2I)v = 0, or

[ −1   i ] [ x ]   [ 0 ]
[ −i  −1 ] [ y ] = [ 0 ],

that has the solution C(i, 1)ᵗ.
In order to get an orthonormal basis we need to normalise these eigenvectors. This thus gives the basis vectors

v₁ = (1/√2)(−i, 1)ᵗ,   v₂ = (1/√2)(i, 1)ᵗ.
So if we let

P = (1/√2) [ −i  i ]
           [  1  1 ],

then

P⁻¹AP = [ 0  0 ]
        [ 0  2 ].
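Finally, the worked example itself can be verified in a couple of lines (an added check, not in the notes):

```python
import numpy as np

A = np.array([[1.0, 1j],
              [-1j, 1.0]])
# Columns of P: the normalised eigenvectors (1/√2)(-i, 1)^t and (1/√2)(i, 1)^t.
P = np.array([[-1j, 1j],
              [1.0, 1.0]]) / np.sqrt(2)

print(np.allclose(np.linalg.inv(P) @ A @ P, np.diag([0.0, 2.0])))  # True
```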