11-24-2014
Unitary Matrices and Hermitian Matrices
Recall that the conjugate of a complex number a + bi is a − bi. The conjugate of a + bi is denoted
\overline{a + bi} or (a + bi)∗.
In this section, I'll use \overline{( )} for complex conjugation of numbers or matrices. I'll use ( )∗ to denote
an operation on matrices, the conjugate transpose.
Thus,
\overline{3 + 4i} = 3 − 4i,  \overline{5 − 6i} = 5 + 6i,  \overline{7i} = −7i,  \overline{10} = 10.
Complex conjugation satisfies the following properties:
(a) If z ∈ C, then \overline{z} = z if and only if z is a real number.
(b) If z1, z2 ∈ C, then
\overline{z1 + z2} = \overline{z1} + \overline{z2}.
(c) If z1, z2 ∈ C, then
\overline{z1 · z2} = \overline{z1} · \overline{z2}.
The proofs are easy; just write out the complex numbers (e.g. z1 = a + bi and z2 = c + di) and compute.
The conjugate of a matrix A is the matrix \overline{A} obtained by conjugating each element: That is,
(\overline{A})ij = \overline{Aij}.
You can check that if A and B are matrices and k ∈ C, then
\overline{kA + B} = \overline{k} · \overline{A} + \overline{B}
and \overline{AB} = \overline{A} · \overline{B}.
You can prove these results by looking at individual elements of the matrices and using the properties
of conjugation of numbers given above.
Definition. If A is a complex matrix, A∗ is the conjugate transpose of A:
A∗ = \overline{A^T}.
Note that the conjugation and transposition can be done in either order: That is, \overline{A^T} = (\overline{A})^T. To see
this, consider the (i, j)th element of the matrices:
[\overline{(A^T)}]ij = \overline{(A^T)ij} = \overline{Aji} = (\overline{A})ji = [(\overline{A})^T]ij.
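As a numerical sanity check (a sketch using NumPy, which the notes themselves don't use), you can verify on a sample matrix that conjugating and transposing commute:

```python
import numpy as np

# A sample complex matrix (chosen arbitrarily for illustration)
A = np.array([[1 + 2j, 2 - 1j, 3j],
              [4 + 0j, -2 + 7j, 6 + 6j]])

# Conjugate then transpose, versus transpose then conjugate
conj_then_transpose = np.conj(A).T
transpose_then_conj = np.conj(A.T)

# The two orders agree, so the conjugate transpose A* is unambiguous
assert np.array_equal(conj_then_transpose, transpose_then_conj)
```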
Example. If
A = [ 1 + 2i   2 − i     3i     ]
    [ 4        −2 + 7i   6 + 6i ]
then
A∗ = [ 1 − 2i   4       ]
     [ 2 + i    −2 − 7i ]
     [ −3i      6 − 6i  ]
Since the complex conjugate of a real number is the number itself, if B is a real matrix, then B∗ = B^T.
Remark. Most people call A∗ the adjoint of A — though, unfortunately, the word “adjoint” has already
been used for the transpose of the matrix of cofactors in the determinant formula for A^{−1}. (Sometimes
people try to get around this by using the term “classical adjoint” to refer to the transpose of the matrix
of cofactors.) In modern mathematics, the word “adjoint” refers to a property of A∗ that I’ll prove below.
This property generalizes to other things which you might see in more advanced courses.
The ( )∗ operation is sometimes called the Hermitian — but this has always sounded ugly to me, so
I won’t use this terminology.
Since this is an introduction to linear algebra, I’ll usually refer to A∗ as the conjugate transpose,
which at least has the virtue of saying what the thing is.
Proposition. Let U and V be complex matrices, and let k ∈ C.
(a) (U ∗ )∗ = U .
(b) (kU + V)∗ = \overline{k} U∗ + V∗.
(c) (U V )∗ = V ∗ U ∗ .
(d) If u, v ∈ Cn , their dot product is given by
u · v = v ∗ u.
Proof. I’ll prove (a), (c), and (d).
For (a), I use the fact noted above that \overline{( )} and ( )^T can be done in either order, along with the facts
that
\overline{\overline{A}} = A and (A^T)^T = A.
I have
(U∗)∗ = \overline{(\overline{U^T})^T} = \overline{\overline{(U^T)^T}} = \overline{\overline{U}} = U.
This proves (a).
For (c), I have
(UV)∗ = \overline{(UV)^T} = \overline{V^T U^T} = \overline{V^T} · \overline{U^T} = V∗ · U∗.
For (d), recall that the dot product of complex vectors u = (u1, u2, . . . , un) and v = (v1, v2, . . . , vn) is
u · v = u1 \overline{v1} + u2 \overline{v2} + · · · + un \overline{vn}.
Notice that you take the complex conjugates of the components of v before multiplying!
This can be expressed as the matrix multiplication
u · v = [ \overline{v1}  \overline{v2}  · · ·  \overline{vn} ] [ u1  u2  · · ·  un ]^T = v∗u.
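In NumPy (not part of the notes), the convention u · v = v∗u corresponds to np.vdot(v, u), since np.vdot conjugates its first argument. A sketch checking this against the componentwise formula:

```python
import numpy as np

u = np.array([1j, 2 + 0j])
v = np.array([3 + 0j, 4 - 1j])

# Componentwise formula: u . v = u1*conj(v1) + ... + un*conj(vn)
dot_uv = np.sum(u * np.conj(v))

# As a matrix product v* u: conjugate v, then take the inner product
assert np.isclose(dot_uv, np.conj(v) @ u)

# np.vdot conjugates its FIRST argument, so the notes' u . v is vdot(v, u)
assert np.isclose(dot_uv, np.vdot(v, u))
```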
Example. In this example, use the complex dot product.
(a) Compute (1 + 3i, 2 + i) · (4 − 5i, 2 + 3i).
(b) Find ‖(2 + i, 3 − 5i)‖.
(c) Find a nonzero vector (a, b) which is orthogonal to (1 + 8i, 2 − 3i).
(a)
(1 + 3i, 2 + i) · (4 − 5i, 2 + 3i) = [ 4 + 5i  2 − 3i ] [ 1 + 3i ]
                                                        [ 2 + i  ]
= (4 + 5i)(1 + 3i) + (2 − 3i)(2 + i) = −4 + 13i.
It’s a common notational abuse to write the number “−4 + 13i” instead of writing it as a 1 × 1 matrix
“[−4 + 13i]”.
(b)
‖(2 + i, 3 − 5i)‖^2 = (2 + i, 3 − 5i) · (2 + i, 3 − 5i) = (2 − i)(2 + i) + (3 + 5i)(3 − 5i) = 4 + 1 + 9 + 25 = 39.
Hence, ‖(2 + i, 3 − 5i)‖ = √39.
The following formula is evident from this example:
‖(a + bi, c + di)‖ = √(a^2 + b^2 + c^2 + d^2).
This extends in the obvious way to vectors in Cn .
(c) I need
(a, b) · (1 + 8i, 2 − 3i) = 0.
In matrix form, this is
[ 1 − 8i  2 + 3i ] [ a ]  =  0.
                   [ b ]
Note that the vector (1 + 8i, 2 − 3i) was conjugated and transposed.
Doing the matrix multiplication,
(1 − 8i)a + (2 + 3i)b = 0.
I can get a solution (a, b) by switching the numbers 1 − 8i and 2 + 3i and negating one of them:
(a, b) = (2 + 3i, −1 + 8i).
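The three computations above can be checked numerically. This is a NumPy sketch; np.vdot(v, u) implements the notes' convention u · v = v∗u:

```python
import numpy as np

def cdot(u, v):
    """The notes' complex dot product: u . v = v* u."""
    return np.vdot(v, u)  # np.vdot conjugates its first argument

# (a) the dot product equals -4 + 13i
part_a = cdot(np.array([1 + 3j, 2 + 1j]), np.array([4 - 5j, 2 + 3j]))
assert np.isclose(part_a, -4 + 13j)

# (b) the norm of (2+i, 3-5i) is sqrt(39)
assert np.isclose(np.linalg.norm(np.array([2 + 1j, 3 - 5j])), np.sqrt(39))

# (c) the vector found is orthogonal to (1+8i, 2-3i)
part_c = cdot(np.array([2 + 3j, -1 + 8j]), np.array([1 + 8j, 2 - 3j]))
assert np.isclose(part_c, 0)
```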
There are two points about the equation u · v = v ∗ u which might be confusing. First, why is it necessary
to conjugate and transpose v? The reason for the conjugation goes back to the need for inner products to
be positive definite (so u · u is a nonnegative real number).
The reason for the transpose is that I’m using the convention that vectors are column vectors. So if u
and v are n-dimensional column vectors and I want the product to be a number — i.e. a 1 × 1 matrix — I
have to multiply an n-dimensional row vector (1 × n) and an n-dimensional column vector (n × 1). To get
the row vector, I have to transpose the column vector.
Finally, why do u and v switch places in going from the left side to the right side? The reason you write
v∗u instead of u∗v is that inner products are defined to be linear in the first variable. If you use u∗v, you
get a product which is linear in the second variable.
Of course, none of this makes any difference if you’re dealing with real numbers. So if x and y are
vectors in Rn , you can write
x · y = x^T y or x · y = y^T x.
Definition. A complex matrix U is unitary if U U ∗ = I.
Notice that if U happens to be a real matrix, U ∗ = U T , and the equation says U U T = I — that is, U
is orthogonal. In other words, unitary is the complex analog of orthogonal.
By the same kind of argument I gave for orthogonal matrices, U U∗ = I implies U∗U = I — that is, U∗
is U^{−1}.
Proposition. Let U be a unitary matrix.
(a) U preserves inner products: x · y = (U x) · (U y). Consequently, it also preserves lengths: ‖U x‖ = ‖x‖.
(b) An eigenvalue of U must have length 1.
(c) The columns of a unitary matrix form an orthonormal set.
Proof. (a)
(U x) · (U y) = (U y)∗(U x) = y∗U∗U x = y∗I x = y∗x = x · y.
Since U preserves inner products, it also preserves lengths of vectors, and the angles between them. For
example,
‖x‖^2 = x · x = (U x) · (U x) = ‖U x‖^2, so ‖x‖ = ‖U x‖.
(b) Suppose x is an eigenvector corresponding to the eigenvalue λ of U. Then U x = λx, so
‖U x‖ = ‖λx‖ = |λ| ‖x‖.
But U preserves lengths, so ‖U x‖ = ‖x‖, and hence |λ| = 1.
(c) Suppose the columns of U are c1, c2, . . . , cn:
U = [ c1  c2  · · ·  cn ].
Then U∗U = I means

[ \overline{c1}^T ]
[ \overline{c2}^T ]
[       ...       ] [ c1  c2  · · ·  cn ] = I,
[ \overline{cn}^T ]

the n × n identity matrix.
Here \overline{ck}^T is the complex conjugate of the kth column ck, transposed to make it a row vector. If you look
at the dot products of the rows of U∗ and the columns of U, and note that the result is I, you see that the
equation above exactly expresses the fact that the columns of U are orthonormal.
For example, take the first row \overline{c1}^T. Its products with the columns c1, c2, and so on give the first row of
the identity matrix, so
c1 · c1 = 1, c1 · c2 = 0, . . . , c1 · cn = 0.
This says that c1 has length 1 and is perpendicular to the other columns. Similar statements hold for
c2, . . . , cn.
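A small numerical illustration (a NumPy sketch using an arbitrary 2 × 2 unitary matrix, not one from the notes): U∗U = I, and equivalently the columns are orthonormal.

```python
import numpy as np

# An arbitrary 2x2 unitary matrix for illustration
U = (1 / np.sqrt(2)) * np.array([[1, 1],
                                 [1j, -1j]])

# U* U = I characterizes unitarity
assert np.allclose(U.conj().T @ U, np.eye(2))

# Equivalently, the columns form an orthonormal set
c1, c2 = U[:, 0], U[:, 1]
assert np.isclose(np.vdot(c1, c1), 1)  # unit length
assert np.isclose(np.vdot(c2, c2), 1)
assert np.isclose(np.vdot(c1, c2), 0)  # orthogonal
```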
Example. Find c and d so that the following matrix is unitary:
[ (1/√7)(1 + 2i)   c ]
[ (1/√7)(1 − i)    d ]
I want the columns to be orthogonal, so their complex dot product should be 0. First, I’ll find a vector that
is orthogonal to the first column. I may ignore the factor of 1/√7; I need
(a, b) · (1 + 2i, 1 − i) = 0
[ 1 − 2i  1 + i ] [ a ]  =  0.
                  [ b ]
This gives
(1 − 2i)a + (1 + i)b = 0.
I may take a = 1 + i and b = −1 + 2i. Then
‖(1 + i, −1 + 2i)‖ = √7.
So I need to divide each of a and b by √7 to get a unit vector. Thus,
(c, d) = ( (1/√7)(1 + i), (1/√7)(−1 + 2i) ).
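You can confirm numerically that the completed matrix is unitary (a NumPy sketch):

```python
import numpy as np

# The completed matrix, with the c and d found above
U = (1 / np.sqrt(7)) * np.array([[1 + 2j, 1 + 1j],
                                 [1 - 1j, -1 + 2j]])

# U U* = I, so U is unitary (U* U = I holds as well)
assert np.allclose(U @ U.conj().T, np.eye(2))
assert np.allclose(U.conj().T @ U, np.eye(2))
```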
Proposition. (Adjointness) Let A ∈ M(n, C) and let u, v ∈ Cn. Then
Au · v = u · A∗v.
Proof.
u · A∗v = (A∗v)∗u = v∗(A∗)∗u = v∗Au = Au · v.
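Adjointness can be checked on random data (a NumPy sketch; the matrix and vectors are arbitrary, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex matrix and vectors (illustrative only)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def cdot(x, y):
    """The notes' dot product: x . y = y* x."""
    return np.vdot(y, x)  # np.vdot conjugates its first argument

# Adjointness: Au . v = u . A* v
assert np.isclose(cdot(A @ u, v), cdot(u, A.conj().T @ v))
```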
Remark. If (·, ·) is any inner product on a vector space V and T : V → V is a linear transformation, the
adjoint T ∗ of T is the linear transformation which satisfies
(T (u), v) = (u, T ∗ (v))
for all u, v ∈ V.
(This definition assumes that there is such a transformation.) This explains why, in the special case
of the complex inner product, the matrix A∗ is called the adjoint. It also explains the term self-adjoint in
the next definition.
Corollary. (Adjointness) Let A ∈ M(n, R) and let u, v ∈ Rn. Then
Au · v = u · A^T v.
Proof. This follows from adjointness in the complex case, because A∗ = A^T for a real matrix.
Definition. A complex matrix A is Hermitian (or self-adjoint) if A∗ = A.
Note that a Hermitian matrix is automatically square.
For real matrices, A∗ = A^T, and the definition above is just the definition of a symmetric matrix.
Example. Here are examples of Hermitian matrices:

[ 5        2 + 3i ]       [ −4     6i       2      ]
[ 2 − 3i   17     ]   ,   [ −6i    0.87     1 − 5i ]
                          [ 2      1 + 5i   42     ]
It is no accident that the diagonal entries are real numbers — see the result that follows.
Here’s a table of the correspondences between the real and complex cases:

Real Case                        Complex Case
u · v = u^T v = v^T u            u · v = v∗u
Transpose ( )^T                  Conjugate transpose ( )∗
Orthogonal matrix A A^T = I      Unitary matrix U U∗ = I
Symmetric matrix A = A^T         Hermitian matrix H = H∗
Proposition. Let A be a Hermitian matrix.
(a) The diagonal elements of A are real numbers, and elements on opposite sides of the main diagonal are
conjugates.
(b) The eigenvalues of a Hermitian matrix are real numbers.
(c) Eigenvectors of A corresponding to different eigenvalues are orthogonal.
Proof. (a) Since A = A∗, I have Aij = \overline{Aji}. This shows that elements on opposite sides of the main diagonal
are conjugates.
Taking i = j, I have
Aii = \overline{Aii}.
But a complex number is equal to its conjugate if and only if it’s a real number, so Aii is real.
(b) Suppose A is Hermitian and λ is an eigenvalue of A with eigenvector v. Then
λ(v · v) = (λv) · v = (Av) · v = v · A∗v = v · Av = v · (λv) = \overline{λ}(v · v).
Since v is an eigenvector, v ≠ 0, so v · v ≠ 0. Therefore, λ = \overline{λ} — but a number that equals its complex
conjugate must be real.
(c) Suppose µ is an eigenvalue of A with eigenvector u and λ is an eigenvalue of A with eigenvector v. Then
µ(u · v) = (µu) · v = Au · v = u · A∗v = u · Av = u · (λv) = \overline{λ}(u · v) = λ(u · v).
(The last step uses the fact that λ is real, by part (b).) Now u · v ≠ 0 would force µ = λ, so if the eigenvalues
are different, then u · v = 0.
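These facts can be observed numerically; np.linalg.eigh is NumPy's eigensolver for Hermitian matrices and returns real eigenvalues. A sketch using the 2 × 2 Hermitian matrix from the earlier example:

```python
import numpy as np

# The 2x2 Hermitian matrix from the earlier example
H = np.array([[5, 2 + 3j],
              [2 - 3j, 17]])
assert np.allclose(H, H.conj().T)  # confirm H is Hermitian

# eigh is NumPy's solver for Hermitian matrices; it returns real eigenvalues
eigenvalues, eigenvectors = np.linalg.eigh(H)
assert eigenvalues.dtype.kind == 'f'  # real floats, not complex

# Eigenvectors for the two (distinct) eigenvalues are orthogonal
assert np.isclose(np.vdot(eigenvectors[:, 0], eigenvectors[:, 1]), 0)
```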
Example. Let
A = [ 1       2 − i ]
    [ 2 + i   −3    ].
Show that the eigenvalues are real, and that eigenvectors for different eigenvalues are orthogonal.
The matrix is Hermitian. The characteristic polynomial is
x^2 + 2x − 8 = (x + 4)(x − 2).
The eigenvalues are real numbers: −4 and 2.
For −4, the eigenvector matrix is
A + 4I = [ 5       2 − i ]
         [ 2 + i   1     ].
(2 − i, −5) is an eigenvector.
For 2, the eigenvector matrix is
A − 2I = [ −1      2 − i ]
         [ 2 + i   −5    ].
(2 − i, 1) is an eigenvector.
Note that
(2 − i, −5) · (2 − i, 1) = (2 + i)(2 − i) + (1)(−5) = 5 − 5 = 0.
Thus, the eigenvectors are orthogonal.
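The same conclusions can be checked with NumPy (a sketch; np.linalg.eigvalsh returns the eigenvalues of a Hermitian matrix in ascending order):

```python
import numpy as np

A = np.array([[1, 2 - 1j],
              [2 + 1j, -3]])

# The eigenvalues are the real numbers -4 and 2
eigenvalues = np.linalg.eigvalsh(A)
assert np.allclose(eigenvalues, [-4, 2])

# The eigenvectors found above are orthogonal: (2-i, -5) . (2-i, 1) = 0
u = np.array([2 - 1j, -5])
v = np.array([2 - 1j, 1])
assert np.isclose(np.vdot(v, u), 0)  # vdot(v, u) is the notes' u . v
```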
Since real symmetric matrices are Hermitian, the previous results apply to them as well. I’ll restate the
previous result for the case of a symmetric matrix.
Corollary. Let A be a symmetric matrix.
(a) The elements on opposite sides of the main diagonal are equal.
(b) The eigenvalues of a symmetric matrix are real numbers.
(c) Eigenvectors of A corresponding to different eigenvalues are orthogonal.
Example. Consider the symmetric matrix
A = [ 3  2 ]
    [ 2  6 ].
The characteristic polynomial is x^2 − 9x + 14 = (x − 7)(x − 2).
Note that the eigenvalues are real numbers.
For λ = 7, an eigenvector is (1, 2).
For λ = 2, an eigenvector is (−2, 1).
Since (1, 2) · (−2, 1) = 0, the eigenvectors are orthogonal.
Example. A 2 × 2 real symmetric matrix A has eigenvalues 1 and 3.
(2, −3) is an eigenvector corresponding to the eigenvalue 1.
(a) Find an eigenvector corresponding to the eigenvalue 3.
Let (a, b) be an eigenvector corresponding to the eigenvalue 3.
Since eigenvectors for different eigenvalues of a symmetric matrix must be orthogonal, I have
(2, −3) · (a, b) = 0,
or
2a − 3b = 0.
So, for example, (a, b) = (3, 2) is a solution.
(b) Find A.
From (a), a diagonalizing matrix and the corresponding diagonal matrix are
P = [ 2   3 ]   and   D = [ 1  0 ]
    [ −3  2 ]             [ 0  3 ].
Now P^{−1}AP = D, so
A = P D P^{−1} = [ 2   3 ] [ 1  0 ]  ·  (1/13) [ 2  −3 ]  =  (1/13) [ 31  12 ]
                 [ −3  2 ] [ 0  3 ]            [ 3   2 ]            [ 12  21 ].
Note that the result is indeed symmetric.
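The reconstruction A = P D P^{−1} can be verified numerically (a NumPy sketch):

```python
import numpy as np

# Eigenvectors as columns, and the matching diagonal eigenvalue matrix
P = np.array([[2.0, 3.0],
              [-3.0, 2.0]])
D = np.diag([1.0, 3.0])

# Reconstruct A = P D P^(-1)
A = P @ D @ np.linalg.inv(P)

# Matches (1/13)[[31, 12], [12, 21]] and is symmetric
assert np.allclose(A, np.array([[31, 12], [12, 21]]) / 13)
assert np.allclose(A, A.T)

# A has the prescribed eigenpairs
assert np.allclose(A @ np.array([2, -3]), 1 * np.array([2, -3]))
assert np.allclose(A @ np.array([3, 2]), 3 * np.array([3, 2]))
```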
Example. Let p, q, r, s ∈ R, and consider the 2 × 2 Hermitian matrix
A = [ p        q + ri ]
    [ q − ri   s      ].
Compute the characteristic polynomial of A, and show directly that the eigenvalues must be real numbers.
|A − xI| = | p − x    q + ri |
           | q − ri   s − x  |  = (x − p)(x − s) − (q + ri)(q − ri) = x^2 − (p + s)x + [ps − (q^2 + r^2)].
The discriminant is
(p + s)^2 − 4(1)[ps − (q^2 + r^2)] = (p^2 + 2ps + s^2) − 4ps + 4(q^2 + r^2) = (p^2 − 2ps + s^2) + 4(q^2 + r^2) = (p − s)^2 + 4(q^2 + r^2).
Since this is a sum of squares, it can’t be negative. Hence, the roots of the characteristic polynomial —
the eigenvalues — must be real numbers.
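A quick numerical check of the completed-square identity for the discriminant (a sketch over random real parameters, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Check the identity for many random real p, q, r, s
for _ in range(100):
    p, q, r, s = rng.standard_normal(4)
    discriminant = (p + s) ** 2 - 4 * (p * s - (q ** 2 + r ** 2))
    # Completed-square form: (p - s)^2 + 4(q^2 + r^2), a sum of squares
    assert np.isclose(discriminant, (p - s) ** 2 + 4 * (q ** 2 + r ** 2))
    assert discriminant >= -1e-12  # nonnegative, so the eigenvalues are real
```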
© 2014 by Bruce Ikenaga