CHAPTER 2. FINITE-DIMENSIONAL VECTOR SPACES
2.3 Matrix Representation of Operators

2.3.1 Matrices
• Cn is the set of all ordered n-tuples of complex numbers, which can be
assembled as columns or as rows.
• Let v be a vector in an n-dimensional vector space V with a basis e_i. Then

    v = \sum_{i=1}^{n} v_i e_i ,

  where v_1, ..., v_n are complex numbers called the components of the vector v. The column-vector is an ordered n-tuple of the form

    \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} .

  We say that the column-vector represents the vector v in the basis e_i.
• Let \langle v| = (|v\rangle)^* be a linear functional dual to the vector |v\rangle. Let \langle e_i| be the dual basis. Then

    \langle v| = \sum_{i=1}^{n} \bar v_i \langle e_i| .

  The row-vector (also called a covector) is an ordered n-tuple of the form

    (\bar v_1, \bar v_2, \ldots, \bar v_n) .

  It represents the dual vector \langle v| in the same basis.
• A set of nm complex numbers A_{ij}, i = 1, ..., n; j = 1, ..., m, arranged in an array that has m columns and n rows,

    A = \begin{pmatrix}
          A_{11} & A_{12} & \cdots & A_{1m} \\
          A_{21} & A_{22} & \cdots & A_{2m} \\
          \vdots & \vdots & \ddots & \vdots \\
          A_{n1} & A_{n2} & \cdots & A_{nm}
        \end{pmatrix} ,

  is called a rectangular n × m complex matrix.
mathphyshass.tex; September 11, 2013; 17:08; p. 35
• The set of all complex n × m matrices is denoted by Mat(n, m; C).
• The number A_{ij} (also called an entry of the matrix) appears in the i-th row and the j-th column of the matrix A:

    A = \begin{pmatrix}
          A_{11} & A_{12} & \cdots & A_{1j} & \cdots & A_{1m} \\
          A_{21} & A_{22} & \cdots & A_{2j} & \cdots & A_{2m} \\
          \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
          A_{i1} & A_{i2} & \cdots & A_{ij} & \cdots & A_{im} \\
          \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
          A_{n1} & A_{n2} & \cdots & A_{nj} & \cdots & A_{nm}
        \end{pmatrix}
• Remark. Notice that the first index indicates the row and the second index
indicates the column of the matrix.
• The matrix all of whose entries are equal to zero is called the zero matrix.
• Finally, we define the multiplication of column-vectors by matrices from
the left and the multiplication of row-vectors by matrices from the right as
follows.
• Each matrix defines a natural left action on a column-vector and a right
action on a row-vector.
• For each column-vector v and a matrix A = (A_{ij}) the column-vector u = Av is given by

    \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_i \\ \vdots \\ u_n \end{pmatrix}
    = \begin{pmatrix}
        A_{11} & A_{12} & \cdots & A_{1n} \\
        A_{21} & A_{22} & \cdots & A_{2n} \\
        \vdots & \vdots & \ddots & \vdots \\
        A_{i1} & A_{i2} & \cdots & A_{in} \\
        \vdots & \vdots & \ddots & \vdots \\
        A_{n1} & A_{n2} & \cdots & A_{nn}
      \end{pmatrix}
      \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_i \\ \vdots \\ v_n \end{pmatrix}
    = \begin{pmatrix}
        A_{11} v_1 + A_{12} v_2 + \cdots + A_{1n} v_n \\
        A_{21} v_1 + A_{22} v_2 + \cdots + A_{2n} v_n \\
        \vdots \\
        A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n \\
        \vdots \\
        A_{n1} v_1 + A_{n2} v_2 + \cdots + A_{nn} v_n
      \end{pmatrix}
• The components of the vector u are

    u_i = \sum_{j=1}^{n} A_{ij} v_j = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n .
• Similarly, for a row-vector v^T the components of the row-vector u^T = v^T A are defined by

    u_i = \sum_{j=1}^{n} v_j A_{ji} = v_1 A_{1i} + v_2 A_{2i} + \cdots + v_n A_{ni} .
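Both actions can be checked numerically; a minimal NumPy sketch (the particular entries are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([5.0, 6.0])

# Left action on a column-vector: u_i = sum_j A_ij v_j
u = A @ v

# Right action on a row-vector: u_i = sum_j v_j A_ji
u_row = v @ A

# The component formulas written out agree with the matrix products
assert u[0] == A[0, 0] * v[0] + A[0, 1] * v[1]        # 1*5 + 2*6 = 17
assert u_row[0] == v[0] * A[0, 0] + v[1] * A[1, 0]    # 5*1 + 6*3 = 23
```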
• Let W be an m-dimensional vector space with a basis f_i and A : V → W be a linear transformation. Such an operator defines an m × n matrix (A_{ji}) by

    A e_i = \sum_{j=1}^{m} A_{ji} f_j

  or

    A_{ji} = (f_j, A e_i) .

  Thus the linear transformation A is represented by the matrix A_{ij}. The components of the vector Av are obtained by acting on the column-vector (v_i) from the left by the matrix (A_{ji}), that is,

    (Av)_i = \sum_{j=1}^{n} A_{ij} v_j .
• Proposition. The vector space L(V, W) of linear transformations A : V →
W is isomorphic to the space M(m × n, C) of m × n matrices.
• Proposition. The rank of a linear transformation is equal to the rank of its
matrix.
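The prescription A e_i = Σ_j A_{ji} f_j says that the i-th column of the representing matrix consists of the components of A e_i. A short sketch with a hypothetical linear map T : R^3 → R^2 (the map itself is made up for illustration):

```python
import numpy as np

# A hypothetical linear transformation T : R^3 -> R^2
def T(v):
    return np.array([v[0] + 2.0 * v[1], 3.0 * v[2]])

n = 3                      # dimension of the source space V
basis = np.eye(n)          # standard basis e_1, e_2, e_3

# Column i of the m x n matrix is T applied to e_i, since (T e_i)_j = A_ji
A = np.column_stack([T(e) for e in basis])

# The matrix now represents T: A @ v equals T(v) for every v
v = np.array([1.0, -1.0, 2.0])
assert np.allclose(A @ v, T(v))
```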
2.3.2 Operations on Matrices
• The addition of matrices is defined by

    A + B = \begin{pmatrix}
              A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1m} + B_{1m} \\
              A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2m} + B_{2m} \\
              \vdots & \vdots & \ddots & \vdots \\
              A_{n1} + B_{n1} & A_{n2} + B_{n2} & \cdots & A_{nm} + B_{nm}
            \end{pmatrix}
and the multiplication by scalars by

    cA = \begin{pmatrix}
           cA_{11} & cA_{12} & \cdots & cA_{1m} \\
           cA_{21} & cA_{22} & \cdots & cA_{2m} \\
           \vdots & \vdots & \ddots & \vdots \\
           cA_{n1} & cA_{n2} & \cdots & cA_{nm}
         \end{pmatrix} .

• An n × m matrix is called a square matrix if n = m.
• The numbers A_{ii} are called the diagonal entries. Of course, there are n diagonal entries. The set of diagonal entries is called the diagonal of the matrix A.

• The numbers A_{ij} with i ≠ j are called off-diagonal entries; there are n(n − 1) off-diagonal entries.

• The numbers A_{ij} with i < j are called the upper triangular entries. The set of upper triangular entries is called the upper triangular part of the matrix A.

• The numbers A_{ij} with i > j are called the lower triangular entries. The set of lower triangular entries is called the lower triangular part of the matrix A.

• The number of upper-triangular entries and the lower-triangular entries is the same and is equal to n(n − 1)/2.
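These counts are easy to confirm numerically, e.g. with NumPy's triangle-index helpers (n = 5 is an arbitrary choice):

```python
import numpy as np

n = 5
A = np.arange(n * n).reshape(n, n)

upper = A[np.triu_indices(n, k=1)]   # strictly upper entries, i < j
lower = A[np.tril_indices(n, k=-1)]  # strictly lower entries, i > j

assert len(np.diag(A)) == n                          # n diagonal entries
assert len(upper) == len(lower) == n * (n - 1) // 2  # 10 each for n = 5
assert len(upper) + len(lower) == n * (n - 1)        # off-diagonal count
```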
• A matrix whose only non-zero entries are on the diagonal is called a diagonal matrix. For a diagonal matrix

    A_{ij} = 0    if i ≠ j .

• The diagonal matrix

    A = \begin{pmatrix}
          \lambda_1 & 0 & \cdots & 0 \\
          0 & \lambda_2 & \cdots & 0 \\
          \vdots & \vdots & \ddots & \vdots \\
          0 & 0 & \cdots & \lambda_n
        \end{pmatrix}

  is also denoted by

    A = diag(\lambda_1, \lambda_2, \ldots, \lambda_n) .
• A diagonal matrix all of whose diagonal entries are equal to 1,

    I = \begin{pmatrix}
          1 & 0 & \cdots & 0 \\
          0 & 1 & \cdots & 0 \\
          \vdots & \vdots & \ddots & \vdots \\
          0 & 0 & \cdots & 1
        \end{pmatrix} ,

  is called the identity matrix. The elements of the identity matrix are

    \delta_{ij} = \begin{cases} 1, & \text{if } i = j , \\ 0, & \text{if } i \neq j . \end{cases}

• A matrix A of the form

    A = \begin{pmatrix}
          * & * & \cdots & * \\
          0 & * & \cdots & * \\
          \vdots & \vdots & \ddots & \vdots \\
          0 & 0 & \cdots & *
        \end{pmatrix} ,

  where * represents nonzero entries, is called an upper triangular matrix. Its lower triangular part is zero, that is,

    A_{ij} = 0    if i > j .

• A matrix A of the form

    A = \begin{pmatrix}
          * & 0 & \cdots & 0 \\
          * & * & \cdots & 0 \\
          \vdots & \vdots & \ddots & \vdots \\
          * & * & \cdots & *
        \end{pmatrix} ,

  whose upper triangular part is zero, that is,

    A_{ij} = 0    if i < j ,

  is called a lower triangular matrix.
• The transpose of a matrix A whose ij-th entry is A_{ij} is the matrix A^T whose ij-th entry is A_{ji}. That is, A^T is obtained from A by switching the roles of rows and columns of A:

    A^T = \begin{pmatrix}
            A_{11} & A_{21} & \cdots & A_{j1} & \cdots & A_{n1} \\
            A_{12} & A_{22} & \cdots & A_{j2} & \cdots & A_{n2} \\
            \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
            A_{1i} & A_{2i} & \cdots & A_{ji} & \cdots & A_{ni} \\
            \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
            A_{1m} & A_{2m} & \cdots & A_{jm} & \cdots & A_{nm}
          \end{pmatrix} ,

  or

    (A^T)_{ij} = A_{ji} .
• The Hermitian conjugate of a matrix A = (A_{ij}) is a matrix A^* = (A^*_{ij}) defined by

    (A^*)_{ij} = \bar A_{ji} .
• A matrix A is called symmetric if

    A^T = A

  and anti-symmetric if

    A^T = -A .

• A matrix A is called Hermitian if

    A^* = A

  and anti-Hermitian if

    A^* = -A .
• An anti-Hermitian matrix has the form
A = iH
where H is Hermitian.
• A Hermitian matrix has the form

    H = A + iB ,

  where A is a real symmetric matrix and B is a real anti-symmetric matrix.
• The number of independent entries of an anti-symmetric matrix is n(n−1)/2.
• The number of independent entries of a symmetric matrix is n(n + 1)/2.
• The diagonal entries of a Hermitian matrix are real.
• The number of independent real parameters of a Hermitian matrix is n2 .
• Every square matrix A can be uniquely decomposed as the sum of its diagonal part A_D, the lower triangular part A_L and the upper triangular part A_U:

    A = A_D + A_L + A_U .
• For an anti-symmetric matrix

    A_U^T = -A_L    and    A_D = 0 .

• For a symmetric matrix

    A_U^T = A_L .
• Every square matrix A can be uniquely decomposed as the sum of its symmetric part A_S and its anti-symmetric part A_A:

    A = A_S + A_A ,

  where

    A_S = \frac{1}{2}(A + A^T) ,    A_A = \frac{1}{2}(A - A^T) .
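Both decompositions can be sketched in a few NumPy lines (the test matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# Diagonal, strictly lower, and strictly upper triangular parts
A_D = np.diag(np.diag(A))
A_L = np.tril(A, k=-1)
A_U = np.triu(A, k=1)
assert np.allclose(A, A_D + A_L + A_U)

# Symmetric and anti-symmetric parts
A_S = (A + A.T) / 2
A_A = (A - A.T) / 2
assert np.allclose(A, A_S + A_A)
assert np.allclose(A_S, A_S.T)     # A_S is symmetric
assert np.allclose(A_A, -A_A.T)    # A_A is anti-symmetric
```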
• The product of square matrices is defined as follows. The ij-th entry of the product C = AB of two matrices A and B is

    C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj} = A_{i1} B_{1j} + A_{i2} B_{2j} + \cdots + A_{in} B_{nj} .

  This is again a multiplication of the "i-th row of the matrix A by the j-th column of the matrix B".
• Theorem 2.3.1 The product of matrices is associative, that is, for any matrices A, B, C

    (AB)C = A(BC) .

• Theorem 2.3.2 For any two matrices A and B

    (AB)^T = B^T A^T ,    (AB)^* = B^* A^* .
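The entry-wise product formula and the transposition rules can be verified directly; a sketch with random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# C_ij = sum_k A_ik B_kj, written out with explicit loops
C = np.array([[sum(A[i, k] * B[k, j] for k in range(3))
               for j in range(3)] for i in range(3)])
assert np.allclose(C, A @ B)

# (AB)^T = B^T A^T; with A^* the Hermitian conjugate, (AB)^* = B^* A^*
assert np.allclose((A @ B).T, B.T @ A.T)
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)
```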
2.3.3 Inverse Matrix
• A matrix A is called invertible if there is another matrix A^{-1} such that

    A A^{-1} = A^{-1} A = I .

  The matrix A^{-1} is called the inverse of A.
• Theorem 2.3.3 For any two invertible matrices A and B

    (AB)^{-1} = B^{-1} A^{-1}    and    (A^{-1})^T = (A^T)^{-1} .
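Both identities of Theorem 2.3.3 can be checked numerically (random Gaussian matrices are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# (AB)^{-1} = B^{-1} A^{-1}: inverses compose in reverse order
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# (A^{-1})^T = (A^T)^{-1}: transposition commutes with inversion
assert np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T))
```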
• A matrix A is called orthogonal if

    A^T A = A A^T = I ,

  which means A^T = A^{-1}.

• A matrix A is called unitary if

    A^* A = A A^* = I ,

  which means A^* = A^{-1}.
• Every unitary matrix has the form
U = exp(iH)
where H is Hermitian.
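One way to check this numerically: build a random Hermitian H, form exp(iH) through the eigendecomposition H = V diag(w) V^*, and verify unitarity. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (X + X.conj().T) / 2            # Hermitian by construction

# exp(iH) via the eigendecomposition H = V diag(w) V^*
w, V = np.linalg.eigh(H)            # w is real since H is Hermitian
U = V @ np.diag(np.exp(1j * w)) @ V.conj().T

assert np.allclose(U.conj().T @ U, np.eye(3))   # U^* U = I, so U is unitary
```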
• A similarity transformation of a matrix A is a map

    A ↦ U A U^{-1} ,

  where U is a given invertible matrix.
• The similarity transformation of a function of a matrix is equal to the function of the similar matrix:

    U f(A) U^{-1} = f(U A U^{-1}) .
2.3.4 Trace
• The trace is a map tr : Mat(n, C) → C that assigns to each matrix A = (A_{ij}) a complex number tr A equal to the sum of the diagonal elements of the matrix:

    tr A = \sum_{k=1}^{n} A_{kk} .
• Theorem 2.3.4 The trace has the properties

    tr(AB) = tr(BA) ,    tr A^T = tr A ,    tr A^* = \overline{tr A} .
• Obviously, the trace of an anti-symmetric matrix is equal to zero.
• The trace is invariant under a similarity transformation.
• A natural inner product on the space of matrices is defined by

    (A, B) = tr(A^* B) .
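The trace identities and the inner product are easy to test on random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

assert np.isclose(np.trace(A @ B), np.trace(B @ A))           # cyclicity
assert np.isclose(np.trace(A.T), np.trace(A))                 # tr A^T = tr A
assert np.isclose(np.trace(A.conj().T), np.conj(np.trace(A))) # tr A^* = conj(tr A)

# (A, B) = tr(A^* B); in particular (A, A) is real and non-negative
assert np.trace(A.conj().T @ A).real >= 0
assert np.isclose(np.trace(A.conj().T @ A).imag, 0.0)
```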
2.3.5 Determinant
• Consider the set Zn = {1, 2, . . . , n} of the first n integers. A permutation ϕ
of the set {1, 2, . . . , n} is an ordered n-tuple (ϕ(1), . . . , ϕ(n)) of these numbers.
• That is, a permutation is a bijective (one-to-one and onto) function
ϕ : Zn → Zn
that assigns to each number i from the set Zn = {1, . . . , n} another number
ϕ(i) from this set.
• An elementary permutation is a permutation that exchanges the order of
only two numbers.
• Every permutation can be realized as a product (or a composition) of elementary permutations. A permutation that can be realized by an even number of elementary permutations is called an even permutation. A permutation that can be realized by an odd number of elementary permutations is
called an odd permutation.
• Proposition 2.3.1 The parity of a permutation does not depend on the
representation of a permutation by a product of the elementary ones.
• That is, each representation of an even permutation has an even number of elementary permutations, and similarly for odd permutations.
• The sign of a permutation ϕ, denoted by sign(ϕ) (or simply (−1)^ϕ), is defined by

    sign(ϕ) = (−1)^ϕ = \begin{cases} +1, & \text{if } ϕ \text{ is even}, \\ −1, & \text{if } ϕ \text{ is odd}. \end{cases}
• The set of all permutations of n numbers is denoted by S_n.

• Theorem 2.3.5 The cardinality of this set, that is, the number of different permutations, is

    |S_n| = n! .
• The determinant is a map det : Mat(n, C) → C that assigns to each matrix A = (A_{ij}) a complex number det A defined by

    det A = \sum_{ϕ \in S_n} sign(ϕ) A_{1ϕ(1)} \cdots A_{nϕ(n)} ,

  where the summation goes over all n! permutations.
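The permutation-sum definition can be implemented directly and compared against a library determinant; a sketch (feasible only for small n, since the sum has n! terms):

```python
import itertools
import numpy as np

def perm_sign(p):
    # Parity via inversion count: each inversion is removable by one
    # elementary transposition, so sign = (-1)^(number of inversions).
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_by_permutations(A):
    n = A.shape[0]
    return sum(perm_sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
assert np.isclose(det_by_permutations(A), np.linalg.det(A))   # both give 18
```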
• The most important properties of the determinant are listed below.

  Theorem 2.3.6

  1. The determinant of the product of matrices is equal to the product of the determinants:

     det(AB) = det A det B .

  2. The determinants of a matrix A and of its transpose A^T are equal:

     det A^T = det A .
  3. The determinant of the conjugate matrix is

     det A^* = \overline{det A} .

  4. The determinant of the inverse A^{-1} of an invertible matrix A is equal to the inverse of the determinant of A:

     det A^{-1} = (det A)^{-1} .
5. A matrix is invertible if and only if its determinant is non-zero.
• The determinant is invariant under the similarity transformation.
• The set of complex invertible matrices (with non-zero determinant) is denoted by GL(n, C).
• A matrix with unit determinant is called unimodular.
• The set of complex matrices with unit determinant is denoted by SL(n, C).
• The set of complex unitary matrices is denoted by U(n).
• The set of complex unitary matrices with unit determinant is denoted by SU(n).
• The set of real orthogonal matrices is denoted by O(n).
• An orthogonal matrix with unit determinant (a unimodular orthogonal matrix) is called a proper orthogonal matrix or just a rotation.
• The set of real orthogonal matrices with unit determinant is denoted by SO(n).
• Theorem 2.3.7 The determinant of an orthogonal matrix is equal to either
1 or −1.
• Theorem. The determinant of a unitary matrix is a complex number of
modulus 1.
• A set G of invertible matrices forms a group if it is closed under taking
inverse and matrix multiplication, that is, if the inverse A−1 of any matrix A
in G belongs to the set G and the product AB of any two matrices A and B
in G belongs to G.
2.3.6 Exercises
1. Show that

   tr [A, B] = 0 ,

   where [A, B] = AB − BA is the commutator.
2. Show that the product of invertible matrices is an invertible matrix.
3. Show that the product of matrices with positive determinant is a matrix with positive determinant.
4. Show that the inverse of a matrix with positive determinant is a matrix with positive
determinant.
5. Show that GL(n, R) forms a group (called the general linear group).
6. Show that GL^+(n, R) (the set of real matrices with positive determinant) is a group (called the proper general linear group).
7. Show that the inverse of a matrix with negative determinant is a matrix with negative determinant.
8. Show that: a) the product of an even number of matrices with negative determinant is a matrix with positive determinant, b) the product of an odd number of matrices with negative determinant is a matrix with negative determinant.
9. Show that the product of matrices with unit determinant is a matrix with unit determinant.
10. Show that the inverse of a matrix with unit determinant is a matrix with unit determinant.
11. Show that SL(n, R) forms a group (called the special linear group or the unimodular group).
12. Show that the product of orthogonal matrices is an orthogonal matrix.
13. Show that the inverse of an orthogonal matrix is an orthogonal matrix.
14. Show that O(n) forms a group (called the orthogonal group).
15. Show that orthogonal matrices have determinant equal to either +1 or −1.
16. Show that the product of orthogonal matrices with unit determinant is an orthogonal
matrix with unit determinant.
17. Show that the inverse of an orthogonal matrix with unit determinant is an orthogonal matrix with unit determinant.
18. Show that SO(n) forms a group (called the proper orthogonal group or the rotation group).