Physics 70007, Fall 2009
Answers to HW set #2
October 16, 2009
1. (Sakurai 1.2)
Suppose that a 2×2 matrix X (not necessarily Hermitian or unitary) is written as
\[ X = a_0 + \vec{\sigma}\cdot\vec{a} \]
where a0 and a1,2,3 are numbers.
(a) How are a0 and ak (k = 1, 2, 3) related to tr(X) and tr(σk X)?
Since the matrices σk are traceless, while the trace of the 2×2 identity matrix is 2, we find tr(X) = 2a0, and so
\[ a_0 = \frac{1}{2}\,\mathrm{tr}(X) \]
To multiply in a factor of σk, we first write out the dot product in the definition of X explicitly:
\[ X = a_0 I + a_1\sigma_1 + a_2\sigma_2 + a_3\sigma_3 \]
where I represents the 2×2 identity matrix. Then we can use the fundamental relation σi σj = δij I + iεijk σk to multiply in a Pauli matrix: for example,
\[ \sigma_1 X = a_0\,\sigma_1 + a_1\,I + a_2\,(i\sigma_3) + a_3\,(-i\sigma_2) \]
When we take the trace, only the term involving I gives a non-zero contribution (as above): we obtain tr(σ1 X) = 2a1, or a1 = ½ tr(σ1 X). We obtain similar results for k = 2, 3; so in general
\[ a_k = \frac{1}{2}\,\mathrm{tr}(\sigma_k X) \]
(b) Obtain a0 and ak in terms of the matrix elements Xij .
We now write out the various traces obtained in part (a): using
\[ X = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} \]
and
\[ \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \]
we have
\[ a_0 = \frac{1}{2}\,\mathrm{tr}(X) = \frac{X_{11} + X_{22}}{2} \]
and
\[ \sigma_1 X = \begin{pmatrix} X_{21} & X_{22} \\ X_{11} & X_{12} \end{pmatrix} \;\Longrightarrow\; a_1 = \frac{1}{2}\,\mathrm{tr}(\sigma_1 X) = \frac{X_{21} + X_{12}}{2} \]
\[ \sigma_2 X = \begin{pmatrix} -iX_{21} & -iX_{22} \\ iX_{11} & iX_{12} \end{pmatrix} \;\Longrightarrow\; a_2 = \frac{1}{2}\,\mathrm{tr}(\sigma_2 X) = i\,\frac{X_{12} - X_{21}}{2} \]
\[ \sigma_3 X = \begin{pmatrix} X_{11} & X_{12} \\ -X_{21} & -X_{22} \end{pmatrix} \;\Longrightarrow\; a_3 = \frac{1}{2}\,\mathrm{tr}(\sigma_3 X) = \frac{X_{11} - X_{22}}{2} \]
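These trace formulas are easy to check numerically. The sketch below (using numpy; the test matrix X is an arbitrary non-Hermitian example of my choosing) extracts a0 and ak from traces and rebuilds X from the decomposition:

```python
# Verify a0 = tr(X)/2 and ak = tr(sigma_k X)/2 by rebuilding X from the
# coefficients. The test matrix X is an arbitrary (non-Hermitian) example.
import numpy as np

I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

X = np.array([[1 + 2j, 3 - 1j],
              [0.5 + 0.5j, -2 + 1j]])

a0 = np.trace(X) / 2
a = [np.trace(s @ X) / 2 for s in sigma]

# Since {I, sigma_1, sigma_2, sigma_3} is a basis for 2x2 complex matrices,
# the decomposition reconstructs X exactly (to rounding error).
X_rebuilt = a0 * I2 + sum(ak * s for ak, s in zip(a, sigma))
print(np.allclose(X, X_rebuilt))  # True
```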
2. (Sakurai 1.3)
Show that the determinant of a 2×2 matrix σ·a is invariant under
\[ \vec{\sigma}\cdot\vec{a} \;\to\; \vec{\sigma}\cdot\vec{a}' \equiv \exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)(\vec{\sigma}\cdot\vec{a})\,\exp\!\left(\frac{-i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) \]
Find a'_k in terms of ak when n̂ is in the positive z-direction and interpret your result.
Since the determinant of a product of matrices equals the product of the determinants of the individual matrices, what we are being asked to show is that
\[ \det\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)\,\det\exp\!\left(\frac{-i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = 1 \]
Now it would be sufficient to simply note that the matrices in question are inverses of each other (to see this, apply the Baker-Campbell-Hausdorff formula for e^A e^B, where we set B = −A and therefore all the commutators in the formula become zero), and so the above relation is trivially true. However, it turns out that each individual determinant equals one, all by itself:
\[ \det\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = 1 \]
Showing this is an instructive exercise, so we will work through it in detail. First we write out the exponential as a power series:
\[ \exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = I + \frac{1}{1!}\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) + \frac{1}{2!}\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)^{2} + \dots \]
(This defines what we mean by the exponential of a matrix.) Then we separate out the matrix and constant pieces of each term:
\[ \exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = I + \frac{1}{1!}\left(\frac{i\phi}{2}\right)^{1}(\vec{\sigma}\cdot\hat{n}) + \frac{1}{2!}\left(\frac{i\phi}{2}\right)^{2}(\vec{\sigma}\cdot\hat{n})^{2} + \dots \]
Each term involves a power of σ·n̂. We can simplify these greatly by noting that (σ·n̂)² = I:
\[ (\vec{\sigma}\cdot\hat{n})^{2} = \sigma_i n_i\,\sigma_j n_j = n_i n_j\,(\delta_{ij} I + i\varepsilon_{ijk}\sigma_k) = (\hat{n}\cdot\hat{n})\,I + 0 = I \]
(in the third equality, the product of a term symmetric in i and j and a corresponding anti-symmetric term gives zero). So, for an arbitrary power, we have
\[ (\vec{\sigma}\cdot\hat{n})^{2j} = \left[(\vec{\sigma}\cdot\hat{n})^{2}\right]^{j} = I \qquad\text{and}\qquad (\vec{\sigma}\cdot\hat{n})^{2j+1} = \vec{\sigma}\cdot\hat{n} \]
This lets us collect the even and odd powers in our power series separately:
\[ \exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = I\left(1 + \frac{1}{2!}\left(\frac{i\phi}{2}\right)^{2} + \frac{1}{4!}\left(\frac{i\phi}{2}\right)^{4} + \dots\right) + (\vec{\sigma}\cdot\hat{n})\left(\frac{1}{1!}\left(\frac{i\phi}{2}\right)^{1} + \frac{1}{3!}\left(\frac{i\phi}{2}\right)^{3} + \dots\right) \]
\[ = I\left(1 - \frac{1}{2!}\left(\frac{\phi}{2}\right)^{2} + \frac{1}{4!}\left(\frac{\phi}{2}\right)^{4} - \dots\right) + i\,(\vec{\sigma}\cdot\hat{n})\left(\frac{1}{1!}\left(\frac{\phi}{2}\right)^{1} - \frac{1}{3!}\left(\frac{\phi}{2}\right)^{3} + \dots\right) \]
\[ = I\cos\frac{\phi}{2} + i\,(\vec{\sigma}\cdot\hat{n})\sin\frac{\phi}{2} \]
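This closed form can be checked against a direct (truncated) summation of the defining power series. A numerical sketch; the angles chosen for n̂ and φ below are arbitrary:

```python
# Compare a truncated power series for exp(i sigma.n phi/2) against the
# closed form cos(phi/2) I + i sin(phi/2) (sigma.n). Angles are arbitrary.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

theta, psi = 0.7, 1.2   # polar/azimuthal angles defining the unit vector n
n = [np.sin(theta) * np.cos(psi), np.sin(theta) * np.sin(psi), np.cos(theta)]
sn = sum(ni * si for ni, si in zip(n, sigma))
phi = 2.3

# Truncated exponential series: I + A + A^2/2! + ... with A = i (sigma.n) phi/2
A = 1j * sn * phi / 2
series = np.eye(2, dtype=complex)
term = np.eye(2, dtype=complex)
for k in range(1, 30):
    term = term @ A / k
    series = series + term

closed = np.cos(phi / 2) * np.eye(2) + 1j * np.sin(phi / 2) * sn
print(np.allclose(series, closed))             # True
print(np.isclose(np.linalg.det(closed), 1.0))  # True: unit determinant
```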
At this point, the simplest way to take the determinant is to write out the matrix explicitly: writing c ≡ cos(φ/2) and s ≡ sin(φ/2), we have
\[ \exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = \begin{pmatrix} c + isn_3 & s\,(in_1 + n_2) \\ s\,(in_1 - n_2) & c - isn_3 \end{pmatrix} \]
and so
\[ \det\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = c^2 + s^2 n_3^2 - s^2\left(-n_1^2 - n_2^2\right) = c^2 + s^2\,(\hat{n}\cdot\hat{n}) = \cos^2\frac{\phi}{2} + \sin^2\frac{\phi}{2} = 1. \]
Now we take n̂ = ẑ and determine a' in terms of a. Here (n1, n2, n3) = (0, 0, 1), so
\[ \vec{\sigma}\cdot\vec{a}' = \exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)(\vec{\sigma}\cdot\vec{a})\,\exp\!\left(\frac{-i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) \]
\[ = \begin{pmatrix} c + is & 0 \\ 0 & c - is \end{pmatrix}\begin{pmatrix} a_3 & a_1 - ia_2 \\ a_1 + ia_2 & -a_3 \end{pmatrix}\begin{pmatrix} c - is & 0 \\ 0 & c + is \end{pmatrix} \]
\[ = \begin{pmatrix} a_3\,(c+is)(c-is) & (a_1 - ia_2)(c+is)^2 \\ (a_1 + ia_2)(c-is)^2 & -a_3\,(c-is)(c+is) \end{pmatrix} = \begin{pmatrix} a_3 & (a_1 - ia_2)\,e^{i\phi} \\ (a_1 + ia_2)\,e^{-i\phi} & -a_3 \end{pmatrix} \]
We can now use our work from part (b) of the first problem to determine a'. We call this matrix X and write it as a'_0 + σ·a'; then (noting that X12 = X21*)
\[ a_0' = \frac{X_{11} + X_{22}}{2} = 0 \]
\[ a_1' = \frac{1}{2}\,(X_{12} + X_{21}) = \mathrm{Re}\,X_{12} = a_1\cos\phi + a_2\sin\phi \]
\[ a_2' = \frac{i}{2}\,(X_{12} - X_{21}) = -\mathrm{Im}\,X_{12} = -a_1\sin\phi + a_2\cos\phi \]
\[ a_3' = \frac{1}{2}\,(X_{11} - X_{22}) = a_3 \]
Thus the vector a' ends up being the vector a rotated around the z-axis by an angle φ. It is reasonable to assume (and is in fact the case) that an arbitrary n̂ will lead to a rotation of a around the n̂-axis. This result will become more meaningful once we get into rotations and angular momentum in chapter 3 of Sakurai.
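The rotation interpretation can also be confirmed numerically for n̂ = ẑ. A sketch; the vector a and angle φ below are arbitrary choices:

```python
# For n = z-hat, conjugating sigma.a by exp(i sigma_3 phi/2) should rotate
# (a1, a2) by the angle phi as derived above; a and phi are arbitrary.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

a = np.array([1.0, -0.4, 0.8])
phi = 0.9
sa = sum(ai * si for ai, si in zip(a, sigma))

# U = exp(i sigma_3 phi/2) = cos(phi/2) I + i sin(phi/2) sigma_3
U = np.cos(phi / 2) * np.eye(2) + 1j * np.sin(phi / 2) * sigma[2]
sa_prime = U @ sa @ U.conj().T

# Extract the components a'_k = tr(sigma_k sa') / 2 (real since sa' is Hermitian)
a_prime = np.real([np.trace(s @ sa_prime) / 2 for s in sigma])
expected = np.array([a[0] * np.cos(phi) + a[1] * np.sin(phi),
                     -a[0] * np.sin(phi) + a[1] * np.cos(phi),
                     a[2]])
print(np.allclose(a_prime, expected))  # True
```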
3. (Sakurai 1.8)
Using the orthonormality of |+⟩ and |−⟩, prove
\[ [S_i, S_j] = i\varepsilon_{ijk}\hbar S_k, \qquad \{S_i, S_j\} = \frac{\hbar^2}{2}\,\delta_{ij} \]
where
\[ S_x = \frac{\hbar}{2}\left(|+\rangle\langle -| + |-\rangle\langle +|\right) \]
\[ S_y = \frac{i\hbar}{2}\left(-|+\rangle\langle -| + |-\rangle\langle +|\right) \]
\[ S_z = \frac{\hbar}{2}\left(|+\rangle\langle +| - |-\rangle\langle -|\right). \]
The properties of the basis |+⟩, |−⟩ which we will use are orthonormality:
\[ \langle +|+\rangle = \langle -|-\rangle = 1, \qquad \langle +|-\rangle = \langle -|+\rangle = 0 \]
and completeness:
\[ |+\rangle\langle +| + |-\rangle\langle -| = I. \]
An efficient way to do the problem is to first write out all possible products of two spin operators. Just to give you the idea, I'll write out only the ones which don't involve Sz:
\[ S_x^2 = \frac{\hbar}{2}\cdot\frac{\hbar}{2}\left(|+\rangle\langle -| + |-\rangle\langle +|\right)\left(|+\rangle\langle -| + |-\rangle\langle +|\right) \]
\[ = \frac{\hbar^2}{4}\left(|+\rangle\langle -|+\rangle\langle -| + |+\rangle\langle -|-\rangle\langle +| + |-\rangle\langle +|+\rangle\langle -| + |-\rangle\langle +|-\rangle\langle +|\right) \]
\[ = \frac{\hbar^2}{4}\left(0 + |+\rangle\langle +| + |-\rangle\langle -| + 0\right) = \frac{\hbar^2}{4}\,I \]
\[ S_x S_y = \frac{\hbar}{2}\cdot\frac{i\hbar}{2}\left(|+\rangle\langle -| + |-\rangle\langle +|\right)\left(-|+\rangle\langle -| + |-\rangle\langle +|\right) \]
\[ = i\,\frac{\hbar^2}{4}\left(-|+\rangle\langle -|+\rangle\langle -| + |+\rangle\langle -|-\rangle\langle +| - |-\rangle\langle +|+\rangle\langle -| + |-\rangle\langle +|-\rangle\langle +|\right) \]
\[ = i\,\frac{\hbar^2}{4}\left(0 + |+\rangle\langle +| - |-\rangle\langle -| + 0\right) = i\,\frac{\hbar}{2}\,S_z \]
\[ S_y S_x = \frac{i\hbar}{2}\cdot\frac{\hbar}{2}\left(-|+\rangle\langle -| + |-\rangle\langle +|\right)\left(|+\rangle\langle -| + |-\rangle\langle +|\right) \]
\[ = i\,\frac{\hbar^2}{4}\left(-|+\rangle\langle -|+\rangle\langle -| - |+\rangle\langle -|-\rangle\langle +| + |-\rangle\langle +|+\rangle\langle -| + |-\rangle\langle +|-\rangle\langle +|\right) \]
\[ = i\,\frac{\hbar^2}{4}\left(0 - |+\rangle\langle +| + |-\rangle\langle -| + 0\right) = -i\,\frac{\hbar}{2}\,S_z \]
\[ S_y^2 = \frac{i\hbar}{2}\cdot\frac{i\hbar}{2}\left(-|+\rangle\langle -| + |-\rangle\langle +|\right)\left(-|+\rangle\langle -| + |-\rangle\langle +|\right) \]
\[ = -\frac{\hbar^2}{4}\left(|+\rangle\langle -|+\rangle\langle -| - |+\rangle\langle -|-\rangle\langle +| - |-\rangle\langle +|+\rangle\langle -| + |-\rangle\langle +|-\rangle\langle +|\right) \]
\[ = -\frac{\hbar^2}{4}\left(0 - |+\rangle\langle +| - |-\rangle\langle -| + 0\right) = \frac{\hbar^2}{4}\,I \]
With this work done, we can quickly write down the commutators and anti-commutators:
\[ [S_x, S_x] = [S_y, S_y] = 0 \quad\text{(trivially)} \]
\[ [S_x, S_y] = S_x S_y - S_y S_x = i\,\frac{\hbar}{2}\,S_z - \left(-i\,\frac{\hbar}{2}\,S_z\right) = i\hbar S_z \]
\[ [S_y, S_x] = -[S_x, S_y] = -i\hbar S_z \]
\[ \{S_x, S_x\} = S_x S_x + S_x S_x = \frac{\hbar^2}{2}\,I \]
\[ \{S_x, S_y\} = S_x S_y + S_y S_x = i\,\frac{\hbar}{2}\,S_z + \left(-i\,\frac{\hbar}{2}\,S_z\right) = 0 \]
\[ \{S_y, S_x\} = \{S_x, S_y\} = 0 \]
\[ \{S_y, S_y\} = S_y S_y + S_y S_y = \frac{\hbar^2}{2}\,I \]
These agree with the general formulas we are trying to establish. The relations involving Sz follow
analogously.
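In the matrix representation Sk = (ℏ/2)σk, which reproduces the outer-product definitions above in the |+⟩, |−⟩ basis, both sets of relations (including the ones involving Sz) can be checked in one loop. A sketch with ℏ set to 1:

```python
# Check [S_i, S_j] = i hbar eps_ijk S_k and {S_i, S_j} = (hbar^2/2) delta_ij I
# for all i, j, using S_k = (hbar/2) sigma_k; hbar = 1 for convenience.
import numpy as np

hbar = 1.0
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
S = [hbar / 2 * s for s in sigma]

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

all_ok = True
for i in range(3):
    for j in range(3):
        comm = S[i] @ S[j] - S[j] @ S[i]
        anti = S[i] @ S[j] + S[j] @ S[i]
        comm_expected = 1j * hbar * sum(eps[i, j, k] * S[k] for k in range(3))
        anti_expected = (hbar**2 / 2) * (i == j) * np.eye(2)
        all_ok = all_ok and np.allclose(comm, comm_expected) \
                        and np.allclose(anti, anti_expected)
print(all_ok)  # True
```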
4. (Sakurai 1.9)
Construct |S⃗·n̂; +⟩ such that
\[ \vec{S}\cdot\hat{n}\,\left|\vec{S}\cdot\hat{n};\,+\right\rangle = \frac{\hbar}{2}\left|\vec{S}\cdot\hat{n};\,+\right\rangle \]
where n̂ is a unit vector with polar angle β and azimuthal angle α with respect to the z-axis.
The vector n̂ has Cartesian components
\[ \hat{n} = (\sin\beta\cos\alpha,\ \sin\beta\sin\alpha,\ \cos\beta) \]
so we can construct the matrix for the operator S⃗·n̂:
\[ \vec{S}\cdot\hat{n} = \frac{\hbar}{2}\left(\sigma_x n_x + \sigma_y n_y + \sigma_z n_z\right) \]
\[ = \frac{\hbar}{2}\left[\sin\beta\cos\alpha\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + \sin\beta\sin\alpha\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} + \cos\beta\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\right] \]
\[ = \frac{\hbar}{2}\begin{pmatrix} \cos\beta & e^{-i\alpha}\sin\beta \\ e^{i\alpha}\sin\beta & -\cos\beta \end{pmatrix} \]
The eigenvalues of this matrix are found by solving det(S⃗·n̂ − λI) = 0:
\[ 0 = \left(\frac{\hbar}{2}\cos\beta - \lambda\right)\left(-\frac{\hbar}{2}\cos\beta - \lambda\right) - \left(\frac{\hbar}{2}\right)^{2}\sin^2\beta = -\left(\frac{\hbar}{2}\right)^{2} + \lambda^2 \]
yielding λ = ±ℏ/2. In this problem, we are interested in the normalized eigenket corresponding to the positive eigenvalue; writing it as |S⃗·n̂; +⟩ ≡ A|+⟩ + B|−⟩, we have
\[ \frac{\hbar}{2}\begin{pmatrix} \cos\beta & e^{-i\alpha}\sin\beta \\ e^{i\alpha}\sin\beta & -\cos\beta \end{pmatrix}\begin{pmatrix} A \\ B \end{pmatrix} = \frac{\hbar}{2}\begin{pmatrix} A \\ B \end{pmatrix} \]
The two equations contained in this matrix equation are redundant, so we can consider either one: for example, the top equation is
\[ (\cos\beta - 1)\,A + e^{-i\alpha}\sin\beta\; B = 0 \]
This gives us one condition on A and B; we need a second one to fix the actual values. This is given by the requirement that the ket be normalized:
\[ |A|^2 + |B|^2 = 1 \]
Taking the modulus of the first condition and solving for |B|, we find
\[ |B| = |A|\,\frac{1 - \cos\beta}{\sin\beta} \]
Substituting into the second condition and solving for |A| gives
\[ |A| = \cos\frac{\beta}{2} \]
where we have used the trigonometric identities sin β = 2 sin(β/2) cos(β/2) and 1 − cos β = 2 sin²(β/2). We choose the phase of A to be real and positive, i.e.
\[ A = \cos\frac{\beta}{2} \]
and then our first condition gives the value of B:
\[ B = e^{i\alpha}\,\frac{1 - \cos\beta}{\sin\beta}\,A = e^{i\alpha}\sin\frac{\beta}{2} \]
using our trig identities again. So the desired normalized eigenket is
\[ \left|\vec{S}\cdot\hat{n};\,+\right\rangle = \cos\frac{\beta}{2}\,|+\rangle + e^{i\alpha}\sin\frac{\beta}{2}\,|-\rangle \]
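As a quick numerical sanity check (with arbitrary sample angles), this ket is a unit-norm eigenvector of the matrix for S⃗·n̂ with eigenvalue +ℏ/2:

```python
# Check that (cos(beta/2), e^{i alpha} sin(beta/2)) is a normalized eigenket
# of S.n with eigenvalue +hbar/2; beta and alpha are arbitrary, hbar = 1.
import numpy as np

hbar = 1.0
beta, alpha = 1.1, 0.6
Sn = (hbar / 2) * np.array(
    [[np.cos(beta), np.exp(-1j * alpha) * np.sin(beta)],
     [np.exp(1j * alpha) * np.sin(beta), -np.cos(beta)]])

ket = np.array([np.cos(beta / 2), np.exp(1j * alpha) * np.sin(beta / 2)])

print(np.allclose(Sn @ ket, (hbar / 2) * ket))  # True: eigenvalue +hbar/2
print(np.isclose(np.linalg.norm(ket), 1.0))     # True: normalized
```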
5. (Sakurai 1.10)
The Hamiltonian operator for a two-state system is given by
\[ H = a\left(|1\rangle\langle 1| - |2\rangle\langle 2| + |1\rangle\langle 2| + |2\rangle\langle 1|\right) \]
where a is a number with the dimension of energy. Find the energy eigenvalues and the corresponding eigenkets (as linear combinations of |1⟩ and |2⟩).
The matrix of H is
\[ H = \begin{pmatrix} a & a \\ a & -a \end{pmatrix} \]
The sum of the eigenvalues λ1, λ2 of H equals its trace, and their product equals its determinant: i.e.
\[ \lambda_1 + \lambda_2 = 0, \qquad \lambda_1\lambda_2 = -2a^2 \]
which are easily solved to give λ1 = a√2, λ2 = −a√2.
For λ1 = a√2, we write the eigenket as c1|1⟩ + c2|2⟩; then
\[ a\begin{pmatrix} 1 - \sqrt{2} & 1 \\ 1 & -1 - \sqrt{2} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = 0 \]
so we have (1 − √2)c1 + c2 = 0, plus normalization: |c1|² + |c2|² = 1. Choosing c1 to be real and positive, we find for the normalized eigenket
\[ \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \frac{1}{\sqrt{4 - 2\sqrt{2}}}\begin{pmatrix} 1 \\ \sqrt{2} - 1 \end{pmatrix} \approx \begin{pmatrix} 0.924 \\ 0.383 \end{pmatrix} \]
Similarly, for λ2 = −a√2, we write the eigenket as d1|1⟩ + d2|2⟩; then
\[ a\begin{pmatrix} 1 + \sqrt{2} & 1 \\ 1 & -1 + \sqrt{2} \end{pmatrix}\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = 0 \]
so we have (1 + √2)d1 + d2 = 0, plus normalization: |d1|² + |d2|² = 1. Choosing d1 to be real and positive, we find for the normalized eigenket
\[ \begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = \frac{1}{\sqrt{4 + 2\sqrt{2}}}\begin{pmatrix} 1 \\ -\sqrt{2} - 1 \end{pmatrix} \approx \begin{pmatrix} 0.383 \\ -0.924 \end{pmatrix} \]
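Both eigenpairs can be confirmed with a standard eigensolver. A sketch with a set to 1; note that numpy's `eigh` returns eigenvalues in ascending order and fixes eigenvector phases only up to an overall sign, so we compare phase-independent component ratios:

```python
# Check lambda = +/- a sqrt(2) and the component ratios of the eigenkets
# for H = a(|1><1| - |2><2| + |1><2| + |2><1|), with a = 1.
import numpy as np

a = 1.0
H = a * np.array([[1.0, 1.0],
                  [1.0, -1.0]])

vals, vecs = np.linalg.eigh(H)   # ascending order: [-a*sqrt(2), +a*sqrt(2)]
print(np.allclose(vals, [-a * np.sqrt(2), a * np.sqrt(2)]))  # True

# Ratios c2/c1 = sqrt(2)-1 (for +a sqrt(2)) and d2/d1 = -(sqrt(2)+1)
# (for -a sqrt(2)) are independent of the overall sign convention.
ratio_plus = vecs[1, 1] / vecs[0, 1]    # column 1 <-> eigenvalue +a*sqrt(2)
ratio_minus = vecs[1, 0] / vecs[0, 0]   # column 0 <-> eigenvalue -a*sqrt(2)
print(np.isclose(ratio_plus, np.sqrt(2) - 1))      # True
print(np.isclose(ratio_minus, -(np.sqrt(2) + 1)))  # True
```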
6. (Sakurai 1.11)
A two-state system is characterized by the Hamiltonian
\[ H = H_{11}|1\rangle\langle 1| + H_{22}|2\rangle\langle 2| + H_{12}\left[|1\rangle\langle 2| + |2\rangle\langle 1|\right] \]
where H11, H22, and H12 are real numbers with the dimension of energy, and |1⟩ and |2⟩ are eigenkets of some observable (≠ H). Find the energy eigenkets and corresponding energy eigenvalues. Make sure that your answer makes good sense for H12 = 0.
The eigenvalues are found in the usual manner:
\[ 0 = \det(H - \lambda I) = \begin{vmatrix} H_{11} - \lambda & H_{12} \\ H_{12} & H_{22} - \lambda \end{vmatrix} \]
which gives, after a little algebra,
\[ \lambda_\pm = \frac{1}{2}\left[H_{11} + H_{22} \pm \sqrt{(H_{11} - H_{22})^2 + 4H_{12}^2}\right] \]
(If H12 = 0, the eigenvalues simplify to just H11 and H22, which makes sense since the Hamiltonian becomes diagonal w.r.t. the |1⟩, |2⟩ basis in this situation.)
The eigenkets then follow as usual (I won't bother to normalize them; doing so is not hard, but the resulting normalization constant is rather messy). For λ+, we write the eigenket as c1|1⟩ + c2|2⟩; then
\[ \begin{pmatrix} H_{11} & H_{12} \\ H_{12} & H_{22} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \lambda_+\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \]
Choosing c1 = 1, we find from the bottom equation that c2 = H12/(λ+ − H22), so the (unnormalized) eigenket is specified. Note that when H12 = 0 (and so λ+ = H11), then c2 becomes zero, and the eigenket is
\[ \begin{pmatrix} 1 \\ 0 \end{pmatrix} \]
as expected.
Similarly, for λ−, we write the eigenket as d1|1⟩ + d2|2⟩; then
\[ \begin{pmatrix} H_{11} & H_{12} \\ H_{12} & H_{22} \end{pmatrix}\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = \lambda_-\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} \]
Choosing d2 = 1, we find from the top equation that d1 = H12/(λ− − H11), so the (unnormalized) eigenket is specified. Note that when H12 = 0 (and so λ− = H22), then d1 becomes zero, and the eigenket is
\[ \begin{pmatrix} 0 \\ 1 \end{pmatrix} \]
as expected.
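The general formula and its H12 → 0 limit can be spot-checked numerically. A sketch; the matrix elements below are arbitrary sample values:

```python
# Check lambda_pm = (H11 + H22 +/- sqrt((H11-H22)^2 + 4 H12^2))/2 and the
# unnormalized eigenkets derived above; sample values are arbitrary.
import numpy as np

H11, H22, H12 = 1.3, -0.4, 0.7
H = np.array([[H11, H12],
              [H12, H22]])

disc = np.sqrt((H11 - H22) ** 2 + 4 * H12 ** 2)
lam_plus = (H11 + H22 + disc) / 2
lam_minus = (H11 + H22 - disc) / 2

# eigvalsh returns eigenvalues in ascending order
print(np.allclose(np.linalg.eigvalsh(H), [lam_minus, lam_plus]))  # True

c = np.array([1.0, H12 / (lam_plus - H22)])   # unnormalized eigenket, lambda_+
print(np.allclose(H @ c, lam_plus * c))       # True

d = np.array([H12 / (lam_minus - H11), 1.0])  # unnormalized eigenket, lambda_-
print(np.allclose(H @ d, lam_minus * d))      # True
```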